
ISSN 1940-1884

International Handbook of Academic Research and Teaching

2009 Proceedings Volume 7

Published by: Intellectbase International Consortium.


INTELLECTBASE INTERNATIONAL CONSORTIUM
Academic Conference, Atlanta, GA, Oct. 15-17, 2009
Intellectual Perspectives & Multi-Disciplinary Foundations

Conference Proceedings Fall 2009

PROGRAM COMMITTEE

Dr. David King, Conference Co-Chair
Ms. Belinda Krigel, Conference Co-Chair

CONFERENCE ORGANIZERS & INTERNATIONAL AFFILIATES

United States: Ms. Sylvia Carter, Ms. Tiara Walker, Mr. Ben Murray, Ms. Loria Hampton
Australia: Mrs. Karina Dyer, Mr. Graeme William, Ms. Michelle Joanne, Mrs. Wendy Morrell
Europe: Mr. Kevin Kofi, Mr. Benjamin Effa, Ms. Christina Maame, Mr. Kenneth Obeng

ACADEMIC ASSOCIATES

Dr. Nitya Karmakar, Australian Affiliate
Dr. Danka Radulovic, European Affiliate

Dr. Sloan T. Letman III, United States Affiliate
Dr. Peter Ross, United States Affiliate

www.intellectbase.org


Published by Intellectbase International Consortium (IIC) Conference Committee: Intellectbase International Consortium, 1615 Seventh Avenue North, Nashville, TN 37208, USA

ISSN (Print): 1940-1876 – Issued by the Library of Congress, Washington DC, USA
ISSN (CD-ROM): 1940-1884 – Issued by the Library of Congress, Washington DC, USA

©2009. This volume is copyright to the Intellectbase International Consortium Academic Conferences. Apart from use as permitted under the Copyright Act of 1976, no part may be reproduced by any process without prior written permission.

EXECUTIVE EDITORIAL BOARD (EEB) AND REVIEWERS TASK PANEL (RTP)

Dr. David White, Roosevelt University, USA
Dr. Dennis Taylor, RMIT University, Australia
Dr. Danka Radulovic, University of Belgrade, Serbia
Dr. Harrison C. Hartman, University of Georgia, USA
Dr. Sloan T. Letman, III, American Intercontinental University, USA
Dr. Sushil Misra, Concordia University, Canada
Dr. Jiri Strouhal, University of Economics-Prague, Czech Republic
Dr. Avis Smith, City College of Technology, USA
Dr. Joel Jolayemi, Tennessee State University, USA
Dr. Smaragda Papadopoulou, University of Ioannina, Greece
Dr. Xuefeng Wang, Taiyun Normal University, China
Dr. Burnette Hamil, Mississippi State University, USA
Dr. Jeanne Kuhler, Auburn University, USA
Dr. Alejandro Flores Castro, Universidad de Pacifico, Peru
Dr. Babalola J. Ogunkola, Olabisi Onabanjo University, Nigeria
Dr. Robert Robertson, Southern Utah University, USA
Dr. Debra Shiflett, American Intercontinental University, USA
Dr. Sonal Chawla, Panjab University, India
Dr. Cheaseth Seng, RMIT University, Australia
Dr. Jianjun Yin, Jackson State University, USA
Dr. R. Ivan Blanco, Texas State University – San Marcos, USA
Dr. Shikha Vyas-Doorgapersad, North-West University, South Africa
Dr. Tahir Husain, Memorial University of Newfoundland, Canada
Dr. James D. Williams, Kutztown University, USA
Dr. Jifu Wang, University of Houston Victoria, USA
Dr. Tehmina Khan, RMIT University, Australia
Dr. Janet Forney, Piedmont College, USA
Dr. Werner Heyns, Savell Bird & Axon, UK
Dr. Adnan Bahour, Zagazig University, Egypt
Dr. Mike Thomas, Humboldt State University, USA
Dr. Rodney Davis, Troy University, USA
Dr. William Ebomoyi, Chicago State University, USA
Dr. Mumbi Kariuki, Nipissing University, Canada
Dr. Khalid Alrawi, Al-Ain University of Science and Technology, United Arab Emirates


EXECUTIVE EDITORIAL BOARD (EEB) AND REVIEWERS TASK PANEL (RTP) (Continued)

Dr. Mohsen Naser-Tavakolian, San Francisco State University, USA
Dr. Joselina Cheng, University of Central Oklahoma, USA
Dr. Rafiuddin Ahmed, James Cook University, Australia
Dr. Natalie Housel, Tennessee State University, USA
Dr. Regina Schaefer, University of La Verne, USA
Dr. Nitya Karmakar, University of Western Sydney, Australia
Dr. Ademola Olatoye, Olabisi Onabanjo University, Nigeria
Dr. Anita King, University of South Alabama, USA
Dr. Dana Tesone, University of Central Florida, USA
Dr. Lloyd V. Dempster, Texas A & M University – Kingsville, USA
Dr. Farhad Simyar, Chicago State University, USA
Dr. Bijesh Tolia, Chicago State University, USA
Dr. John O'Shaughnessy, San Francisco State University, USA
Dr. John Elson, National University, USA
Dr. Stephen Kariuki, Nipissing University, Canada
Dr. Demi Chung, University of Sydney, Australia
Dr. Rose Mary Newton, University of Alabama, USA
Dr. James (Jim) Robbins, Trinity Washington University, USA
Dr. Mahmoud Al-Dalahmeh, University of Wollongong, Australia
Dr. Jeffrey (Jeff) Kim, University of Washington, USA
Dr. Shahnawaz Muhammed, Fayetteville State University, USA
Dr. Dorothea Gaulden, Sensible Solutions, USA
Dr. Brett Sims, Grambling State University, USA
Dr. Gerald Marquis, Tennessee State University, USA
Dr. Frank Tsui, Southern Polytechnic State University, USA
Ms. Katherine Leslie, American Intercontinental University, USA
Dr. John Tures, LaGrange College, USA
Dr. David Davis, The University of West Florida, USA
Dr. Mary Montgomery, Jacksonville State University, USA
Dr. Peter Ross, Mercer University, USA
Dr. Frank Cheng, Central Michigan University, USA
Dr. Van Reidhead, University of Texas-Pan American, USA
Dr. Vera Lim Mei-Lin, The University of Sydney, Australia
Dr. Denise Richardson, Bluefield State College, USA
Dr. Robin Latimer, Lamar University, USA
Dr. Reza Vaghefi, University of North Florida, USA
Ms. Alison Duggins, American Intercontinental University, USA
Dr. Jeffrey Siekpe, Tennessee State University, USA
Dr. Michael Alexander, University of Arkansas at Monticello, USA
Dr. Greg Gibbs, St. Bonaventure University, USA
Dr. Kehinde Alebiosu, Olabisi Onabanjo University, Nigeria
Dr. Mike Rippy, Troy University, USA


EXECUTIVE EDITORIAL BOARD (EEB) AND REVIEWERS TASK PANEL (RTP) (Continued)

Dr. Gina Pipoli de Azambuja, Universidad de Pacifico, Peru
Dr. Steven Watts, Pepperdine University, USA
Dr. Andy Ju An Wang, Southern Polytechnic State University, USA
Dr. Ada Anyamene, Nnamdi Azikiwe University, Nigeria
Dr. Edilberto Raynes, Tennessee State University, USA
Dr. Nancy Miller, Governors State University, USA
Dr. Dobrivoje Radovanovic, University of Belgrade, Serbia
Dr. David F. Summers, University of Houston-Victoria, USA
Dr. George Romeo, Rowan University, USA
Dr. Robert Kitahara, Troy University – Southeast Region, USA
Dr. William Root, Augusta State University, USA
Dr. Brandon Hamilton, Hamilton's Solutions, USA
Dr. Natalie Weathers, Philadelphia University, USA
Dr. William Cheng, Troy University, USA
Dr. Linwei Niu, Claflin University, USA
Dr. Taida Kelly, Governors State University, USA
Dr. Nesa L'Abbe Wu, Eastern Michigan University, USA
Dr. Denise de la Rosa, Grand Valley State University, USA
Dr. Rena Ellzy, Tennessee State University, USA
Dr. Kimberly Johnson, Auburn University Montgomery, USA
Dr. Kathleen Quinn, Louisiana State University, USA
Dr. Sameer Vaidya, Texas Wesleyan University, USA
Dr. Josephine Ebomoyi, Northwestern Memorial Hospital, USA
Dr. Pamela Guimond, Governors State University, USA
Dr. Douglas Main, Eastern New Mexico University, USA
Dr. Vivian Kirby, Kennesaw State University, USA
Dr. Sonya Webb, Montgomery Public Schools, USA
Dr. Randall Allen, Southern Utah University, USA
Dr. Angela Williams, Alabama A&M University, USA
Dr. Claudine Jaenichen, Chapman University, USA
Dr. Carolyn Spillers Jewell, Fayetteville State University, USA
Dr. Richard Dane Holt, Eastern New Mexico University, USA
Dr. Kingsley Harbor, Jacksonville State University, USA
Dr. Barbara-Leigh Tonelli, Coastline Community College, USA
Dr. Barbara Mescher, University of Sydney, Australia
Dr. William J. Carnes, Metropolitan State College of Denver, USA
Dr. Chris Myers, Texas A & M University – Commerce, USA
Dr. Faith Anyachebelu, Nnamdi Azikiwe University, Nigeria
Dr. Kevin Barksdale, Union University, USA
Dr. Donna Cooner, Colorado State University, USA
Dr. Michael Campbell, Florida A&M University, USA
Dr. Kenton Fleming, Southern Polytechnic State University, USA


EXECUTIVE EDITORIAL BOARD (EEB) AND REVIEWERS TASK PANEL (RTP) (Continued)

Dr. Thomas Griffin, Nova Southeastern University, USA
Dr. Zoran Ilic, University of Belgrade, Serbia
Dr. James N. Holm, University of Houston-Victoria, USA
Dr. Edilberto A. Raynes, Tennessee State University, USA
Dr. Richard Dane Holt, Veterans' Administration, USA
Dr. Cerissa Stevenson, Colorado State University, USA
Dr. Rhonda Holt, New Mexico Christian Children's Home, USA
Dr. Donna Stringer, University of Houston-Victoria, USA
Dr. Yu-Wen Huang, Spalding University, USA
Dr. Lesley M. Mace, Auburn University Montgomery, USA
Dr. Christian V. Fugar, Dillard University, USA
Dr. Cynthia Summers, University of Houston-Victoria, USA
Dr. John M. Kagochi, University of Houston-Victoria, USA
Dr. Barbara-Leigh Tonelli, Coastline Community College, USA
Dr. Yong-Gyo Lee, University of Houston-Victoria, USA
Dr. Rehana Whatley, Oakwood University, USA
Dr. George Mansour, DeVry College of NY, USA
Dr. Jianjun Yin, Jackson State University, USA
Dr. Peter Miller, Indiana Wesleyan University, USA
Dr. Carolyn S. Payne, Nova Southeastern University, USA
Dr. Ted Mitchell, University of Nevada, USA
Dr. Veronica Paz, Nova Southeastern University, USA
Dr. Alma Mintu-Wimsatt, Texas A & M University – Commerce, USA
Dr. Terence Perkins, Veterans' Administration, USA
Dr. Liz Mulig, University of Houston-Victoria, USA
Dr. Dev Prasad, University of Massachusetts Lowell, USA
Dr. Robert R. O'Connell Jr., JSA Healthcare Corporation, USA
Dr. Kong-Cheng Wong, Governors State University, USA
Dr. P.N. Okorji, Nnamdi Azikiwe University, Nigeria
Dr. Azene Zenebe, Bowie State University, USA
Dr. James Ellzy, Tennessee State University, USA
Dr. Sandra Davis, The University of West Florida, USA
Dr. Padmini Banerjee, Delaware State University, USA
Dr. Yvonne Ellis, Columbus State University, USA
Dr. Aditi Mitra, University of Colorado, USA
Dr. Elizabeth Kunnu, Tennessee State University, USA
Dr. Myna German, Delaware State University, USA

Intellectbase International Consortium and the Conference Program Committee express their sincere thanks to the following sponsors:

 The Ellzy Foundation
 The King Foundation
 Tennessee State University (TSU)
 International Institute of Academic Research (IIAR)

PREFACE

Intellectbase International Consortium (IIC) is a professional and academic organization dedicated to advancing and encouraging quantitative, qualitative, hybrid and triangulated research practices. This volume contains articles presented at the Fall 2009 Intellectbase International Consortium Conference in Atlanta, GA, USA, Oct. 15-17.

The conference provides an open forum for Academics, Scientists, Researchers, Engineers and Practitioners from a wide range of research disciplines. It is the seventh volume produced in a unique, peer-reviewed multi-disciplinary format and intellectual foundation (see back cover of the proceedings).

Intellectbase International Consortium is responsible for publishing innovative and refereed research work on the following hard and soft systems related themes – Business, Engineering, Science, Technology, Management, Administration, Political and Social (BESTMAPS). The scope of the proceedings (IHART) includes: literature reviews and critiques, data collection and analysis, data evaluation and merging, research design and development, hypothesis-based creativity and reliable data interpretation.

The themes of the proceedings relate to pedagogy, research methodologies, organizational practice, ethics, accounting, management, leadership, policy and political issues, health-care systems, engineering, social psychology, eBusiness, marketing, technology and information science. Intellectbase International Consortium promotes broader intellectual resources and the exchange of ideas among global research professionals through a collaborative process.

To accomplish research collaboration, knowledge sharing and transfer, Intellectbase is dedicated to publishing a range of refereed academic journals, book chapters and conference proceedings, as well as sponsoring several annual academic conferences globally.

Senior, middle and junior level scholars are invited to participate and contribute one or more articles to the Intellectbase International conferences. Intellectbase welcomes and encourages the active participation of all researchers seeking to broaden their horizons and share experiences on new research challenges, research findings and state-of-the-art solutions.

SCOPE & MISSION

 Build and stimulate intellectual interrelationships among individuals and institutions that have an interest in the research discipline.

 Promote the collaboration of a diverse group of intellectuals and professionals worldwide.

 Bring together researchers, practitioners, academicians, and scientists across research disciplines globally – Australia, Europe, Africa, North America, South America and Asia.

 Support governmental, organizational and professional research that will enhance overall knowledge, innovation and creativity.

 Present resources and incentives to existing and incoming scholars who are, or plan to become, effective researchers or experts in a global research setting.

 Promote and publish professional and scholarly journals, handbooks, book chapters and other forms of refereed publications in diversified research disciplines.

 Plan, organize, promote, and present educational prospects – conferences, workshops, colloquiums, conventions – for global researchers.


LIST OF AUTHORS

Last Name First Name Institution State Country

Adsavakulchai S. University of the Thai Chamber of Commerce Thailand

Alderman Betsy B. University of Tennessee at Chattanooga TN USA

Amiri Shahram Stetson University FL USA

Anayet K. Multimedia University Malaysia

Bagot-Allen Donnette Judy Piece, Montserrat, BWI

Baramichai M. University of the Thai Chamber of Commerce Thailand

Battista David Kennesaw State University GA USA

Bauer Ryan Stetson University FL USA

Blake Laura Mitchell College and Pace University CT and NY USA

Bolen Yvette Athens State University AL USA

Boonmanang N. University of the Thai Chamber of Commerce Thailand

Broadway S. Camille University of Texas at Arlington TX USA

Brown Wayne Florida Institute of Technology FL USA

Buck Kathy Athens State University AL USA

Bunger Alan Tennessee State University TN USA

Campbell Michael M. Florida A&M University FL USA

Cardenas Tina Y. Paine College GA USA

Carnes William J. Metropolitan State College of Denver CO USA

Chandler Prentice Athens State University AL USA

Channell Linda Jackson State University MS USA

Cheng William Troy University Global Campus USA

Colon Eileen J. Western Carolina University NC USA

Cowan Wendy Athens State University AL USA

Davis Rodney Troy University AL USA

Davis Dana Tennessee State University TN USA

Dhawan Sunaina Tennessee State University TN USA

Edwards Matthew Nipissing University ON Canada

Eyanson Jeff Azusa Pacific University CA USA

Ferrer Edgar Turabo University Puerto Rico USA

Fleming Kenton Southern Polytechnic State University GA USA


LIST OF AUTHORS (CONTINUED)

Last Name First Name Institution State Country

Griffith Brian A. Vanderbilt University TN USA

Haddad Hisham M. Kennesaw State University GA USA

Harbor Kingsley O. Jacksonville State University AL USA

Harke Swen Stetson University FL USA

Harney Suzy University of the Virgin Islands VI USA

Harper Jr. Ralph Florida Institute of Technology FL USA

Hartman Harrison C. University of Georgia GA USA

Heatherly Ben Brookhill Elementary School AL USA

Heshizer Brian Georgia Southwestern State University GA USA

Hossen J. Multimedia University Malaysia

Howell Curtis C. Georgia Southwestern State University GA USA

Hussey Jim University of South Carolina SC USA

Hyde Lisa Athens State University AL USA

Ishak Norzamri bin Multimedia University Malaysia

Johnson Kimberly Auburn University Montgomery AL USA

Jones Michael D. Kirkwood Community College IA USA

Juthamanee K. Boontavorn Co.Ltd. Thailand

Kadir Mohd Rizuan Abd Universiti Tenaga Nasional Malaysia

Kargbo Ibrahim Coppin State University MD USA

Kariuki Mumbi Nipissing University ON Canada

Kariuki Stephen Nipissing University ON Canada

Kazarian William Howard Hawaii Pacific University HI USA

Kitti S. University of the Thai Chamber of Commerce Thailand

Lanaria Lois Kutztown University PA USA

Latham Vickie Jackson State University MS USA

Lawrence Malia S. Azusa Pacific University CA USA

Lee LaNedra Tennessee State University TN USA

Lewis Christine W. Auburn University Montgomery AL USA

Linna Ken Auburn University Montgomery AL USA

Liu Binjie University of Shanghai for Science and Technology China


LIST OF AUTHORS (CONTINUED)

Last Name First Name Institution State Country

Mace Lesley Auburn University Montgomery AL USA

Mak Simon S. Southern Methodist University TX USA

McKay Joane W. University of the Virgin Islands VI USA

Milrod Lucas University of Tennessee at Chattanooga TN USA

Mintah Joseph K. Azusa Pacific University CA USA

Moneyham Linda University of Alabama AL USA

Chanput S. University of the Thai Chamber of Commerce Thailand

Chantanabubpha Patcharee University of the Thai Chamber of Commerce Thailand

Perkins Stephynie C. University of North Florida FL USA

Phongkusolchit Kiattisak University of Tennessee at Martin TN USA

Radojevich-Kelley Nina Metropolitan State College of Denver CO USA

Rahman A. Multimedia University Malaysia

Ramli Juliana Anis Bte Universiti Tenaga Nasional Malaysia

Raynes Edilberto A. Tennessee State University TN USA

Reid James Huntingdon College AL USA

Riyabuth K. University of the Thai Chamber of Commerce Thailand

Robertson Robert Saint Leo University FL USA

Ryan Thomas Nipissing University ON Canada

Scharer Kathleen University of South Carolina SC USA

Shrestha R. Asian Institute of Technology Thailand

Shugart Margaret Emory University GA USA

Smith Avis J. College of Technology NY USA

Surbaini Khairul Nizam Universiti Tenaga Nasional Malaysia

Sutcharitrungsee A. University of the Thai Chamber of Commerce Thailand

Szygenda Stephen Southern Methodist University TX USA

Tang Su University of Shanghai for Science and Technology China

Tavakoli Abbas University of South Carolina SC USA

Taylor Vivian Jackson State University MS USA

Theamsumrid E. University of the Thai Chamber of Commerce Thailand

Thomas Bruce Athens State University AL USA


LIST OF AUTHORS (CONTINUED)

Last Name First Name Institution State Country

Tseng L. P. Douglas Portland State University OR USA

Ueatrongchit Prawet University of the Thai Chamber of Commerce Thailand

Unni Ramprasad Portland State University OR USA

Varjavand Reza Saint Xavier University IL USA

Velasco Thomas Southern Illinois University Carbondale IL USA

Villacis González José University San Pablo-CEU Madrid Spain

Vogel Thomas K. Stetson University FL USA

Wachirathamrojn J. University of the Thai Chamber of Commerce Thailand

White David Roosevelt University IL USA

Williams James Kutztown University PA USA

Wiwatthanathorn T. University of the Thai Chamber of Commerce Thailand

Wu Chih­Wen National Chung Hsing University Taichung Taiwan

Yin Jianjun Jackson State University MS USA


LIST OF INSTITUTIONS, STATES AND COUNTRIES

Institution State Country

Asian Institute of Technology Thailand

Athens State University AL USA

Auburn University Montgomery AL USA

Azusa Pacific University CA USA

Boontavorn Co.Ltd. Thailand

Brookhill Elementary School AL USA

Coppin State University MD USA

Emory University GA USA

Florida A&M University FL USA

Florida Institute of Technology FL USA

Georgia Southwestern State University GA USA

Hawaii Pacific University HI USA

Huntingdon College AL USA

Jackson State University MS USA

Jacksonville State University AL USA

Judy Piece, Montserrat, BWI

Kennesaw State University GA USA

Kirkwood Community College IA USA

Kutztown University PA USA

Metropolitan State College of Denver CO USA

Mitchell College CT USA

Multimedia University Malaysia

National Chung Hsing University Taichung Taiwan

New York City College of Technology NY USA

Nipissing University ON Canada

Pace University NY USA

Paine College GA USA


LIST OF INSTITUTIONS, STATES AND COUNTRIES (CONTINUED)

Portland State University OR USA

Roosevelt University IL USA

Saint Leo University FL USA

Saint Xavier University IL USA

Southern Illinois University Carbondale IL USA

Southern Methodist University TX USA

Southern Polytechnic State University GA USA

Stetson University FL USA

Tennessee State University TN USA

Troy University Global Campus USA

Troy University AL USA

Turabo University Gurabo Puerto Rico

Universiti Tenaga Nasional Malaysia

University of Alabama AL USA

University of Georgia GA USA

University of North Florida FL USA

University of Shanghai for Science and Technology China

University of South Carolina SC USA

University of Tennessee at Chattanooga TN USA

University of Tennessee at Martin TN USA

University of Texas at Arlington TX USA

University of the Thai Chamber of Commerce Thailand

University of the Virgin Islands VI USA

University San Pablo-CEU Madrid Spain

Vanderbilt University TN USA

Western Carolina University NC USA


TABLE OF CONTENTS

LIST OF AUTHORS ...... i
LIST OF INSTITUTIONS, STATES AND COUNTRIES ...... v

SECTION 1: BUSINESS & MANAGEMENT

What is the Asian-American Consumer Behavior towards Green Marketing?
J.D. Williams and Lois Lanaria ...... 2

Economic Value Creation from Technology Entrepreneurship: A Comparative Analysis of Sales Growth of High-Technology Industries Listed in the Fortune 1000 Rankings from 2006-2009
Simon S. Mak and Stephen Szygenda ...... 20

Sustainable Development in Tourism Industry Context in Taiwan
Chih-Wen Wu ...... 26

Do Changes in Regulation have an Impact on the Number of Bank Failures?
Harrison C. Hartman ...... 31

An Unpublished Letter of Keynes and its Relevance for Macroeconomics: Classification System B22
José Villacís González ...... 39

Ten Big Emerging Markets and the Small Firm Effects
William Cheng ...... 51

An Examination of Empirical Relationship Between Investment Decisions and Capital Structure Decisions
Su Tang and Binjie Liu ...... 58

Part I – The Four Factors of Quality: Achieving the Circle of Acceptance and Satisfaction
Avis J. Smith ...... 69

Foundations of Work Motivation: An Historical Perspective on Work Motivation Theories
Kimberly Johnson and Christine W. Lewis ...... 73

Glass Ceilings and Gender Gaps: A Survey
Lesley Mace and Ken Linna ...... 84

Body Art: The Question of Hiring Employees with Visible Body Art
William J. Carnes and Nina Radojevich-Kelley ...... 96

United States versus Japan: Are there Myths Associated with Cross-cultural Sales Negotiations?
J.D. Williams ...... 102

Combinatorics in the Theory of Production
José Villacís González ...... 123

The U.S. Economic Crisis: Ideology versus Realities
Reza Varjavand ...... 133

NAFTA's Main Objectives Included the Achievement of Economic Growth & Development in the First Fifteen Years: Were These Goals Realized?
Michael M. Campbell ...... 137

Do Excessive IPO's and '' Drive or Hinder New Innovation?
Laura Blake ...... 138

The Performance of Pipeline System in the Supply Chain of Water for Industry in Thailand
J. Wachirathamrojn and S. Adsavakulchai ...... 144

Risk Management Framework for Agro-Food Supply Chain: A Case Study of Agro-Food Supply Chain in Thailand
T. Wiwatthanathorn and M. Baramichai ...... 145

Comparing Warehouse Management System between Retail and Wholesale Business in Thailand
S. Adsavakulchai and K. Juthamanee ...... 146

Web Application of Preventive Maintenance for Private Bus in Bangkok
S. Kitti and S. Adsavakulchai ...... 147

Agro-Food Supply Chain Management in Developing Countries
A. Sutcharitrungsee and M. Baramichai ...... 148

SECTION 2: SCIENCE & TECHNOLOGY

Establishing the Existence of Localized Structure using Variational Dynamics
Thomas K. Vogel ...... 150

Development of an Expert System for Gem Identification
Kiattisak Phongkusolchit and Tomas Velasco ...... 163

GEM and the Leptonic Width of the J(3097)
D. White ...... 178

Force-Modeling Theory: Melodic Motion and the Real-World Attributes of Tones
Michael D. Jones ...... 182

Current Methods for the Trace Analysis of Phenoxy Acid, Triazine and Phenyl Urea Herbicides in Water
Stephen Kariuki and Matthew Edwards ...... 199

Assessing Information Society Indicators: The Puerto Rico Case
Edgar Ferrer ...... 204

Information and Communication Technology Impact on Asia and the Pacific
Shahram Amiri, Swen Harke and Ryan Bauer ...... 208

The Impact of E-Technology on the Healthcare Management Environment
Ralph L. Harper and Wayne Brown ...... 214

Software Development Standards for Medical Devices: Evolution and Improvement
Hisham M. Haddad and David Battista ...... 222

Should Physical Therapists Consider Pulmonary Function in Asthmatic Children when Implementing an Aerobic Exercise Program?
Alan Bunger, Dana Davis, Sunaina Dhawan, LaNedra Lee and Edilberto A. Raynes ...... 231

GEM and the (2S) D. White ...... 239 A Novel Extended ANFIS: Application in a Control System J. Hossen, A. Rahman, K. Anayet ...... 244 Green office with Electronic Document System Technologies S. Chanput, Patcharee Chantanabubpha and S. Adsavakulchai ...... 246 Global Warming: Science or Ideology? Kenton Fleming ...... 247 Microcontroller for Automatic Microscope Slide P. Ueatrongchit ...... 248 The Correlation between Spectral Reflectance Data and Water Quality in Kung Krabaen Bay N. Boonmanang, E. Theamsumrid, K. Riyabuth, S. Adsavakulchai and R. Shrestha ...... 256

SECTION 3: EDUCATION & SOCIAL SCIENCES

AACSB Accreditation and the Homogeneity of the Business Educational Experience
Brian Heshizer and Curtis C. Howell ...... 258

Highly Qualified and Culturally Competent: Is It Too Much to Expect of Public School Teachers?
Rodney Davis ...... 265

Preservice Teachers' Awareness of Cyberbullying Issues
Mumbi Kariuki and Thomas Ryan ...... 273

Perceptions of Online and On-Campus Business Programs: Implications for Marketing Business Programs
Ramaprasad Unni and L.P. Douglas Tseng ...... 278

An Examination of the Careers of Adjunct Faculty in Higher Education Intellectual Curiosity – Migrant Laborers: How Adjunct Teaching Services are Utilized and Valued
William Howard Kazarian ...... 287

Self-Concept, Behavior and Citizenship Status: Relationships and Differences between Adolescents' Self-Concept, Behavior and Citizenship Status in Montserrat, BWI
Donnette Bagot-Allen, Suzy Harney and Joane W. McKay ...... 299

Undergraduates' Selection Towards Islamic Banking: How Does Gender Affect Their Selection
Norzamri bin Ishak, Mohd Rizuan Abd Kadir, Khairul Nizam Surbaini and Juliana Anis Bte. Ramli ...... 311

The Correlation between Conflict and Job Satisfaction Within Nurse Units
Tina Y. Cardenas ...... 320

Pre and Post Writing Test Assessment: Determining Rater Reliability
Betsy B. Alderman, Stephynie C. Perkins, S. Camille Broadway and Lucas Milrod ...... 331

The Effect of Distance Education Lecture Format on Student Application
Wendy Cowan, Yvette Bolen, Prentice Chandler, Bruce Thomas, Kathy Buck and Lisa Hyde ...... 341

Improving the Delivery of Online Business Courses: A Continuous Improvement Process
Robert W. Robertson ...... 346

The AristoLeslian Model for Ethical Decision Making: Proposing a Model for Teaching Ethical Decision Making in Communication
Kingsley O. Harbor ...... 347

The Effect of Summer Language Intervention Program on Vocabulary Development of ESL Third Grade Students in an Urban Mississippi School District
Vickie Latham, Jianjun Yin, Vivian Taylor and Linda Channell ...... 348

The First Year of College: Beginning the Transition from Adolescent to Adult
Brian A. Griffith ...... 349

A Comparison of Two Types of Social Support for Mothers of Mentally Ill Children
Kathleen Scharer, Eileen J. Colon, Linda Moneyham, Jim Hussey, Abbas Tavakoli and Margaret Shugart ...... 350

"Discipline and Confinement: Crime and Punishment in Colonial Sierra Leone"
Ibrahim Kargbo ...... 351

Burnout among Female Club Volleyball Players
Jeff Eyanson, Malia S. Lawrence and Joseph K. Mintah ...... 352

"Soda Consumption in Overweight and At-Risk Elementary Children"
Yvette Bolen, Bruce Thomas, Ben Heatherly and James Reid ...... 353

INTELLECTBASE INTERNATIONAL CONSORTIUM Intellectual Perspectives & Multi-Disciplinary Foundations

[BESTMAPS diagram: Business, Education, Science, Social, Technology, Political, Management and Administration arranged around Multi-Disciplinary Foundations & Perspectives]

A Commitment to Academic Excellence.

www.intellectbase.org

SECTION 1: BUSINESS & MANAGEMENT


WHAT IS THE ASIAN-AMERICAN CONSUMER BEHAVIOR TOWARDS GREEN MARKETING?

J.D. Williams and Lois Lanaria Kutztown University, USA

ABSTRACT

The Asian-American consumer group is thought to be the fastest growing market in the United States. Asian-Americans are considered to be well-educated, generally affluent, and geographically concentrated. However, significant cultural and language differences among Asian subgroups are often overlooked as being problematic in conducting domestic marketing studies. As such, the Asian communities have long been ignored for their individuality and ethnic diversity, particularly when it comes to domestic marketing segmentation, much less as a viable focus of green marketing attention.

PURPOSE

This study presents a first step towards comparative consumer-behavior examinations of green marketing across three different Asian-American groups: Chinese, Filipinos, and Indians. The study develops implications for American industries emerging with green marketing strategies. The Asian-American (Asians residing in the United States) consumer group is thought to be the fastest growing market in the United States. Asian-Americans in general meet, if not exceed, all marketing segmentation target characteristics, including being geographically concentrated. However, significant cultural and language differences among Asian subgroups have often been overlooked. The paper aims to provide an understanding of the Asian-American consumer and their respective behavior towards environmental consciousness and existing green marketing programs/products.

Hypotheses have been developed to address the relationship between Asian-American socio-demographic characteristics and their knowledge and behavior towards green marketing and green products. These hypotheses were tested on a subset of Chinese, Filipino, and Indian consumers residing in the United States, and conclusions were drawn on the utility of socio-demographic variables for profiling green consumers.
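As an illustration of how one such hypothesis could be examined, the minimal Python sketch below (not the authors' actual procedure; all counts are hypothetical placeholders) applies a chi-square test of independence between subgroup membership and reported green-product purchasing:

from scipy.stats import chi2_contingency

# Rows: Chinese, Filipino, Indian respondents (hypothetical counts).
# Columns: bought a green product recently -- yes / no.
observed = [
    [34, 26],  # Chinese
    [29, 31],  # Filipino
    [41, 19],  # Indian
]

# chi2_contingency compares observed counts with those expected under independence.
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")

A p-value below the chosen significance level (e.g., 0.05) would indicate that subgroup membership and green purchasing are not independent, supporting the utility of that socio-demographic variable for profiling green consumers.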

INTRODUCTION

Effective green marketing has necessitated the application of superior marketing elements to craft green products as desirable goods for consumers. For the most part, green marketers have conducted very limited marketing efforts with equally inadequate results, experiencing market growth, but at a snail's pace. What, however, of green marketing's future? The slow start may stem from green marketing having historically been a misunderstood concept. Many business scholars viewed 'greening' as a 'fringe' topic, given that environmentalism and conservation have not yet meshed well with marketing's traditional axioms of 'give customers what they want' and 'sell as much as you can'. Marketing has lagged behind rising energy prices, mega-growth rates in pollution, increased resource consumption in Asia, and political pressure to address climate change, which have been driving global innovation toward healthier, more efficient, high-performance products and services. As such, a vast number of marketers have lifted the banner of green marketing (Ottman, Stafford, & Hartman, 2006). Yet the road towards success for green marketing has been riddled with institutional blunders and consumer distrust.

Green marketing came into prominence in the late 1980s and early 1990s, though it was first discussed much earlier. The American Marketing Association (AMA) held the first workshop on "Ecological Marketing" in 1975. The proceedings of this workshop resulted in one of the first books on green marketing, entitled "Ecological Marketing." Since that time a number of other books on the topic have been published (Polonsky, An Introduction to Green Marketing, 1994).

The AMA workshop attempted to bring together academics, practitioners, and public policy makers to examine marketing's impact on the natural environment. At this workshop, ecological marketing was defined as the study of the positive and negative aspects of marketing activities on pollution, energy depletion, and non-energy resource depletion (Polonsky, An Introduction to Green Marketing, 1994).



Green marketing is the byproduct of this new vision. It has grown from a global awareness that environmental management accountability is at present a significant component of the overall scheme of well-planned operations and marketing. The main theme of green management is that all environmental efforts must create some impact on the corporate balance sheet. Institutional questions about these government-supported efforts are strategically essential:

How much do these efforts cost? Are they saving the company money? What is the payback period before the company sees a return on investment?

If all these actions do not improve the bottom line, then they will be short-lived in the global corporate arena. New international standards, political forces, and concerned consumers will demand accountability from companies in their green marketing and marketing efforts. A growing body of research is proving that environmental improvements translate into profits (Wasik, 1996).

One of the most powerful shifts in institutional thinking is that of a growing relationship between ecology and economics. Those institutions that understand this relationship are not only reducing their operational costs, they are also improving productivity while increasing their profit. The new millennium of economics and ecology focuses on some often-ignored bastions of corporate operations. While it has been no secret that corporations can increase their bottom lines by cutting the amount of raw materials and energy consumption, many of these companies are now seeing cost-cutting effectiveness from an ecological perspective.

Environmental concern has increased substantially in Western countries over recent decades. This environmental concern correlates well with consumers' stated intentions to purchase environmentally friendly products. However, these intentions do not translate directly into changed consumer behavior by way of adoption of green products. Some psychographic-based research has identified consumers' perceived effectiveness of their actions as a significant determinant of environmentally friendly consumer behaviors (Yusuf & Brooks, 2004).

LITERATURE REVIEW

Green Marketing or Environmental Marketing

Green marketing consists of all activities designed to generate and facilitate any exchanges intended to satisfy human needs or wants, such that the satisfaction of these needs and wants occurs with minimal detrimental impact on the natural environment (Polonsky, 1995); (Hooley, Saunders, & Piercey, 2008); (Costa & Bamossy, 1995); (Pride & Ferrell, 2007); (Kotler & Keller, 2008); and (Kerin, Hartley, & Rudelius, 2008). Similar terms used for green marketing are environmental or ecological marketing. This definition incorporates much of the traditional marketing definition, that is, "All activities designed to generate and facilitate any exchanges intended to satisfy human needs or wants" (Polonsky, An Introduction to Green Marketing, 1994). It therefore ensures that the interests of the organization and all its consumers are protected, as voluntary exchange will not take place unless both the buyer and the seller mutually benefit (Armstrong & Kotler, 2008); (Ferrell & Hartline, 2007); and (Solomon, Marshall, & Stuart, 2006).

Terms like phosphate free, recyclable, refillable, ozone friendly and environmentally friendly are some of the things consumers most often associate with green marketing (Peter & Donnelly, 2008); (Horn, 2006); (Christie Matheson); and (Kotler & Lee, 2007). While these terms are green marketing claims, green marketing is a much broader concept, one that can be applied to consumer goods, industrial goods and even services. An example from services is that resorts are beginning to promote themselves as "eco-tourist" facilities, where their facilities specialize in experiencing nature or operating in a fashion that minimizes their environmental impact (Kotler & Lee, 2007); (Kotabe & Helsen, 2007); and (Lamb, Hair, & McDaniel, 2007). Thus green marketing incorporates a broad range of activities, including product modification, changes to the production process, packaging changes, as well as modified advertising (Polonsky, 1994).

Green Products and Marketing

Green or environmental marketing consists of all activities designed to generate and facilitate any exchanges intended to satisfy human needs or wants. The satisfaction of these needs and wants occurs with minimal detrimental impact on the natural environment (Polonsky, 1994); (Armstrong & Kotler, 2008); (Martin & Simintiras, 1995); and (Lamb, Hair, & McDaniel, 2007).

Green product and environmental product are terms commonly used to describe products that strive to protect or enhance the natural environment by conserving energy and/or resources and by reducing or eliminating the use of toxic agents, pollutants and wastes. Green marketing must satisfy two objectives: improved environmental quality and customer satisfaction (Hooley, Saunders, & Piercey, 2008) and (Ottman, Stafford, & Hartman, 2006).

A few green products have become so common and widely distributed that many consumers may no longer recognize them as green products, because they buy them for non-green reasons. For instance, widely available at supermarkets and discount retailers are products ranging from energy-saving Tide Coldwater laundry detergent to non-toxic Method and Green cleaning products. The use of recycled or biodegradable paper products such as plates, towels, napkins, coffee filters and other goods is also widespread. The organic food market segment has also increased 20 percent annually since 1990, five times faster than the conventional food market, spurring the growth of specialty retailers such as Whole Foods Market, and of Wal-Mart as well (Ottman, Stafford, & Hartman, 2006).

The marketing of successfully established green products showcases non-green consumer value, and there are at least five desirable benefits commonly associated with green products: efficiency and cost effectiveness; health and safety; performance; symbolism and status; and convenience (Ottman, Stafford, & Hartman, 2006); (Costa & Bamossy, 1995); (Pride & Ferrell, 2007); (Kotler & Keller, 2008); and (Kerin, Hartley, & Rudelius, 2008).

Marketing Demographic Segmentation Theory to Application

A number of past studies have attempted to identify demographic variables that correlate with environmentally conscious attitudes and/or consumption patterns. Such demographic variables offer easy and efficient ways for marketers to initiate market segmentation, capitalizing on the current trends towards 'going green' attitudes and 'buying green' behaviors (Roberts & Straughan, 1999).

Age

In early studies of ecology, the environment and green marketing, age was explored by a number of researchers. Some research found that younger individuals are likely to be more sensitive to environmental issues, the reasoning being that those who have grown up in a period in which environmental concerns were salient are more likely to be sensitive to them (Roberts & Straughan, 1999). This specific demographic aspect has been researched within this paper regarding the age factoring of the Asian research groups.

On the other hand, a good deal of research has determined that older people tend to be more serious about green behaviors, in some cases more than younger consumers. The 'matures' and 'echo boomers' have been more likely than younger consumers to have bought energy-efficient appliances (52 percent vs. 32 percent), purchased more locally grown food (35 percent vs. 23 percent), and stopped using bottled water (27 percent vs. 23 percent) (Dolliver, 2008). However, some research findings have been somewhat equivocal: some researchers who explored age as a correlate of green attitudes and behavior reached the intriguing finding of non-significant relationships (Roberts & Straughan, 1999).

Gender

Sex has been an actively researched demographic variable. The development of distinct sex roles, skills, and attitudes has led most researchers to argue that women are more likely than men to hold attitudes consistent with the green movement. Theoretical justification for this comes from Eagly (1987), who contends that women reflect more social development and sex-role differences, which have enabled them to more carefully consider the impact of their actions on others (Roberts & Straughan, 1999). Considering the social impact of green marketing, one may deduce the broader implications of women leading the way towards cleaner, healthier, safer products.

Ethnicity

Ethnicity is another demographic used for profiling in marketing. The ethnic consumption perspective takes into account the effect of culture on consumer behavior. Culture is defined as the configuration of learned behavior and results of behavior whose components are shared and transmitted by the members of a particular society (Linton, 1945). In another definition, by Kroeber and Parsons (1958), culture is the transmitted and created content and patterns of values, ideas, and other symbolic-meaningful systems that act as factors in the shaping of human behavior and of the artifacts produced through behavior. Together these definitions stress two important aspects of culture: (1) culture is shared by the members of a given society, and (2) culture is, by its very nature, dynamic and transmissible.



Education and Income

The hypothesized relationship with educational attainment has been fairly consistent across research studies. Specifically, education has been expected to show positive correlations with environmental concerns and behavior. Although the results of studies examining education and environmental issues have been quite consistent, a statistically definitive relationship between the two variables has not yet been established; behavioral deviations have been rampant. Interesting differences have occurred in two divergent studies: Samdahl and Robertson (1989) found the opposite correlation, that education was negatively correlated with environmental attitudes, while Kinnear et al. (1974) found no significant relationship (Roberts & Straughan, 1999).

Income has generally been thought to be positively correlated with environmental attitudinal sensitivity. The most common justification for this conviction is that individuals at higher income levels can bear the marginal increases in costs associated with supporting green causes and favoring green product offerings. Numerous studies have addressed the role of income as a predictor of green consumer behavior and attitude. A study conducted by Newell and Green (1997) had an interesting hypothesis involving income: they postulated that income and education help to moderate the effect that ethnic origin and nationality play in shaping views towards environmental concerns. Their study results showed that differences between the perceptions of black and white consumers with respect to environmental issues decrease as both income and education go up. Other studies have shown a non-significant direct effect of income on environmental awareness, while still others have found positive or negative relationships between income and environmental concerns and behaviors (Roberts & Straughan, 1999) and (A., C., 2000).

GREEN MARKET CONSUMER BEHAVIOR RESEARCH

Economics begat marketing, which begat consumer behavior research, which begat the new consumer behavior research, which has most recently begat green marketing consumers. Do you get the begat (smile)?

The development of an academic discipline of consumer behavior within the marketing departments of colleges of commerce and business began in the 1950s. Earlier in the 20th century, North American and European advertising and marketing research firms began to study consumers for the purpose of marketing consumer goods more effectively. The academic marketing departments that developed during that era garnered a great deal of applied and behavioral focus, from which emerged the latest research thrust, green marketing (Belk, 1995).

The founding of the Association for Consumer Research in 1969 and the establishment of the Journal of Consumer Research in 1974 proved to be ideal avenues to explore consumer behavior in both esoteric and practical application modes. Green market research, although a rather new focus of attention for many marketing schools and institutions, has nonetheless already touched the minds and economic hearts of President Obama, a multitude of state governors and literally hundreds of entrepreneurs (Belk, 1995) and (Bohlen, Diamantopoulos, & Schlegelmilch, 1996).

SYMBOLISM AND STATUS

According to many automobile analysts, the Prius, Toyota's gas-electric hybrid, has become the epitome of 'green chic'. The cool-kid cachet that comes with being an early adopter of the quirky-looking hybrid vehicle trend continues to partly motivate sales. To appeal to young people, conservation and green consumption need the unsolicited endorsement of high-profile celebrities and a connection to cool technology.

Another business example, where office furniture symbolizes the cachet of corporate image and status, is the ergonomically designed "Think" chair. The chair is marketed as the chair "with a brain and conscience" and embodies the latest in "cradle to cradle" (C2C) design and manufacturing. C2C, describing a product that can ultimately be returned as technical or biological nutrients, encourages industrial designers to create products free of harmful agents and processes, so that they can be recycled easily into new products, such as metals and plastics, or safely returned to the earth, such as plant-based materials.

The 'Think' chair is 99 percent recyclable; it disassembles with basic hand tools in about five minutes, and parts are stamped with icons showing recycling options. The concept 'Think' chair has been positioned as symbolizing the smart, socially responsible office. In sum, therefore, green products have been, and will likely continue to be, positioned as status symbols, in some cases commanding higher competitive prices (Ottman, Stafford, & Hartman, 2006).



SOCIAL RESPONSIBILITIES

Society has been thankful that a growing number of institutions have begun to believe they have a moral obligation to be more socially responsible (Polonsky, 1994). Many firms have begun to realize that they are members of the wider community and therefore must behave in an environmentally responsible manner. This has translated into an emergence of firms that have formulated strategic efforts to achieve environmental goals as well as profit-related objectives. Governmental regulations relating to environmental marketing have been designed to protect consumers in several ways:

 Reduce production of harmful goods or by-products,
 Reduce the import of harmful or illegal products,
 Modify consumer and industry use and/or consumption of harmful goods, or
 Ensure that all types of consumers have the ability to evaluate the environmental composition of goods (Polonsky, 1994).

The expanded aspects of this corporate effort have resulted in firms integrating environmental issues within their corporate culture. Firms in this situation can take on one of three action plans: 1) they can use the fact that they are environmentally responsible as a marketing tool, 2) they can become responsible without promoting this fact, or 3) they can creatively design an option that incorporates certain components of both 1) and 2). An example of an organization that does not promote its environmental initiatives would be Coca-Cola. This firm has invested large sums of money in various recycling activities, as well as modifying its packaging to minimize its environmental impact. While being concerned about the environment, Coke has not used this concern as a marketing tool. Thus many consumers may not realize that the Coca-Cola Corporation has been a very environmentally committed organization (Polonsky, 1994) and (Kama, Hansen, & Heikki, 2001).

VALUE OF GREEN CONSUMERS

Green consumers make decisions based on the 'earth friendliness' of a product. Back in 1990, according to a Gallup survey, nine out of ten respondents said they were willing to make a special effort to buy products from companies trying to protect the environment. Those polled who said that they would buy such products also said that they would be willing to pay more for green products and even give up some convenience to have them in their household (Finisterra do Paço & Raposo, 2008) and (Minton & Rose, 1997). This and other studies have clearly suggested that there exists substantial marketing worth in incorporating 'green' into firms' production, operations and marketing programs.

Consider this: has the typical 21st-century American consumer reached the point of having little difficulty linking his or her environmental thinking with how to shop for goods and services? With the numerous messages, labeling programs, claims, and warnings, it would still be quite difficult to evaluate the totality of information. And then, when one considers that the number of products making green claims has mushroomed over the years [Table 1], it would still seem a most difficult undertaking (Roberts, 1990/1996) and (Wasik, 1996).

Table 1: Green Product Introductions (as a percentage of total product introductions)

Year    Total   Foods   Beverages   Household   Pets
1986    1.10    1.40    2.30        2.70        0.90
1987    2.00    2.10    2.30        7.40        3.90
1988    2.80    3.40    5.40        9.40        1.40
1989    4.50    4.90    10.10       15.70       1.60
1990    11.40   9.20    11.40       25.90       11.60
1991    13.40   9.30    13.40       32.90       22.90
1992    11.50   8.70    8.90        30.50       18.20
1993    13.10   10.40   15.10       29.60       15.30

Source: Productscan, copyright 1994, Marketing Intelligence Service, Ltd., Naples, NY.

It would seem that green product overload may have also reduced the effectiveness of green marketing campaigns. A mid-1990s study of 300 green advertisements showed that many green ads failed to make the consumer connection between what the company was doing for the environment and how it desired to affect its consumer groups. The green backlash, therefore, may be the failure of firms, domestic and international, to link green issues with their products and operations, as well as the inability to communicate this relationship. To some extent, the inflated price of green goods may have also generated consumer apprehension, if not more absolute barriers to purchasing these products (Wasik, 1996).

Challenged marketers' gloom may be partly due to consumers' own equivocal commitment towards the environment. Probably to no one's shock, previous surveys have shown consumers expressing strong support for environmental protection. When it comes to specific green behavior, however, polling trends tend to confirm that mainstream consumers have learned to talk the talk but are still taking baby steps on environmental issues. A recent survey by the TNS Group determined that just 26 percent of Americans say they actively seek environmentally friendly products.

Green Consumers as a Growth Market

The United States is one of the largest markets for green products. More than 37 million consumers consider themselves 'true-blue greens' and 20 million say they are 'socially conscious' (Wasik, 1996). With the global economy facing increasingly difficult policy questions regarding pollution, ozone depletion, global warming, and acid rain, environmental issues are expected to occupy not only President Obama's prime directive but the world's center stage for decades to come (Wasik, 1996).

Drawing from past research and analysis of the strategic marketing appeals of green products that have either succeeded or failed in the marketplace over the past decade, some important lessons have emerged for crafting effective green marketing programs and product strategies. Successful green product campaigns have been able to appeal to mainstream consumers or lucrative market niches while frequently commanding price premiums (Ottman, Stafford, & Hartman, 2006) and (Stoneman, Turner & Wang, 2005).

Green Market Information and Availability

Many of the successful green products employ compelling, educational marketing messages and slogans that connect green product attributes with desired consumer value. Successful marketing programs effectively calibrate consumer knowledge to recognize the green product's consumer benefits. In some instances, the environmental benefits of the green products were positioned as secondary, if mentioned at all. Some compelling marketing communications educate consumers to recognize green products as "solutions" for their personal needs and the environment. In practice, the analysis from these studies suggests that advertising that draws attention to how an environmental product benefit delivers desired personal value may broaden consumer acceptance of green products (Ottman, Stafford, & Hartman, 2006). Table 2 illustrates examples of successful marketing messages that have educated consumers towards deriving inherent value from green marketing.

Table 2: Marketing Messages Connecting Green Products with Desired Consumer Value

Efficiency and cost effectiveness:
"The only thing our washer will shrink is your water bill." – ASKO
"Did you know that between 80 and 85 percent of the energy used to wash clothes comes from heating the water? Tide Coldwater – The Coolest Way to Clean." – Tide Coldwater Laundry Detergent
"mpg" – Toyota Prius

Health and safety:
"20 years of refusing to farm with toxic pesticides. Stubborn, perhaps. Healthy, most definitely." – Earthbound Farm Organic
"Safer for You and the Environment." – Seventh Generation Household Cleaners

Performance:
"Environmentally friendly stain removal. It's as simple as H2O." – Mohawk EverSet Fibers Carpet
"Fueled by light so it runs forever. It's unstoppable. Just like the people who wear it." – Citizen Eco-Drive Sports Watch

Symbolism:
"Think is the chair with a brain and a conscience." – Steelcase's Think Chair
"Make up your mind, not just your face." – The Body Shop

Convenience:
"Long life for hard-to-reach places." – General Electric's CFL Flood Lights

Bundling:
"Performance and luxury fueled by innovative technology." – Lexus RX400h Hybrid Sports Vehicle

Source: Compiled by J.A. Ottman, E.R. Stafford, and C.L. Hartman, 2006



BARRIERS TO 'GREENING'

Bonini and Oppenheim surveyed 7,751 consumers around the world and uncovered five barriers to buying green at every stage of the purchase:

1. Lack of awareness,
2. Negative perceptions,
3. Distrust,
4. High prices, and
5. Low availability (Bonini & Oppenheim, 2008).

Lack of awareness of the advantages and disadvantages of buying green products can cause confusion and frustration for consumers. On one side, although many consumers want to support environmental issues, they may not quite understand how to act on their 'green' impulses. On the other side, many companies have labeled green products with meaningless and/or bewildering data for consumers to absorb. For example, a carbon-footprint labeling program has indicated how much carbon dioxide was emitted in the product's production process, packaging, and shipment. Calculating carbon footprints requires some very fancy math, and the calculated results have only a limited audience, certainly not the average consumer. So when a bag of chips has been labeled as containing 75 grams of carbon, do you know what that means (Bonini, 2008)?

The second barrier hampering 'greening' has been negative perception. Some green products, such as hybrid automobiles, have become status symbols, but many green products still suffer from misdirected or misinformed image problems. According to a 2007 GfK Roper Green Gauge Study (reprinted in the Stanford Social Innovation Review, Fall 2008) of more than 2,000 Americans, fully 61 percent believed that green products performed worse than conventional products. For example, early versions of CFL light bulbs were slow to light up, gave off a weak light once illuminated, and in some cases did not fit properly into normal light fixtures. Such negative images can cause consumers to shy away from green products. Clearly, companies that market green products have to rebuild the value perceptions for these products and prove that they can perform as well as conventional products (Bonini, 2008).

The third barrier for consumers has been distrust. Consumers doubt not only the quality of green products but also their very greenness. According to a GfK Roper survey, although consumers believe the environmental claims made by scientists and environmental groups, these same consumers tend not to believe the claims of media, government, and businesses. A 2007 study by TerraChoice Environmental Marketing Inc. (reprinted in the Stanford Social Innovation Review, Fall 2008) examined 1,753 environmental product claims and found that all but one were misleading or just plain false.

The age-old bugaboo, high price, has logged in as the fourth barrier to going green. Indeed, price has been the largest barrier to buying green products according to a 2007 survey by the U.K. Department for Environment, Food, and Rural Affairs (reprinted in the Stanford Social Innovation Review, Fall 2008). Close to half of the survey participants desired a two-year return on the premium prices they paid for a green product. The situation was exacerbated even more when one considers that nearly 70 percent of green appliances, including energy-efficient televisions, washers, and dryers, took longer than that to recoup their purchaser's money (Bonini, 2008). In terms of the cost-to-benefit analysis that many astute consumers conduct for most home capital goods, such as appliances, the long-term value of green products might end up on the wrong side of the justification curve.
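
To see why the two-year threshold matters, consider a toy payback calculation in Python; the premium and savings figures below are hypothetical illustrations, not numbers from the cited survey:

```python
# A toy payback calculation with hypothetical numbers (not survey figures),
# showing how a green premium can miss a two-year payback target.
premium = 120.0        # extra purchase cost of the green appliance, in dollars
annual_savings = 45.0  # yearly utility savings versus the conventional model

payback_years = premium / annual_savings
print(f"Payback period: {payback_years:.1f} years")  # 2.7 years > 2-year target
```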

The last of the five barriers is low availability. Many consumers who had decided to be green-oriented have simply not found green products sufficiently cost effective to incorporate into the weekly family budget. The same consumer mindset has found it even more challenging to justify purchasing more expensive capital items, such as automobiles, home heating/air-conditioning systems, or even high R-rated insulation systems. Most other products offer a wide range of price-to-performance options and can be purchased at many nearby shopping centers. Green products are not so readily available, which presents a comparative selection nightmare for consumers. For example, in a survey conducted in the Chicago and San Francisco areas, fewer than half of the 23 grocery stores surveyed offered green products besides organic foods and CFLs, and among the minority of these stores that offered eco-friendly non-food products, only about 10 percent stocked more than one product (Bonini, 2008).


To increase the sales of environmentally friendly products, companies should fully assess the conditions that spawned these barriers and determine effective marketing paths around as many of them as possible. In other words, companies must:

• Increase consumer awareness of green products,
• Improve consumers' perception of eco-product quality,
• Strengthen consumers' trust in both product(s) and company,
• Offer more price-to-value for these green products, and
• Increase green product availability through extended distribution channels (Bonini, 2008).

Because consumers are largely unaware of green alternatives, companies must expand their lesson plans beyond basic product fundamentals by educating consumers on larger environmental issues and the benefits of going 'green.' Where possible, nonprofit and government agencies should share in these educational responsibilities and take up the cause of green propagation.

Aside from educating consumers, companies must also build better products. Consumers still value product performance, reliability, and quality as much as or more than a product's ecological soundness. To resolve the distrust issue, companies must rebuild public trust by informing the public about their products' true environmental impacts. Telling consumers to act green while the company itself makes little effort to improve its own operations is one source of consumer distrust.

STUDY OBJECTIVE AND METHODOLOGY
The objective of this study was to examine Asian consumer profiles based on the demographic characteristics of ethnicity, age bracket, gender, regional location, and educational level in relation to the recognition and consumption of green marketing in the United States. Based on ethnicity, this study focused on three Asian nationalities, Chinese, Filipino, and Indian, within certain age brackets. The prime objective of this research was to determine selected Asian consumer behaviors and attitudes towards their respective participation in and support of environmentally friendly activities, specifically green product recognition and purchase.

Sampling Procedures and Data Collection
The population targeted for this study consisted of subjects of Asian ethnicity, specifically Chinese, Filipino, and (India) Indian, residing in the United States. The first sample set was derived from United States residents of the above three ethnic origins and obtained via online research query systems. The second sample set was derived from field surveys distributed by hand in Pennsylvania, California, North Carolina, Boston, and Arizona. A minimum of sixty questionnaires per ethnicity, with targets for three age subdivisions within each sample, was gathered.

Demographic Variables
Three demographic variables were investigated: age bracket, ethnicity, and educational level. Age was grouped into three brackets: 18-26, 27-35, and 36-50. The ethnicity independent variable focused on three Asian nationalities: Chinese, Filipino, and Indian.

Geographic Assessment
A series of stratified samplings was conducted based upon four U.S. regions (East Coast, West, Midwest, and South). Surveys were administered both by hand and via an Internet Web survey.

Measures
A structured questionnaire was designed to gather the data required for the study. The questionnaire was divided into two parts: (1) identity branching questions and (2) statements designed to measure (a) the respondent's awareness of green marketing, (b) consumer attitudes and behaviors towards green products, and (c) willingness to pay more for green products over non-green products.

Survey Instrument
Both short-frame answers and seven-point Likert-scale items were designed into the survey.
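
As an illustration of how such seven-point Likert responses might be coded for analysis, consider the following minimal sketch; the column names and rows are stand-ins, not the study's actual data or coding scheme:

```python
# A minimal sketch of coding seven-point Likert responses for analysis;
# the column names and rows are stand-ins, not the study's actual data.
import pandas as pd

LIKERT = {
    "Strongly Disagree": 1, "Disagree": 2, "Somewhat Disagree": 3,
    "Neither Agree Nor Disagree": 4, "Somewhat Agree": 5,
    "Agree": 6, "Strongly Agree": 7,  # "Not Applicable" is left unmapped -> NaN
}

responses = pd.DataFrame({
    "ethnicity": ["Chinese", "Filipino", "Indian", "Filipino"],
    "Q6": ["Somewhat Agree", "Agree", "Strongly Agree", "Not Applicable"],
})
responses["Q6_score"] = responses["Q6"].map(LIKERT)  # numeric scores, NaN for N/A
print(responses.groupby("ethnicity")["Q6_score"].mean())
```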


SCOPE AND LIMITATIONS
This paper limited its study to three ethnic Asian consumer groups currently residing in the United States: Chinese, Filipino, and (India) Indian. The authors realize that there are many other Asian groups to be considered in future studies, as well as various Spanish-speaking groups and African Americans.

Although this study attempted to question the targeted groups throughout the U.S., the survey respondents were not a statistically balanced sub-group spread between the four U.S. regions. In addition, the total number of respondents was not a large enough statistical sampling.

In light of the ethnically driven research findings compiled in this paper, the environmental awareness and consciousness of today's U.S. population are relatively complex. For example, in profiling target respondents by geographic location, most respondents residing in different U.S. regions may show moderate to high awareness of the environmental issues around them, yet this does not necessarily correlate with their consumption behavior in buying green products. It follows that a robust, well-delineated profile of green consumers should be constructed in future work.

HYPOTHESES
The hypotheses guiding this study were related to three overriding objectives:

1. To examine the consumer profile based on the socio-demographic characteristics of age bracket, gender, regional location, and educational level;
2. To examine consumer behavior based on ethnicity, focused on three Asian-American populations: Chinese, Filipino, and Indian; and
3. To examine these green consumers' behavior in terms of their likelihood to participate in and support environmentally friendly activities and green products.

The hypotheses were categorized according to their different demographic characteristics. Each category included the selected Asian consumers' scaled measures of environmental knowledge and awareness, environmental attitude, and purchasing behavior:

Culture:
H1.1: Consumer behavior is strongly influenced by culture; Chinese, Filipino, and Indian consumers respond differently in terms of consumer behavior.
Hypothesis H1.1 will be tested using Survey Questions Q-6 and Q-7 (see Appendix for Survey).

Gender:
H2.1: Chinese males are more knowledgeable than Chinese females about environmental issues, specifically green marketing.
H2.2: Filipino males are more knowledgeable than Filipino females about environmental issues, specifically green marketing.
H2.3: (India) Indian males are more knowledgeable than Indian females about environmental issues, specifically green marketing.
Hypotheses H2.1 – H2.3 will be tested using Survey Question Q-8 (see Appendix for Survey).
H2.4: Chinese, Filipino, and Indian females are more likely than males of their respective cultures to participate in green marketing activities through purchasing green products.
H2.5: Filipino female consumers are more likely than Chinese and Indian female consumers to buy green products.
Hypotheses H2.4 – H2.5 will be tested using Survey Questions Q-14, Q-16, Q-19, Q-21 and Q-33 (see Appendix for Survey).

Age:
H3.1: Young Chinese (ages 18 to 26) are more concerned about and aware of environmental issues than the older Chinese population.
H3.2: Young Filipinos (ages 18 to 26) are more concerned about and aware of environmental issues than the older Filipino population.


H3.3: Young (India) Indians (ages 18 to 26) are more concerned about and aware of environmental issues than the older Indian population.
Hypotheses H3.1 – H3.3 will be tested using Survey Questions Q-8 and Q-9 (see Appendix for Survey).
H3.4: Chinese, Filipinos, and Indians aged 27 to 35 are more likely to actively support and buy green products.
H3.5: The older Chinese population, more than older Filipinos and Indians, tends to be more likely to actively support and buy green products.
Hypotheses H3.4 and H3.5 will be tested using Survey Questions Q-10, Q-17, Q-19, and Q-33 (see Appendix for Survey).

Education:
H4.1: Indians, having attained a higher educational level than the two other Asian groups, have a greater propensity to support green activities by buying green products.
Hypothesis H4.1 will be tested using Survey Question Q-19 (see Appendix for Survey).

Geographic Location:
H5.1: Chinese, Filipinos, and Indians situated in the West region are more aware of environmental issues than those in all other U.S. regions.
Hypothesis H5.1 will be tested using Survey Question Q-8 (see Appendix for Survey).
H5.2: Chinese, Filipino, and Indian populations situated in the West and Northeast regions are more likely to support and buy green products.
Hypothesis H5.2 will be tested using Survey Questions Q-14, Q-17 and Q-19 (see Appendix for Survey).

RESEARCH IMPLICATIONS, ANALYSIS AND FINDINGS
In light of the importance of the Asian segment in the U.S. market, Asian consumers represent prime marketing segments. Thus far, very little study of the underlying models, concepts, and views of Asian consumer segmentation has been conducted. Asian consumer behavior and perceptions towards green products represent a new arena for marketing research. The results of such investigation into these and other Asian cultures may lead toward further marketing penetration of these relatively unknown and untapped ethnic groups.

Data Set Segmentation
The research data was analyzed to determine the extent to which U.S.-residing Chinese, Filipino, and Indian consumers reflected distinct attitudes and perceptions towards environmentally conscious programs, green marketing activities, and green products. The tested sample consisted of 195 individuals; the Filipino respondents numbered the highest at 72 (36.92%). Most of the respondents were within the 18-26 age bracket, with a total of 87 (44.62%). Due to the localized bias of both researchers living on the East Coast, most of the respondents were from the Northeast, numbering 80 (41.03%). The detailed segmentation is presented below in Table 3:


Table 3: Demographic Characteristics of Respondents (Identity Branching)

                                 Response Percent   Response Count
Ethnicity
  Chinese                        31.28%             61
  Filipino                       36.92%             72
  Indian                         31.79%             62
Age Group
  18-26 years old                44.62%             87
  27-35 years old                27.69%             54
  36-50 years old                27.69%             54
Gender
  Male                           49.23%             96
  Female                         50.77%             99
Location
  Northeast                      41.03%             80
  West                           29.74%             58
  Midwest                        10.26%             20
  South                          18.97%             37
Educational Level
  High School/GED Diploma        15.38%             30
  2-4 year college degree        46.67%             91
  Master's Degree or above       37.95%             74

Statistical analyses were conducted on the data gathered to test the hypotheses presented in this research.

Analysis Set – Hypotheses 1
The test was conducted using Survey Questions 6 and 7. The objective was to gather the target respondents' perceptions and behaviors towards purchasing green products. The data, categorized by ethnicity, illustrated that Chinese respondents, with an average rating of 4.16, were the least likely to believe that their buyer behavior was influenced by their culture, while the high mean values for Filipinos (5.35) and Indians (5.38) indicate they were the most likely to believe that their respective cultures influenced their buyer behavior. The P-values for Q6 and Q7 are 1.82E-10 and 0.012, respectively. Both P-values are less than the 0.05 significance level, so we conclude that ethnicity affected consumer and purchasing behaviors. Also, the F values for Q6 and Q7 are 25.26 and 4.50, respectively, both greater than the critical F value of 3.043; thus the group means were not all equal. A two-factor ANOVA with replication was also run on the influence of culture and ethnicity on consumer (Q6) and purchasing (Q7) behavior across the three ethnic groups, with an alpha of 0.05. The results showed that the P-value for Q6/Q7 is less than alpha (8.92E-05 < 0.05), so the means for the two questions are not the same. As for the P-value across the different ethnicities, 1.34E-09 is less than the alpha of 0.05, so the means are not all the same across groups. The P-value for the interaction of Q6/Q7 and ethnicity is greater than the alpha (0.196 > 0.05); there is no significant interaction, meaning culture's influence on consumer and purchasing behavior holds consistently across the Chinese, Filipino, and Indian sample populations. As Tables 4.1 and 4.2 point out, there is ample evidence to support hypothesis H1.1; therefore H1.1 would be accepted.

Table 4.1: Respondents' ethnicity and their belief in its influence on their consumer behavior and purchasing behavior

Ethnicity   Mean for Q6   SD for Q6   Mean for Q7   SD for Q7   Composite Mean
Chinese     4.28          1.70        4.03          1.62        4.16
Filipino    5.85          1.08        4.86          1.56        5.35
Indian      5.84          1.47        4.92          2.31        5.38
Total       5.35          1.59        4.62          1.88        4.99

(SD = standard deviation; kurtosis: Q-6 = 0.943 and Q-7 = -0.840, reflecting low, inverted peak levels)


Table 4.2: Two-factor ANOVA with replication results on the respondents' ethnicity and their belief in its influence on their consumer behavior and purchasing behavior

Source of Variation        SS         df    MS        F        P-value    F critical
Q6/Q7                      44.068     1     44.068    15.707   8.92E-05   3.867
Chinese/Filipino/Indian    121.383    2     60.691    21.632   1.34E-09   3.021
Interaction                9.186      2     4.593     1.637    0.196      3.021
Within                     1010.033   360   2.806
Total                      1184.669   365
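
For readers who wish to reproduce this style of analysis, the following is a minimal sketch of a two-factor ANOVA with replication in Python's statsmodels package; it is our illustration, not the authors' code, and the column names and Likert ratings are hypothetical stand-ins, not the study's data set:

```python
# A minimal sketch (not the authors' code) of a two-factor ANOVA with
# replication, mirroring the rows of Table 4.2; the Likert ratings below
# are hypothetical stand-ins.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "ethnicity": ["Chinese"] * 4 + ["Filipino"] * 4 + ["Indian"] * 4,
    "question":  ["Q6", "Q6", "Q7", "Q7"] * 3,
    "rating":    [4, 5, 4, 4, 6, 6, 5, 5, 6, 6, 5, 5],
})

# Main effects for question and ethnicity plus their interaction, as in Table 4.2.
model = ols("rating ~ C(question) * C(ethnicity)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```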

* Analysis Set – Hypotheses 2
The second demographic variable examined was a gender profile amongst the three Asian groups. The development of unique sex roles, skills, and attitudes led this research to hypothesize that different levels of awareness exist between males and females concerning environmental issues, specifically green marketing. Hypotheses H2.1 to H2.5 focused on these gender variations.

Hypotheses H2.1, H2.2, and H2.3 were rejected. Hypothesis H2.4 was rejected. Hypothesis H2.5 was rejected.

* Analysis Set – Hypotheses 3
Referring back to a previously mentioned green marketing study conducted on general, non-ethnicity-specific U.S. sample populations (Roberts & Straughan, 1999), the age demographic was examined to determine its relevance. The general belief has been that younger adults (resembling this study's 18-to-26 age grouping) are likely to be more sensitive to environmental issues.

Hypotheses H3.1 through H3.3 were rejected. Hypotheses H3.4 and H3.5 were accepted.

Analysis Set – Hypotheses 4
The fourth set of hypotheses states that there is a positive correlation between higher education (e.g., a master's degree) and knowledge of, and concern about, environmental issues. The assumption that follows is that such correlated evidence would show these groups to be more likely to participate in green activities as well as to support green products. The results in Table 11 indicate that the master's-degreed respondents' statistical means were positively correlated across all three ethnicities. Chinese and Indian respondents with master's degrees or above had the same higher mean score of 6.88. Overall, consumers' education level does seem to correlate with their respective attitudes towards green products. Based on the research findings consolidated in Table 11, hypotheses H4.1–H4.3 were accepted.

Table 11: Respondents' Ethnicity & Educational Level – Knowledge of Environmental Issues & Support for Green Products

Educational Level          Chinese   Filipino   Indian
High School or GED         6.19      None       6.25
2-4 year College Degree    5.81      6.32       6.60
Master's or above          6.88      6.48       6.88

Table 12 shows that Indian respondents with higher educational levels reflected a greater propensity to support green activities by buying green products. Indians with a master's degree or above had a mean score of 6.36, substantially higher than that of the other two ethnic test groups. Therefore, hypothesis H4.4 was accepted.


Table 12: Indian Respondents' Educational Level – Propensity to Support Green Products

Indians                    Average of Q19   SD of Q19
High school or GED         5.00             0.00
2-4 Yr. College Degree     5.00             1.56
Master's & above           6.36             1.16
Total                      5.61             1.41
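
A Table 11/12-style cross-tabulation of mean scores could be produced from coded survey data along the following lines; this is a sketch only, and the data frame contents and column names are assumptions:

```python
# A sketch of producing a Table 11-style grid of mean scores by education and
# ethnicity from coded survey data; all names and values here are stand-ins.
import pandas as pd

df = pd.DataFrame({
    "ethnicity": ["Chinese", "Chinese", "Filipino", "Indian", "Indian", "Indian"],
    "education": ["Masters+", "2-4 Yr College", "Masters+",
                  "HS/GED", "2-4 Yr College", "Masters+"],
    "Q19_score": [7, 5, 6, 5, 5, 6],  # hypothetical 7-point responses
})

# Mean of the propensity item by education level and ethnicity; cells with no
# respondents come out as NaN (compare the "None" cell in Table 11).
table = df.pivot_table(values="Q19_score", index="education",
                       columns="ethnicity", aggfunc="mean")
print(table.round(2))
```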

* Analysis Set – Hypotheses 5
Geographic location has been another variable of interest for green marketers. Previous studies have considered the correlation between place of residence and environmental concern; some of these have been identified within this paper.

Hypotheses H5.1 and H5.2 were accepted.

* Details and calculations not available due to manuscript page limitations. Please contact the primary author for a copy of the complete manuscript ([email protected]).

DISCUSSION AND CONCLUSION
This research has shown quite unequivocally that Asian consumers definitely have opinions concerning the environment and green marketing activities. Although some of the hypotheses were rejected, there were measurable responses for all hypotheses; the Asian respondents were willing to participate in this marketing study. Businesses in America must find ways to take advantage of these and other Asian interests.

Previous research has predominantly been conducted without consideration of the person's ethnicity. This study's findings suggest that most Chinese, Filipino, and Indian persons consider their culture to influence their consumer behavior. As such, product managers and marketers might consider including specific ethnic materials in their demographic profiling and segmentation of green consumers. Understanding Asian consumption patterns and the purchase decision process may assist marketers in penetrating these heavily populated ethnic groups.

Green products and services are only a niche market today but are poised for strong growth in the years to come (Bonini, 2008). Companies should not only focus on the environmental investment of their green products towards American multi-ethnic consumers but also expend serious effort on cost-benefit and consumer value perceptions. When Asian and non-Asian consumers find it easy to track their savings from using a product, it follows that they are more willing to re-buy and even try new green products. Of course, these companies would also need to focus their energies on increasing green product availability. How can companies claim that products are safer, healthier, and/or more cost effective for consumers if the consumers are unable to find those products? This would be particularly true for most Asians, as they tend to congregate in large metropolitan areas.

Effective green marketing requires applying appropriate marketing fundamentals to generate desirable green product outcomes. As national and international populations become more concerned about their environment and begin to more actively acknowledge their individual responsibility to care for it, it follows that purchase behavior would likely change, and for the better. The research in this document has revealed that three traditionally untapped ethnic consumer groups in America (Chinese, Filipino, and Indian) have modified their purchasing behavior due to environmental concerns. As demands change, organizations should respond to these changes as a potentially growing marketing niche opportunity.

Given the increasing media coverage of and political attention towards 'green' issues in the U.S., it would seem that environmental awareness and concern have become a socially accepted norm and, in some households, a daily conversation. With the current ecological conditions facing the U.S. and the world, heightened levels of environmental consciousness are now being funded by the federal government in terms of energy and preemptive health programs. This set of surveys revealed a strong commitment from the Asian-American population towards their environmental responsibility. But because of the current economic conditions the U.S. is facing, the study results indicated only average mean scores for these consumers regarding their willingness to spend more on green products and services. Though these Asian consumers may not buy


large quantities of green products due to high prices and low availability, their green product buying commitment may nonetheless be sustained by their belief that greening is a worthwhile cause.

WORKS CITED
Crane, A. (2000). Facing the Backlash: Green Marketing and Strategic Reorientation in the 1990s. Journal of Strategic Marketing. 8 (3). Pg. 277-296.
Anonymous. (2008, June 16). Why does Berkeley have so many Priuses? Green.view. Economist.com / Global Agenda. London: The Economist Newspaper Ltd.
Armstrong, Gary and Philip Kotler (2008). Marketing: An Introduction. Prentice Hall. Ed. 9.
Belk, R. (1995). Studies in the New Consumer Behaviour. In D. Miller, Acknowledging Consumption: A Review of New Studies. Routledge. Pg. 58-95.
Bergeron, J., Barbaro-Forleo, G., & Laroche, M. (2001). Targeting Consumers Who are Willing to Pay More for Environmentally Friendly Products. Journal of Consumer Marketing. 18 (6). Pg. 503-520.
Bloom, P., Hoeffler, S., Keller, K. L., & Basurto Meza, C. (2006). How Social-Cause Marketing Affects Consumer Perceptions. MIT Sloan Management Review. 47 (2). Pg. 40-55.
Bohlen, G. M., Diamantopoulos, A., & Schlegelmilch, B. B. (1996). The Link between Green Purchasing Decisions and Measures of Environmental Consciousness. European Journal of Marketing. 30 (5). Pg. 35-55.
Bonini, S., & Oppenheim, J. (2008). Cultivating the Green Consumer. Stanford Social Innovation Review. Pg. 57-61.
Covin, J. G., & Miles, M. P. (2000). Environmental Marketing: A Source of Reputational, Competitive, and Financial Advantage. Journal of Business Ethics. 23. Pg. 299-311.
Diamantopoulos, A., Schlegelmilch, B., Sinkovics, R. R., & Bohlen, G. M. (2003). Can Socio-demographics still Play a Role in Profiling Green Consumers? A Review of the Evidence and an Empirical Investigation. Journal of Business Research. United Kingdom: Elsevier. Pg. 465-480.
Dolliver, M. (2008, May 12). Deflating a Myth. Brandweek. Pg. 30-32.
Ferrell, O. C. & Michael Hartline (2007). Marketing Strategy. South-Western College Pub. Ed. 4.
Hooley, Graham, John Saunders, and Nigel F. Piercy (2008). Marketing Strategy and Competitive Positioning. Prentice Hall. Ed. 4.
Horn, Greg (2006). Living Green: A Practical Guide to Simple Sustainability. Freedom Publishing Company.
Grunert, S. C., & Juhl, H. J. (1995). Values, Environmental Attitudes, and Buying of Organic Foods. Journal of Economic Psychology. 16. Pg. 39-62.
Karna, J., Hansen, E., & Heikki, J. (2003). Social Responsibility in Environmental Marketing Planning. European Journal of Marketing. 37 (5/6). Pg. 848-871.
Kerin, Roger, Steven Hartley, & William Rudelius (2008). Marketing. McGraw-Hill/Irwin. Ed. 9.
Kotabe, Masaaki and Kristiaan Helsen (2007). Global Marketing Management. Wiley.
Kotler, Philip and Kevin Keller (2008). Marketing Management. Prentice Hall. Ed. 13.
Kotler, Philip & Nancy R. Lee (2007). Social Marketing: Influencing Behaviors for Good. Sage Publications, Inc. Ed. 3.
Lamb, Charles W., Joseph F. Hair, & Carl McDaniel (2007). Marketing. South-Western College Pub. Ed. 9.
Lunt, P. (1995). Psychological Approaches to Consumption. In D. Miller, Acknowledging Consumption: A Review of New Studies. Routledge. Pg. 238-263.
Matheson, Christie (2008). Green Chic: Saving the Earth in Style. Sourcebooks, Inc.
Martin, B., & Simintiras, A. C. (1995). The Impact of Green Product Lines on the Environment: Does What They Know Affect How They Feel? Marketing Intelligence & Planning. 13 (4). Pg. 16-23.
McDaniel, S., & Rylander, D. H. (2001). Strategic Green Marketing. Journal of Consumer Marketing. Pg. 4-10.
Miller, D. (1995). Acknowledging Consumption: A Review of New Studies. Routledge.
Minton, A. P., & Rose, R. L. (1997). The Effects of Environmental Concern on Environmentally Friendly Consumer Behavior: An Exploratory Study. Journal of Business Research. 40 (1). Pg. 37-48.
Ottman, J., Stafford, E., & Hartman, C. (2006, June). Avoiding Green Marketing Myopia. Environment. Heldref Publications. 48 (5). Pg. 23-36.
Peter, J. Paul & James Donnelly (2008). Marketing Management. McGraw-Hill/Irwin. Ed. 9.
Polonsky, M. J. (1995). A Stakeholder Approach to Designing Environmental Marketing Strategy. Journal of Business & Industrial Marketing. 10 (3). Pg. 29-46.
Polonsky, M. J. (1994, November). An Introduction to Green Marketing. Electronic Green Journal.
Pride, William M. & O. C. Ferrell (2007). Marketing. South-Western College Pub. Ed. 14.
Roberts, J. A. (1996). Green Consumers in the 1990s: Profile and Implications for Advertising. Journal of Business Research. 36 (3). Pg. 217-231.


Roberts, J. A., & Straughan, R. D. (1999). Environmental Segmentation Alternatives: A Look at Green Consumer Behavior in the New Millennium. The Journal of Consumer Marketing. 16 (6). Pg. 558.
Shrum, L. J., McCarty, J. A., & Lowrey, T. M. (1995). Buyer Characteristics of the Green Consumer and their Implications for Advertising Strategy. Journal of Advertising. 24 (2). Pg. 71-82.
Solomon, Michael R., Greg Marshall, & Elnora Stuart (2008). Marketing: Real People, Real Choices. Prentice Hall. Ed. 5.
Stoneman, P., Turner, W., & Wong, V. (2005). Marketing Strategies and Market Prospects for Environmentally-Friendly Consumer Products. British Journal of Management. 7 (3). Pg. 263-281.
Wasik, J. F. (1996). Green Marketing and Management: A Global Perspective. Blackwell Publishing.
Wikipedia. (2008, September 27). Supermarkets in the United States. Retrieved October 01, 2008, from http://en.wikipedia.org/wiki/Supermarkets_in_the_United_States
Wiser, R., & Pickle, S. (1997, September). Green Marketing, Renewables, and Free Riders: Increasing Customer Demand for a Public Good. Berkeley, California.
Yusuf, F., & Brooks, G. (2004, December). An Empirical Examination of Domestic Fuel and Power Consumption in New South Wales: Marketing Implications for Greenhouse Gas Reduction. The Australian and New Zealand Marketing Academy Conference. Wellington.


APPENDIX

Consumer Survey Questionnaire

“How Green Marketing Affects Asian­American Consumer Behavior”

1. Which one best describes your nationality or ethnicity? (Circle one)

A. Chinese B. Filipino C. Indian D. Other*

*If answer to question 1 is “D. Other”, you may discontinue answering the survey.

2. To which age group do you belong? (Circle one)

A. 18­26 years old B. 27­35 years old C. 36­50 years old D. Other*

*If answer to question 2 is “D. Other”, you may discontinue answering the survey.

3. To which gender group do you belong? (Circle one)

A. Male B. Female

4. Which one best describes your location in the United States? (Circle one)

A. Northeast B. West C. Midwest D. South

5. Which one best describes your educational level? (Circle one)

A. High School/GED Diploma B. 2-4 year college degree C. Master's Degree or Above


Instructions: Please rate the statements below according to your personal preference. Check the appropriate column that best describes your response to each statement.

Question | Strongly Disagree | Disagree | Somewhat Disagree | Neither Agree Nor Disagree | Somewhat Agree | Agree | Strongly Agree | Not Applicable

6. I believe that my culture plays a role in my consumer behavior.
7. I believe that my purchasing behavior is influenced by my ethnicity.
8. I am aware of environmental issues around me.
9. I am concerned with the environmental quality (air quality, pollution, etc.) in my community.
10. I am willing to participate in environmentally friendly activities in my community.
11. I make my own decision in purchasing a product.
12. I am familiar with the concept of green products.
13. I would be more likely to buy green products if I knew or felt that other people were also supporting them.
14. I am willing to support and buy products from companies and businesses that are environmentally responsible.
15. I refuse to buy products from companies that are accused of being unfriendly to the environment (polluters).
16. I would be more likely to buy green products if I read information about what makes a product or service more environmentally friendly.
17. I am willing to spend more money on green products and services.
18. I am aware that green products are more environmentally friendly.
19. I try to choose green products whenever available.
20. I realize that the products and services I buy may affect the environment.
21. I think that my green consumer behavior will make a difference in saving the environment.
22. I am proud to be a green consumer in the supermarket.
23. The places I shop make it easy for me to identify green products.
24. Advertising makes it easy for me to identify green products.


25. I believe that people are not informed enough about green products being environmentally friendly.
26. I believe that companies, in general, are not advertising enough about their green products.
27. I believe that the local government should be responsible for educating consumers about green products being environmentally friendly.
28. I believe that academia should be responsible for educating consumers about green products being environmentally friendly.
29. I believe that the company itself should be responsible for educating consumers about green products being environmentally friendly.
30. I try to convince other people close to me to buy green products because of their benefits to the environment.
31. I buy green products because of social status.
32. I buy green products because it's the fad.
33. I buy green products because I believe in their cause.


ECONOMIC VALUE CREATION FROM TECHNOLOGY ENTREPRENEURSHIP: A COMPARATIVE ANALYSIS OF SALES GROWTH OF HIGH-TECHNOLOGY INDUSTRIES LISTED IN THE FORTUNE 1000 RANKINGS FROM 2006-2009

Simon S. Mak and Stephen Szygenda Southern Methodist University, USA

ABSTRACT
This paper examines the economic impact and value creation of technology entrepreneurship. Specifically, we compare and contrast the sales growth rates of firms in two high-technology industry categories (Internet Services and Retailing, and Network and Other Communications Equipment) with firms in the approximately 50 "Other" industry categories listed in the Fortune 1000 Top Industries: Fast Growers rankings from 2006 to 2009. This data set was chosen for two reasons: 1) to eliminate the bias of rapid growth rates at the startup and early stages of a firm, and 2) because a company listed in the Fortune 1000 has established itself as a viable ongoing business, so its likelihood of continued success is more probable than that of a firm at the startup phase. The analysis shows that from 2006-2009, the average annual sales growth rates in the Internet Services and Retailing (19.5%) and Network and Other Communications Equipment (14.6%) industry categories were statistically significantly different from the Other (7.5%) industry category. In addition, the analysis shows that technology startups that have reached Fortune 1000 status in these two categories achieved an annual average sales growth rate, at the 90% level of confidence, of 5.8% to 18.0% and 4.9% to 9.2% greater than the average sales growth rate of firms in the Other category, respectively. When measured in relative terms, i.e., proportional growth rate versus the average for Other firms, Internet Services and Retailing firms created economic value between 77% (5.8/7.5) and 240% (18.0/7.5) faster than firms in the Other category, and Network and Other Communications Equipment firms created economic value between 65% (4.9/7.5) and 123% (9.2/7.5) faster. The data seem to indicate that economic value is created at a faster rate by high-technology firms than by non-high-technology firms. This is not a surprising conclusion for technology firms at the startup and early stages of growth; however, it is unexpected that this faster growth rate continues even after the technology startup has achieved Fortune 1000 status. The long-term implications are clear for technology entrepreneurs as well as venture investors, policy makers, and academia: technology startups provide a very efficient and rapid vehicle for creating economic value, and they continue to do so at a rate significantly faster than their non-technology counterparts even after reaching "steady-state" status. Therefore, much more attention should be given to encouraging high-growth technology startup formation and subsequent establishment as a Fortune 1000-like company. This includes fostering a spirit of entrepreneurship among engineering and science students and faculty as well as creating economic incentives for investors and entrepreneurs to pursue high-technology startups.

Keywords: Technology Entrepreneurship, Economic Value Creation, Economic Development, Company Growth

INTRODUCTION
This paper examines the economic impact and value creation of technology entrepreneurship, with a focus on sales growth rate. In doing so, we analyze industry data from the 2006 to 2009 Fortune 1000 Top Industries: Fast Growers rankings. This data set was chosen for two reasons: 1) to eliminate the bias of rapid growth rates at the startup and early stages of a firm, and 2) because a company listed in the Fortune 1000 has established itself as a viable ongoing business, so its likelihood of continued success is more probable than that of a firm at the startup phase.

Technology entrepreneurship is widely recognized as playing a key role in increasing local, regional, and even national wealth and competitiveness (Boocock, Frank, & Warren, 2009). Even so, there is very little research which analyzes and quantifies the economic impact and value creation of technology firms. A goal of this paper is to encourage policy makers (both public and private) to give serious consideration to proactively encouraging and supporting the startup of technology firms. For the engineer or technologist, our hope is to inspire them to see the positive and highly efficient impact that is possible through the creation of a successful technology enterprise, including the creation of wealth for the technology entrepreneur, employees, and the local economy. For academic institutions, this paper is a call to be more proactive in teaching engineering and science students the fundamental concepts of entrepreneurship, as these students will be the key to creating technologies that can radically change and improve society.


THE DATA SET
The 2006-2009 Fortune 1000 Top Industries: Fast Growers Rankings
For this paper, we use industry data rather than the specific company data available from the Fortune 500 website. Specifically, we analyze data in the Top Industries section of the website under the Fast Growers option. For example, the 2009 industry data for fast growers is listed at http://money.cnn.com/magazines/fortune/fortune500/2009/performers/industries/fastgrowers/ and is shown in Table 1. Analogous data exist for 2008, 2007, and 2006. Note that for 2009 and 2008 there are 52 industry categories, and for 2007 and 2006 there are 50. Also, across these four years some industry categories were removed, added, or renamed, but for our analysis no adjustments were needed. We then selected two high-technology industries to analyze: Internet Services and Retailing, and Network and Other Communications Equipment. This decision was made primarily through researching high-technology venture capital websites, where both of these industry categories were very prominent investment types. Other high-technology industries, such as Computer Software, did not make the Fast Growers list, and it is unclear why this is the case. Nevertheless, the methodology is to analyze the average sales growth rates for these two high-technology industry categories and compare them to the average sales growth rates of the approximately 50 remaining industry categories, which we refer to as "Other". Table 2 and Figure 1 summarize the data.

Table 1: Sales Growth Data from 2009 Fortune 1000 Top Industries: Fast Growers Rankings

Rank  Industry                                           %
1     Pipelines                                          27.3
2     Engineering, Construction                          26.8
3     Petroleum Refining                                 25.2
4     Mining, Crude-Oil Production                       23.9
5     Oil and Gas Equipment, Services                    19.8
6     Energy                                             16.4
7     Construction and Farm Machinery                    16.1
8     Metals                                             16.1
9     Food Production                                    15.9
10    Industrial Machinery                               13.3
11    Network and Other Communications Equipment         13.2
12    Railroads                                          12.6
13    Health Care: Insurance and Managed Care            12.1
14    Financial Data Services                            11.8
15    Health Care: Pharmacy and Other Services           11.6
16    Internet Services and Retailing                    11.3
17    Medical Products and Equipment                     9.9
18    Electronics, Electrical Equipment                  9.3
19    Food Services                                      9.3
20    Food Consumer Products                             9.1
21    Food and Drug Stores                               9.0
22    Household and Personal Products                    9.0
23    Chemicals                                          7.5
24    Scientific, Photographic, and Control Equipment    7.1
25    Utilities: Gas and Electric                        7.0
26    Pharmaceuticals                                    7.0
27    Aerospace and Defense                              6.9
28    Health Care: Medical Facilities                    6.9
29    Wholesalers: Health Care                           6.8
30    Information Technology Services                    6.7
31    Wholesalers: Electronics and Office Equipment      6.1
32    Airlines                                           5.4
33    Wholesalers: Diversified                           4.8
34    Telecommunications                                 4.8
35    Specialty Retailers                                4.2
36    Beverages                                          4.2
37    Entertainment                                      3.1
38    Computers, Office Equipment                        2.2
39    Packaging, Containers                              1.0
40    Securities                                         0.9
41    Insurance: Life, Health (mutual)                   -1.2
42    Semiconductors and Other Electronic Components     -2.2
43    General Merchandisers                              -2.9
44    Motor Vehicles and Parts                           -4.4
45    Commercial Banks                                   -5.0
46    Hotels, Casinos, Resorts                           -5.2
47    Insurance: Life, Health (stock)                    -7.8
48    Home Equipment, Furnishings                        -9.2
49    Real Estate                                        -11.1
50    Automotive Retailing, Services                     -11.1
51    Insurance: Property and Casualty (stock)           -12.6
52    Diversified Financials                             -15.9
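
As a sketch of the methodology, the "Other" average can be derived by excluding the two high-technology rows and averaging the rest. Only a few of the 52 Table 1 rows are entered below for brevity; with all rows entered, the 2009 average for the remaining 50 categories comes out to roughly 6.2%:

```python
# A sketch of deriving the "Other" category average from the Table 1 rows.
# Only a few rows are shown; the printed value will differ from the full-table
# figure of about 6.2% until all 52 entries are included.
growth_2009 = {
    "Pipelines": 27.3,
    "Network and Other Communications Equipment": 13.2,
    "Internet Services and Retailing": 11.3,
    "Diversified Financials": -15.9,
    # ... remaining Table 1 rows omitted for brevity
}
high_tech = {"Internet Services and Retailing",
             "Network and Other Communications Equipment"}
others = [v for k, v in growth_2009.items() if k not in high_tech]
print(sum(others) / len(others))
```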

Table 2: Average Sales Growth Data from 2006 to 2009 Fortune 1000 Top Industries: Fast Growers Rankings

Year-Over-Year Sales Growth (%)

Fortune 1000 Top Industries: Fast Growers     2006   2007   2008   2009   Yearly Average 2006-2009
Internet Services and Retailing               23.8   24.2   18.5   11.3   19.5
Network and Other Communications Equipment    15.8   13.9   15.6   13.2   14.6
Others (Average for Approx. 50 Industries)    6.6    10.1   7.3    6.2    7.5

Figure 1: Chart of Average Sales Growth Data from 2006 to 2009 Fortune 1000 Top Industries: Fast Growers Rankings


ANALYSIS OF SALES GROWTH RATES DATA
The analysis of company sales growth data utilizes mathematical techniques for testing the statistical significance of the difference between two means using analysis of variance (ANOVA), and then estimates the confidence interval for the difference between the means of two populations for small sample sizes (n < 30).

Test of Statistical Significance Using ANOVA
The first analysis shows that the average sales growth rates for the Internet Services and Retailing and the Network and Other Communications Equipment industry categories are statistically significantly different from the overall average sales growth rate of the approximately 50 remaining "Other" industry categories. Table 3 shows the ANOVA tables comparing A) the Internet Services and Retailing industry and B) the Network and Other Communications Equipment industry with the Other category, respectively. Based on the two ANOVA tables, the F values for both high-technology industries are greater than the critical F value of 5.98; thus statistical significance is demonstrated. It is also interesting to note that the F value for the Network and Other Communications Equipment industry, 41.3, is much greater than the F value of 14.34 for the Internet Services and Retailing industry. This is due to the high variance in the Internet Services and Retailing industry data.

Table 3: Analysis of Variance (ANOVA) for comparing the Internet Services and Retailing and the Network and Other Communications Equipment industries with the Other category, respectively
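
Since the full ANOVA tables are not reproduced here, the following minimal Python sketch (our illustration, not the authors' code) shows how the F statistics can be approximated directly from the Table 2 yearly growth rates:

```python
# A minimal sketch (our illustration, not the authors' code) approximating the
# Table 3 ANOVA F statistics from the Table 2 yearly growth rates.
from scipy import stats

internet = [23.8, 24.2, 18.5, 11.3]  # Internet Services and Retailing
network  = [15.8, 13.9, 15.6, 13.2]  # Network and Other Communications Equipment
other    = [6.6, 10.1, 7.3, 6.2]     # average of the ~50 "Other" categories

# Each comparison has df = (1, 6); the 5% critical F there is about 5.99.
print(stats.f_oneway(internet, other))  # F close to the reported 14.34
print(stats.f_oneway(network, other))   # F close to the reported 41.3
```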

Calculation of Confidence Interval for the Difference of Two Means
We begin by calculating the key statistical parameters, the mean and variance, for all three data sets, where $(\bar{x}_1, s_1^2, n_1)$ denotes the Internet Services and Retailing industry data, $(\bar{x}_2, s_2^2, n_2)$ the Network and Other Communications Equipment industry data, and $(\bar{x}_3, s_3^2, n_3)$ the Other industry data. Thus we obtain:

For the Internet Services and Retailing industry:

$\bar{x}_1 = 19.45$ and $s_1^2 = 36.27$, where $n_1 = 4$

For the Network and Other Communications Equipment industry:

$\bar{x}_2 = 14.63$ and $s_2^2 = 1.63$, where $n_2 = 4$

For the Other industry:

$\bar{x}_3 = 7.54$ and $s_3^2 = 3.22$, where $n_3 = 4$

Assuming normally distributed populations with equal variances, we can calculate the confidence interval for the difference between the means of the Internet industry and the Other industry.

• Pooled estimator of the common variance: $s_p^2 = \frac{(n_1-1)s_1^2 + (n_3-1)s_3^2}{n_1+n_3-2} = 19.74$

• Point estimator of the standard error of the difference: $s_{\bar{x}_1-\bar{x}_3} = \sqrt{s_p^2\left(\frac{1}{n_1} + \frac{1}{n_3}\right)} = 3.14$

• At the 90% level of confidence, $t_{\alpha/2} = 1.943$, where the degrees of freedom $= n_1 + n_3 - 2 = 6$

• Therefore, the confidence interval for the difference in means is $(\bar{x}_1 - \bar{x}_3) \pm t_{\alpha/2}\, s_{\bar{x}_1-\bar{x}_3} = 19.45 - 7.54 \pm 1.943(3.14) = [5.79, 18.01]$

We now perform the same calculation for the Network and Other Communications Equipment industry and the Other industry.

• Pooled estimator of the common variance: $s_p^2 = 2.42$

• Point estimator of the standard error of the difference: $s_{\bar{x}_2-\bar{x}_3} = 1.10$

• At the 90% level of confidence, $t_{\alpha/2} = 1.943$

• Therefore, the confidence interval for the difference in means is $(\bar{x}_2 - \bar{x}_3) \pm t_{\alpha/2}\, s_{\bar{x}_2-\bar{x}_3} = 14.63 - 7.54 \pm 1.943(1.10) = [4.94, 9.22]$
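
These intervals can be checked numerically; the following is a minimal Python sketch using the Table 2 yearly values, with small rounding differences from the figures above to be expected:

```python
# A minimal sketch verifying the pooled-variance t intervals above from the
# Table 2 yearly growth rates; tiny rounding differences are expected.
import math
from scipy import stats

internet = [23.8, 24.2, 18.5, 11.3]  # Internet Services and Retailing
network  = [15.8, 13.9, 15.6, 13.2]  # Network and Other Communications Equipment
other    = [6.6, 10.1, 7.3, 6.2]     # average of the ~50 "Other" categories

def diff_mean_ci(a, b, confidence=0.90):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    se = math.sqrt(sp2 * (1 / na + 1 / nb))                # SE of the difference
    t = stats.t.ppf(1 - (1 - confidence) / 2, na + nb - 2)
    diff = ma - mb
    return diff - t * se, diff + t * se

print(diff_mean_ci(internet, other))  # approximately (5.8, 18.0)
print(diff_mean_ci(network, other))   # approximately (4.9, 9.2)
```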

SUMMARY OF RESULTS
The analysis shows that from 2006-2009, the average annual sales growth rates in the Internet Services and Retailing (19.5%) and Network and Other Communications Equipment (14.6%) industry categories were statistically significantly different from that of the Other (7.5%) industry category. In addition, the analysis shows that technology startups that have reached Fortune 1000 status in the Internet Services and Retailing and Network and Other Communications Equipment industry categories achieved an annual average sales growth rate, at the 90% level of confidence, of 5.8% to 18.0% and 4.9% to 9.2% greater than the annual average sales growth rate of firms in the Other industry category, respectively. When measured in relative terms, i.e., proportional growth rate versus the average for Other firms, Internet Services and Retailing firms created economic value between 77% (5.8/7.5) and 240% (18.0/7.5) faster than firms in the Other category, and Network and Other Communications Equipment firms created economic value between 65% (4.9/7.5) and 123% (9.2/7.5) faster.

The results of this research provide a basis for investigating additional questions, including:

• Analyzing all high-technology industries to determine which demonstrate the fastest growth within the high-technology sector, since our primary interest is in technology entrepreneurship and its economic impact;
• Analyzing company age information to derive statistics for determining how fast a technology startup typically achieves Fortune 1000 status, and thus estimating the "payback period" for investments in economic value creation; and
• Expanding the Fortune 1000 data set to include global, i.e., non-U.S., firms. It is clear that global technology entrepreneurship is an economic driving force across geographic boundaries; hence, how this is taking place and what could enhance the success of these efforts becomes a research question of major global importance.


CONCLUSIONS AND LONG-TERM IMPLICATIONS
The data seem to indicate that economic value is created at a faster rate by high-technology firms than by non-high-technology firms. This is not a surprising conclusion for technology firms at the startup and early stages of growth. However, it is unexpected that this faster growth rate continues even after the technology startup has achieved Fortune 1000 status. In 2009, this meant that the technology firm generated sales of over $1.72 billion (Fortune 500 rankings website).

The long-term implications are clear for technology entrepreneurs as well as venture investors, policy makers, and academia: technology startups provide a very efficient and rapid vehicle for creating economic value, and they continue to do so at a rate significantly faster than their non-technology counterparts even after reaching "steady-state" status. Therefore, much more attention should be given to encouraging high-growth technology startup formation and subsequent establishment as a Fortune 1000-like company. This includes fostering a spirit of entrepreneurship among engineering and science students and faculty as well as creating economic incentives for investors and entrepreneurs to pursue high-technology startups.

REFERENCES
Boocock, G., Frank, R., and Warren, L. (2009). 'Technology-based entrepreneurship education: meeting educational and business objectives'. The International Journal of Entrepreneurship and Innovation, Vol 10, pp. 43-53.
Fortune 500 Rankings website: http://money.cnn.com/magazines/fortune/fortune500/2009/index.html


SUSTAINABLE DEVELOPMENT IN TOURISM INDUSTRY CONTEXT IN TAIWAN

Chih­Wen Wu National Chung Hsing University, Taiwan

ABSTRACT
The purpose of this research was to develop and test a conceptual framework for sustainable development in a tourism industry context, addressing both the integration of the social, economic, and ecological elements of sustainable development and its contextual nature. Resource-based view theory was used to model the driving force, state, and response indicators of sustainable development for the tourism industry. Data was collected from the official census developed by the Taiwan Tourism Bureau. Indicators such as employment in the tourism industry, expenditures attributed to the tourism industry, air and water quality, tourism service, and hotel issues were used in the study. Structural equation modeling on the AMOS software platform was used to estimate and test the hypothesized relationships. The research results will be discussed.

Keywords: Sustainable Development, Tourism Industry, Resource­Based View Theory, Structural Equation Modeling

1. INTRODUCTION
A common criticism of sustainable tourism development is that there is no consistently agreed-upon theoretical framework from which a scientific understanding can be built (Cocklin, 1995). Without guidance from theory that is verified through testing, a theoretical framework or model can be used inappropriately and lead to poor planning. Any tourism destination without an adequate plan for development, one that addresses the economic as well as the social and environmental functions of the industry, is under-prepared for the impacts of visitors, catastrophic events, and market forces.

Without an understanding of these potential impacts on the environmental-economic fabric of an industry, the sustainability of that industry is questionable. Therefore, a need exists to understand the complex interplay between the economic, environmental, and social dynamics of an industry. Furthermore, Brundtland (1987) warned that persistent ignorance of the inseparability of these elements would constitute a mistake by the global community, and that human needs must be understood. Many approaches, however, tend to focus on only one aspect of a system's overall sustainability, either environmental or economic (Cooper and Vargas, 2004). Mathieson and Wall (1982) recognized the scope of tourism impacts to exist in the economic, physical, and social arenas. Cocklin (1995) argued that efforts have been superficial and omit reference to the social dimensions of sustainability. Twining-Ward (1999) interpreted this apparent lack of attention to the social aspects of sustainable tourism development as an impediment to moving sustainability from principles to policy making.

Efforts to create universal principles of sustainable tourism development have also come under criticism. Sustainable systems, according to Meyer and Helfman (1993), do not generalize at the global scale but are adaptable to local conditions. This study addresses the contextual nature of sustainable tourism development. It does so by using a set of sustainability indicators, developed through the Delphi method by the World Tourism Organization, to examine various issues of sustainability at a destination. Taiwan was examined using these indicators because the country has a number of attractive destinations within its borders and a nascent tourism product development effort.

Resource-based view theory guides the formation of a conceptual framework fashioned from two concepts. Past research on tourism development has focused primarily on the perspective of economic or social impacts. Although this approach has resulted in a wealth of knowledge, any interconnectedness between economy and environment is only assumed. Looking at sustainable tourism development from a resource perspective is more complex, so an empirical study makes an important contribution to knowledge.
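
The abstract notes that the hypothesized driving-force/state/response relationships were estimated with structural equation modeling in AMOS. AMOS is a graphical tool; purely as an illustrative analogue (our assumption, not the author's code, variable names, or data), a comparable measurement-and-structural specification could be written in the Python semopy package with simulated stand-in indicators:

```python
# A purely illustrative SEM sketch in semopy; latent names and indicator
# columns are assumptions, and the data below are simulated stand-ins.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(0)
n = 200
data = pd.DataFrame(
    rng.normal(size=(n, 4)),
    columns=["employment", "expenditure", "air_quality", "service_quality"],
)

desc = """
# Measurement model: latent factors and their observed indicators
DrivingForce =~ employment + expenditure
State =~ air_quality + service_quality
# Structural model: hypothesized path between latents
State ~ DrivingForce
"""

model = Model(desc)
model.fit(data)          # maximum-likelihood estimation by default
print(model.inspect())   # parameter estimates, SEs, and p-values
```

The "=~" lines define the latent measurement models and the "~" line the structural path, mirroring how such a model would be drawn as a path diagram in AMOS.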


2. LITERATURE REVIEW
Sustainable development has received intense attention in both academic research and the mass media over the last 20 years. The concept grew out of dissatisfaction with entrenched policies of continuous economic growth and unequal distribution of benefits and costs (Bramwell and Lane, 1993; Hardy, Beeton and Pearson, 2002). Similarly, sustainable development in the tourism industry has been found difficult to define (Swarbrooke, 1999). The term could be defined as a form of tourism sustained over a period of time (Butler, 1999). Accordingly, sustainable tourism is tourism that meets the needs of today's tourists without taking away from future generations the resources necessary to fulfill their own needs. Thus, controversy exists over a definition of sustainable tourism development.

The most widely quoted definition of sustainable development is the one provided by the Brundtland Report, which says that "sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs" (World Commission on Environment and Development, 1987, p. 43). According to Wall (1997) and Hunter (1995), two important ingredients are included in this statement: human needs and environmental limitations. For the World Commission, the major objective of development is to satisfy human needs and aspirations for a better quality of life for all people.

In other words, sustainable development means long-term economic sustainability within a framework of long-term ecological sustainability, plus the issue of equity (Woodley, 1992). Indeed, the tension between the economy and the natural environment was the dominant dilemma addressed by the Brundtland Report (Ding and Pizam, 1995; Garrod and Fyall, 1998; Wall, 2002). However, there are other dimensions which deserve to be sustained, such as culture (Craik, 1995; Wall, 1997; Butler, 1998). Farrell (1992) also understood sustainable development as the need to find a balance in the development system between economy, environment, and society.

Mitlin (1992) suggested that sustainable development has two components: the meaning of development and the conditions necessary for sustainability. Social, moral, ethical, and environmental concerns (Ingham, 1993) and local empowerment (Wall, 1993) were incorporated into the concept. Today, sustainable development is generally viewed as a process that improves people's living conditions (Bartelmus, 1986). It involves broader concerns about the quality of life, such as life expectancy, educational attainment, access to basic freedoms, nutritional status, and common welfare (Pearce, Barbier and Markandya, 1990). Accordingly, the definition of development has been broadened to encompass a continuous and global process of human development guided by the principle of self-reliance, one which embraces economic, socio-cultural, and environmental as well as ethical considerations (Wall, 1997; Sharpley, 2002).

Coccossis (1996) suggested that sustainable development for tourism is understood variously, depending on perspective. It can be regarded as "economic sustainability of tourism", in which the basic goal is the viability of tourist activity. Here, the emphasis is placed on the need to achieve a balance between commercial and environmental interests for the sake of ensuring the perpetuation of tourism itself (Butler, 1993). Tourism, however, is not the only user of resources. The appropriation of resources in the narrow interests of the tourism industry may not be compatible with the best interests of the broader community (Wall, 1997).

Sustainable development, adopting a multi-sector perspective on development, requires holism and an appreciation of the interconnectedness of phenomena (Wall, 1997; 2002). It may incorporate tourism as part of the strategy to achieve sustainability (Tosun, 2001). This implies that the tourism industry should not seek its own perpetuity at the cost of other sectors. Tourism development should be made consistent with the general tenets of sustainable development by determining specific principles (Stabler and Goodall, 1996; Twining-Ward, 1999). In other words, specific principles should be developed to guide tourism operation in a sound direction.

3. PROPOSED MANAGEMENT
The most current forms of sustainability are based on ideas of resource management that preclude excessive consumption in order to promote inter-generational equity and responsibility. The view that tourism growth is benign has increasingly been negated as empirical evidence demonstrates the serious social, environmental and economic impacts tourism can bring to a nation or community (Dogan, 1989; King, Pizam and Milman, 1993; Wang and Miko, 1997). Tourism is an ancient human activity. Some industry academicians, accounting for every sector and sub-sector with a role to play in providing services to the tourist, see tourism as the largest industry in the world (Goeldner and Ritchie, 2003). In summary, a nation may find it necessary to limit the extent of the negative impacts associated with tourism.


The tourism industry has been identified as one of the largest and fastest growing industries (Miller, 1990; Hunter, 1995; McMinn, 1997). For some developed or developing countries, the tourism industry makes up a critical component of local, regional and national economies, contributing significantly to employment creation, GDP growth and foreign exchange earnings. The notion of sustainable development has accordingly raised common concern among policy makers, academic researchers and industry practitioners (Hunter, 1995). As a result, development at all levels is increasingly being remodeled along the lines of sustainable development (Farrell, 1999).

Empirical studies show that sustainable development strategies provide opportunities for multinational companies (MNCs) to deal with complicated issues as well as to gain competitive advantage. Sharma and Vredenburg (1998) examined how proactive environmental firms can build organizational capabilities that yield competitive advantages. Rondinelli and Berry (2000) demonstrated how environmental programs contribute to the pursuit of sustainable development objectives. Sharma, Vredenburg and Westley (1994) used a case study approach to describe the role of an MNC in a host country's development, while Moser (2001) used statistical analysis to demonstrate how the incorporation of sustainable business practices can benefit both firms and host countries in Latin America.

4. METHODOLOGY
Structural equation modeling will be employed as the statistical approach for analyzing hypothesized relationships between variables and indicators, providing a means of testing the relationships outlined in the conceptual framework. This method of statistical analysis allows researchers to identify where important relationships exist. Each indicator recommended by the World Tourism Organization and available from the Taiwan Tourism Bureau was assigned to a driving force, defined in the structural equation model as a latent (unobserved) variable, and the relationships among these driving forces were examined through structural equation modeling. These indicators include, but are not limited to: employment in the tourism industry, the ratio of individuals employed in the tourism industry to overall employment, air quality, drinking water quality, availability of tourism services in Taiwan, hotel issues and demographic information.
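
To make the modeling step concrete, the sketch below shows how a structural equation model of this general kind could be specified in Python using the open-source semopy package. It is a minimal illustration only: the latent "driving force" names, the indicator column names and the data file are hypothetical placeholders, not the author's actual specification or data.

    # Minimal SEM sketch (illustrative only; all names are placeholders).
    import pandas as pd
    from semopy import Model

    data = pd.read_csv("taiwan_indicators.csv")  # hypothetical indicator data

    model_desc = """
    # Measurement model: each latent driving force is reflected by indicators.
    Economic    =~ tourism_employment + employment_ratio
    Environment =~ air_quality + drinking_water_quality
    Services    =~ tourism_service_availability + hotel_capacity

    # Structural model: hypothesized relationships among the driving forces.
    Environment ~ Economic
    Services    ~ Economic + Environment
    """

    model = Model(model_desc)
    model.fit(data)         # maximum-likelihood estimation by default
    print(model.inspect())  # parameter estimates, standard errors, p-values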

5. EVALUATION OF PROPOSED MANAGEMENT
The results of the empirical analysis illustrate where important relationships between elements of a tourism destination (Taiwan) exist and how these influence common concepts of sustainable tourism development in managing a tourism product. The research therefore addressed two prevalent sustainable tourism development issues: the lack of a theoretical framework that enables the researcher to incorporate the economic, social and environmental elements of a system, and the contextual nature of sustainable tourism development. These issues are addressed through the application of resource-based view theory as a theoretical framework for understanding the interactions of sustainable tourism development.

This study investigates sustainable tourism development with the expectation that it becomes a useful solution for addressing the negative impacts of tourism. Results from the investigation will be used to identify and categorise resource-based tools/applications and describe their potential uses in destination management for sustainable tourism. Additionally, current approaches to using these tools in destination management will be investigated and evaluated. A research framework will also be developed to guide destination managers in selecting the best management strategy and public policy for their respective destinations. Finally, it is anticipated that these results will be used by destination managers and destination management organisations as part of their strategy for dealing with sustainability issues of tourism destinations.

6. CONCLUSION
In summary, the objective of this study was to address the apparent lack of a theoretical framework for sustainable tourism development, to integrate the social, economic, and ecological elements of sustainable tourism development, and to address the contextual nature of sustainable tourism development. Resource-based view theory was used as a foundation for examining the tourism industry in a holistic manner. General indicators of tourism sustainability, as suggested by the World Tourism Organization, were operationalised as a realistic model for the Taiwan tourism industry context as a starting point for sustainable tourism development. Ideally, the conceptual framework and theoretical model developed for this study will have implications for tourism business planners and policy decision makers. If tourism industry planners, or those charged with participating in the economic development and growth associated with a tourism product, have a simple and easy-to-use model, they may be better able to inform decisions made about development actions. Future modeling of sustainable tourism development will serve to verify the framework and move it from a reflection of reality to a predictive approach.

REFERENCES
Bartelmus, P. (1986) Environment and Development. Boston: Allen and Unwin.
Bramwell, B. and Lane, B. (1993) Sustaining tourism: An evolving global approach. Journal of Sustainable Tourism 1 (1), 1-5.
Butler, R. (1990) Alternative tourism: pious hope or trojan horse? Journal of Travel Research 28 (3), 91-96.
Butler, R.W. (1991) Tourism, environment and sustainable development. Environmental Conservation 18 (3), 201-9.
Butler, J.R. (1992) Ecotourism: Its Changing Face and Evolving Philosophy. Paper presented to the IUCN IV World Congress on National Parks and Protected Areas.
Butler, R.W. (1993) Tourism - An Evolutionary Perspective. In J.G. Nelson, R. Butler & G. Wall (eds.) Tourism and Sustainable Development: Monitoring, Planning, Managing, 26-43. Waterloo: Heritage Resources Centre, University of Waterloo.
Butler, R.W. (1995) Seasonality in tourism: issues and problems. In A.V. Seaton et al. (eds.) Tourism: the State of the Art. London: Wiley.
Butler, R.W. (1998) Sustainable Tourism - Looking Backward in Order to Progress? In C.M. Hall & A.A. Lew (eds.) Sustainable Tourism: A Geographical Perspective, 25-34. New York: Addison Wesley Longman Ltd.
Butler, R.W. (1999) Sustainable Tourism: A State-of-the-Art Review. Tourism Geographies 1 (1), 7-25.
Clarke, J. (1997) A framework of approaches to sustainable tourism. Journal of Sustainable Tourism 5, 224-233.
Coccossis, H. (1996) Tourism and Sustainability: Perspectives and Implications. In G.K. Priestley, J.A. Edwards & H. Coccossis (eds.) Sustainable Tourism? European Experiences, 1-21. Wallingford: CAB International.
Cocklin, C.R. (1995) Methodological problems in evaluating sustainability. Environmental Conservation 16 (4), 343-351.
Cooper, P.J. and Vargas, C.M. (2004) Implementing Sustainable Development: From Global Policy to Local Action. Rowman and Littlefield.
Craik, J. (1995) Are There Cultural Limits to Tourism? Journal of Sustainable Tourism 3 (2), 87-98.
Ding, P. & Pigram, J. (1995) Environmental Audits: An Emerging Concept in Sustainable Tourism Development. The Journal of Tourism Studies 6 (2), 2-10.
Dogan, H.Z. (1989) Forms of adjustment: Sociocultural impacts of tourism. Annals of Tourism Research 16 (2), 216-36.
Farrell, B. (1992) Tourism as An Element in Sustainable Development: Hana, Maui. In V. Smith & W. Eadington (eds.) Tourism Alternatives, 115-132. Philadelphia: University of Pennsylvania Press.
Farrell, B.H. (1999) Conventional or Sustainable Tourism? No Room for Choice. Tourism Management 20, 189-191.
Fyall, A. and Garrod, B. (1998) Heritage tourism, pricing and the environment. Insights 9, A155-159.
Garrod, B. and Fyall, A. (1998) Beyond the rhetoric of sustainable tourism? Tourism Management 19 (3), 199-212.
Goeldner, C. and Ritchie, J. (2003) Tourism: principles, practice, philosophies. New Jersey: John Wiley & Sons, Inc.
Hardy, A.L., Beeton, S.J.R. & Pearson, L. (2002) Sustainable Tourism: An Overview of the Concept and its Position in Relation to Conceptualisations of Tourism. Journal of Sustainable Tourism 10, 475-496.
Hjalager, A. (2002) Repairing innovation defectiveness in tourism. Tourism Management 23, 465-474.
Hunter, C. (1995) On the need to re-conceptualise sustainable tourism development. Journal of Sustainable Tourism 3 (3), 155-65.
Hunter, C. (1997) Sustainable tourism as an adaptive paradigm. Annals of Tourism Research 24 (4), 850-67.
Ingham, B. (1993) The Meaning of Development: Interactions Between New and Old Ideas. World Development 21 (11), 1803-1821.
King, B., Pizam, A. & Milman, A. (1993) Social Impacts of Tourism: Host Perception. Annals of Tourism Research 20, 650-665.
Liburd, L.J. (2005) Sustainable tourism and innovation on mobile tourism services. Tourism Review International 9, 107-118.
Mathieson, A. & Wall, G. (1982) Tourism: Economic, physical and social impacts. Essex: Longman Scientific and Technical.
Mathieson, A. & Wall, G. (1986) Tourism: Economic, physical and social impacts. New York: Longman.
McMinn, S. (1997) The challenge of sustainable tourism. The Environmentalist 17 (2), 135-141.
Meyer, J.L. and Helfman, G.S. (1993) The ecological basis of sustainability. Ecological Applications 3 (4), 569.
Miller, G. (2003) Consumerism in sustainable tourism: a survey of UK consumers. Journal of Sustainable Tourism 11 (1), 17-39.
Miltin, D. (1992) Sustainable Development: A Guide to the Literature. Environment and Urbanization 4, 111-124.
Moser, P. (2001) Glorification, Disillusionment or the Way into the Future? The Significance of Local Agenda 21 Processes for the Needs of Local Sustainability. Local Environment 6 (4), 453-467.
Pearce, D., Barbier, E. & Markandya, A. (1990) Sustainable Development, Economics and Environment in the Third World. Aldershot: Edward Elgar.
Rondinelli, D. and Berry, M. (2000) Environmental citizenship in multinational corporations: Social responsibility and sustainable development. European Management Journal 18 (1), 70-84.
Schianetz, K., Kavanagh, L. & Lockington, D. (2007) Concepts and tools for comprehensive sustainability assessments for tourism destinations: a comparative review. Journal of Sustainable Tourism 15, 369-389.
Sharma, S. and Vredenburg, H. (1998) Proactive Corporate Environmental Strategy and the Development of Competitively Valuable Organizational Capabilities. Strategic Management Journal 19, 729-753.
Sharma, S., Vredenburg, H. and Westley, F. (1994) Strategic Bridging: A Role for the Multinational Corporation in Third World Development. Journal of Applied Behavioral Science 30 (4), 458-476.
Sharpley, R. (2000) Tourism and Sustainable Development: Exploring the Theoretical Divide. Journal of Sustainable Tourism 8 (1), 1-17.
Sharpley, R. (2002) Sustainability: A Barrier to Tourism Development? In R. Sharpley & D.J. Telfer (eds.) Tourism and Development: Concepts and Issues. Clevedon: Channel View Publications.
Stabler, M. & Goodall, B. (1996) Environmental Auditing in Planning for Sustainable Island Tourism. In L. Briguglio et al. (eds.) Sustainable Tourism in Islands and Small States: Issues and Policies, 170-196. London: Pinter.
Swarbrooke, J. (1999) Sustainable tourism management. Oxon: CAB International.
Tosun, C. (1998) Roots of Unsustainable Tourism Development at the Local Level: the Case of Urgup in Turkey. Tourism Management 19 (6), 595-610.
Tosun, C. (2001) Challenge of Sustainable Tourism Development in the Developing World: the Case of Turkey. Tourism Management 22, 289-303.
Twining-Ward, L. (1999) Towards Sustainable Tourism Development: Observations from A Distance. Tourism Management 20, 187-188.
UNWTO (2004) Indicators of sustainable development for tourism destinations: a guidebook. Madrid: World Tourism Organisation.
UNWTO (2007) Another record year for world tourism. Madrid: World Tourism Organisation.
Wall, G. (1993a) International Collaboration in the Search for Sustainable Tourism in Bali, Indonesia. Journal of Sustainable Tourism 1 (1), 38-47.
Wall, G. (1993b) Towards A Tourism Typology. In J.G. Nelson, R. Butler & G. Wall (eds.) Tourism and Sustainable Development: Monitoring, Planning, Managing, 45-58. University of Waterloo: Heritage Resources Centre.
Wall, G. (1997) Is Ecotourism Sustainable? Environmental Management 21 (4), 483-491.
Wall, G. (1998) Impacts of tourism: theory and practice. Tourism Recreation Research 22 (2), 57-58.
Wall, G. (2002) Sustainable Development: Political Rhetoric or Analytical Construct? Tourism Recreation Research 27 (3), 89-91.
Wang, C.Y. & Miko, P.S. (1997) Environmental impacts of tourism on U.S. National Parks. Journal of Travel Research 35 (Winter), 31-37.
Woodley, A. (1992) Tourism and Sustainable Development: the Community Perspective. In J.G. Nelson, R. Butler & G. Wall (eds.) Tourism and Sustainable Development: Monitoring, Planning and Managing, 135-147. University of Waterloo: Heritage Resources Centre.
World Commission on Environment and Development (1987) Our Common Future. Australian edition. Melbourne: Oxford University Press.
World Tourism and Travel Council (WTTC), World Tourism Organization (WTO) and Earth Council (1995) Agenda 21 for the Travel and Tourism Industry: Towards Environmentally Sustainable Development. London: WTTC.


DO CHANGES IN REGULATION HAVE AN IMPACT ON THE NUMBER OF BANK FAILURES?

Harrison C. Hartman1 University of Georgia, USA

ABSTRACT
This paper finds some evidence supporting the hypothesis that deregulation of depository institutions leads to an increase in the number of institutions failing, with a delay or a lag. After reviewing some major changes in regulation and discussing how changes in regulation could have led to changes in the number of institutions failing, I present some preliminary regressions testing whether deregulation leads to an increase in the number of bank failures.

Keywords: Regulation, Deregulation, Financial Crises, Banking

INTRODUCTION
According to Burton and Lombra (2006), the goals of financial regulation are to promote a smooth, efficient financial system allowing financial organizations to earn profits while simultaneously trying to prevent financial crises and minimize the damage during any crises that occur. Over the last century, the United States has experienced many changes in the regulation of financial services. During the Great Depression, Congress introduced new regulations responding to circumstances that were judged to contribute to the financial crisis at that time. The early 1980s brought an era of deregulation in financial services. The deregulation was followed by unsatisfactory results for many commercial banks and savings and loan associations (S&Ls). Around the same time that a wave of commercial bank and S&L failures occurred, Congress enacted a wave of increasing regulation in the late 1980s and early 1990s. After the wave of failures subsided, Congress again allowed deregulatory changes. At the time of this writing, the United States is experiencing another financial crisis, and calls for greater regulation are being made.

At this point, a few questions arise. What were some of the changes in the laws regarding the regulation of depository institutions? How could these changes have created or solved problems for depository institutions and their customers? And are changes in the number of bank failures predictable? The second section of the paper discusses changes in regulation. The third section offers statistical analysis, while the fourth section concludes.

AN OVERVIEW OF CHANGES IN THE REGULATION OF DEPOSITORY INSTITUTIONS
The material in this section owes a great deal to Burton and Lombra (2006). Considerable information is also in Mishkin (2004).

In the 1920s, Congress weighed the advantages and disadvantages of a banking market structure with many smaller­sized banks or fewer but larger banks. The advantages of smaller­sized banks included, according to the thinking at the time, an environment of more competition leading to better service and more financial innovations. Additional perceived advantages were higher rates paid to depositors and lower rates charged on loans, plus less damage per bank failure should a bank become insolvent. For a market structure with fewer banks that tend to be larger in size, advantages were perceived to be a lower probability of a bank failure due to higher rates charged on loans and lower rates paid to depositors. However, if a larger bank would happen to fail, there could be much more damage per failure. Congress passed the McFadden Act of 1927 which prohibited banks from operating branches in more than one state (with the exception of some states allowing state­chartered banks not part of the Federal Reserve System to operate in more than one state) and forced banks to obey the laws of the states in which they operated.

At least some of the experiences following the repeal of the McFadden Act contradict the concerns of the legislators in Congress who passed the McFadden Act. For example, bank profit margins may have actually decreased due to entry of banks and other organizations from other states and other countries into the market.

1 Do not quote without the written permission of the author.


The Glass-Steagall Act of 1933 made three fundamental changes in an effort to solve perceived problems with the United States financial system, problems which were blamed for either causing or at least exacerbating the financial crisis that occurred during the Great Depression. Although some subsequent studies question the following analysis, the thinking at the time was that banks competed so strongly for deposits that they offered higher and higher interest rates on deposits to attract customers. Banks must earn a higher interest rate on loans than the interest rate they pay their depositors (in the absence of fees much greater than current bank fees in the U.S.) to have a chance at earning profits. Thus, analysis in the 1930s reasoned that banks were forced to seek borrowers willing to pay high interest rates. Those borrowers could have been stock market speculators, who suffered tremendous losses between 1929 and 1932, when the Dow Jones Industrial Average fell from 381 to 41 (Burton and Lombra, 2006). Thus, the Glass-Steagall Act (a) separated commercial banking (checking deposits and business loans) from investment banking (initial offerings of stocks and bonds), (b) imposed interest rate ceilings, and (c) established the Federal Deposit Insurance Corporation (FDIC). Although the legislators in Congress in the 1930s could not have known it, some research suggests that the Glass-Steagall Act may have caused unintended problems decades later. The reader should note, though, that despite any problems that may have resulted from it, the financial system in the U.S. created by the Glass-Steagall Act lasted without a large increase in the number of bank failures until the 1980s.

Jumping ahead, the Depository Institutions Deregulation and Monetary Control Act (DIDMCA) of 1980 was passed in an effort to solve problems, some of which could have been caused by the Glass­Steagall Act. As Burton and Lombra (2006) point out, with the creation of money market mutual funds, savers with less than $10,000 (who could thus not purchase T­Bills) were allowed to earn a market­determined interest rate not subject to the Glass­Steagall interest rate ceilings and yet remain relatively liquid. This resulted in commercial banks and other depository institutions losing deposits. Additionally, some federally­chartered banks began to change to state charters to take advantage of (1) lower reserve requirements imposed by state banking authorities and (2) at times less strict regulation. Thus, at least some banks perceived greater profit opportunities with state charters rather than federal charters. Moreover, many depository institutions such as S&Ls were in financial difficulty due in part to bad loans. Thus, DIDMCA began to phase out interest rate ceilings (to help curb the loss of deposits), established uniform and universal reserve requirements (to reduce the incentive for banks to switch charters), and increased asset and liability options for depository institutions (possibly with the hope of increasing profits). This landmark piece of legislation represented the beginning of a period of financial deregulation.

Another critical piece of deregulatory legislation was the Garn-St. Germain Act of 1982. Among other things, this act allowed the issuance of money market deposit accounts and allowed S&Ls to issue junk bonds. Many S&Ls had negative net worth at the time, and legislators opted to give S&Ls the chance to earn greater profits by taking greater risks, with the hope of restoring positive net worth, rather than choosing a taxpayer bailout at the time. Around this time, the required minimum percentage of capital on S&Ls' balance sheets was also lowered with the same goal. Unfortunately, these strategies failed, as S&Ls and other depository institutions later faced even greater losses when S&Ls grew more involved in areas in which they did not have expertise. For more on the S&L crisis, see Burton and Lombra (2006). An important question to ask is: did the changes in regulation in the early 1980s lead to a predictable, quantifiable change in the number of bank failures?

As United States bank failures trended upward from ten per year in 1979 and 1980 to more than 200 in 1987 (www.fdic.gov) and even greater numbers in 1988 and 1989, the Basel Accord of 1988, a twelve­country agreement, created international capital requirements for banks and initiated an era of greater regulation. The capital requirements were greater than those that had been enforced on U.S. banks prior to the accord.

The Financial Institutions Reform, Recovery, and Enforcement Act (FIRREA) of 1989 continued the response to the S&L crisis and the wave of depository institution failures which intensified in the late 1980s. Among the accomplishments of this act, the FIRREA terminated the Federal Home Loan Board (the former regulator of S&Ls) and created the Office of Thrift Supervision to regulate S&Ls. The director of the Office of Thrift Supervision was placed on the FDIC’s board. Another change made by the FIRREA was the dissolving of the Federal Savings and Loans Insurance Corporation (FSLIC) and the creation of the Savings Association Insurance Fund (SAIF). SAIF was given $50 billion and placed under the supervision of the FDIC. This act also restricted commercial real estate loans, terminated junk bond investing, and established stricter capital requirements for S&Ls. Yet another feature of the FIRREA was that the federal government in the U.S. became legally responsible for the solvency of the FDIC for the first time (although Congress may have voluntarily opted for a taxpayer bailout of the FDIC had the FDIC run out of funds prior to the passage of the FIRREA). For more on the FIRREA, see Burton and Lombra (2006).


The FDIC Improvement Act of 1991 attempted to solve the problem of moral hazard in financial intermediation. In general, according to Burton and Lombra (2006), moral hazard occurs when a borrower uses loan proceeds for a purpose that is riskier than the purpose that the borrower gave to the lender when applying for the loan. Moral hazard in financial intermediation occurs because depository institutions can use deposits for projects with greater risk than they would in the absence of deposit insurance. An unintended consequence of the FDIC is that depositors with insured accounts do not monitor banks’ activities as carefully as they would if there were no deposit insurance.

Assuming that depositors allow depository institutions to engage in greater risk taking due to deposit insurance, the greater risk taking may lead not only to higher profits when borrowers repay loans but also greater numbers of insolvencies when borrowers default. Thus, one of the many provisions of the FDIC Improvement Act was to require higher risk depository institutions to pay larger insurance premia. Additionally, greater restrictions were placed on banks with insufficient capital. Moreover, the FDIC Improvement Act ended the FDIC’s doctrine of “too big to fail” for restoring solvency unless one of two conditions held. “Too big to fail” meant that the FDIC would use the purchase and assumption method, where the FDIC would arrange for another depository institution to buy a troubled institution with the FDIC paying the excess of liabilities over assets. After the FDIC Improvement Act, the FDIC would use the other method of restoring solvency (called the payoff method, where the FDIC would pay depositors up to the insurance limit and then close the institution) unless the purchase and assumption method was less expensive than the payoff method or the failure of an insolvent institution would threaten the entire financial system. Burton and Lombra (2006) offer more extensive coverage of the FDIC Improvement Act of 1991.

An initial step toward deregulation may have been the Interstate Banking and Branching Efficiency Act (IBBEA) of 1994 which eliminated most of the restrictions on interstate bank operations established by the McFadden Act. Given that (1) the creation of banking holding companies provided a way around interstate banking restrictions established by the McFadden Act, and (2) a Supreme Court decision in 1985 allowed interstate branching by banks subject to regional banking pacts, the deregulatory effect of the IBBEA was likely not nearly as great as the deregulatory effect of the Financial Modernization Act of 1999 (also known as the Gramm­Leach­Bliley Act) or the Commodity Futures Modernization Act of 2000. For the purpose of the present study, the main feature of the Financial Modernization Act was the official removal of the Glass­Steagall separation of commercial banking from investment banking. (The reader may wish to note that banking holding companies also enabled financial organizations to circumvent some of the Glass­Steagall restrictions on the blending of commercial banking and investment banking activities.) For more on the IBBEA of 1994 and the Financial Modernization Act of 1999, see Burton and Lombra (2006). According to 60 Minutes (2009), the Commodity Futures Modernization Act of 2000 terminated any federal regulation of derivatives and effectively also terminated any state regulation of derivatives. The segment on 60 Minutes explained how the lack of regulation exacerbated the current financial crisis in the United States when sellers of credit default swaps apparently did not set aside funds to cover potential claims and then claims against the sellers soared because people had purchased credit default swaps to collect funds if home­buyers defaulted on mortgages.

Is it mere coincidence that roughly ten years after the passage of the Financial Modernization Act and the Commodity Futures Modernization Act, financial organizations are receiving large sums of bailout funds and greater regulations are being imposed, just as, roughly ten years after the DIDMCA and the Garn-St. Germain Act, the United States entered the Basel Accord, passed the FIRREA and the FDIC Improvement Act, and paid bailout funds? Although the econometric analysis in the next section cannot offer proof, it supports the hypothesis that deregulation could have led to increases in the number of bank defaults in the U.S. in the late 1980s and at the present time.

ECONOMETRIC MODEL
Work has been completed on estimating the probability of an individual bank failing [for example, Barr, Seiford, and Siems (1994)]. However, to the knowledge of the author, not much has been done in terms of forecasting the total number of bank failures. Although insured depository institutions may have vastly different levels of deposits, so that the FDIC's liabilities can differ greatly across failing institutions, assessing the impact of changes in the law on the number of bank failures should be of great interest to the FDIC and policymakers, as the federal government must now guarantee the solvency of the FDIC since the passage of the FIRREA of 1989.

The sample period begins in 1955 and ends in 2008 based on data availability. The reader can request data on the number of bank failures in the United States (50 states and the District of Columbia), also called FAIL in this study, at www2.fdic.gov/hsob/SelectRpt.asp?EntryTyp=30.

Large numbers of bank failures can occur during a financial crisis. According to Burton and Lombra (2006), some factors that can cause a financial crisis (and thus be associated with an increase in defaults on loans made by banks) include a drop in price levels when borrowers have fixed interest rate loans, in part because borrowers have to pay back loans in dollars that have more buying power. Lower price levels could also imply lower wages and salaries, which reduce borrowers' ability to repay loans. Similarly, an increase in interest rates when borrowers have adjustable-rate loans can also increase the probability of a financial crisis. Another factor that can lead to a financial crisis would be a decline in asset values, as lower asset values would reduce the liquidity that borrowers could generate from selling those assets if they encountered difficulty. The degree of layering of financial claims can make a financial crisis worse. For example, if borrowers default on loans from Bank X, Bank X may be forced into bankruptcy. This could create problems for Bank Y if Bank Y loaned funds to Bank X. This could also create problems for Bank Z if Bank Z loaned funds to Bank Y. Each loan adds an extra layer of financial claims.

I define a dummy variable LAW equal to zero from the beginning of the sample period in 1955 through 1979, from 1989 through 1998, and in 2008. LAW equals one from 1980 through 1988 and from 1999 through 2007. The values set to zero correspond to times when greater regulations were in place. The values set to one refer to years trending toward a more deregulated environment. For example, LAW first takes on a value of one in 1980 with the passage of DIDMCA. It remains equal to one until 1989, the year of the Financial Institutions Reform, Recovery, and Enforcement Act. LAW again becomes equal to one in 1999 with the passage of the Financial Modernization Act and remains equal to one until 2008, the year of large financial bailouts and calls for greater regulation.
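
As a concrete check of this coding, the following small Python sketch (my illustration, using only the definition in the text) constructs the LAW dummy over the 1955-2008 sample:

    import pandas as pd

    years = range(1955, 2009)  # sample period, inclusive of 2008
    law = pd.Series(
        [1 if (1980 <= y <= 1988) or (1999 <= y <= 2007) else 0 for y in years],
        index=years, name="LAW")

    # Spot-check the switch points described in the text:
    # 1979->0, 1980->1, 1988->1, 1989->0, 1998->0, 1999->1, 2007->1, 2008->0
    print(law.loc[[1979, 1980, 1988, 1989, 1998, 1999, 2007, 2008]])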

DEBT, or the natural log of total credit market debt from economagic.com, serves as a proxy for the degree of layering of financial claims. Intuitively, the greater is total debt in the economy, the more likely it is that a greater percentage of loans are associated with multiple layers. To convert the quarterly series to an annual series, I use the average of the values from the fourth quarters of each year.

The real interest rate, REAL, is defined as the ten-year constant maturity rate on federal government debt minus the inflation rate. Both the ten-year rate and the inflation rate are expressed as whole-number percentages. Monthly ten-year interest rate data, found at research.stlouisfed.org/fred2, are converted to annual data by averaging the twelve months of data in each year. Inflation rates are calculated from the CPI for all urban consumers (all items), with monthly CPI data found at bls.gov. The inflation rate for a month is defined as 100 multiplied by the difference between the natural log of the CPI in that month and the natural log of the CPI twelve months earlier. Monthly inflation rates are then converted to annual rates by averaging the twelve monthly inflation rates.

The ten­year constant maturity interest­rate data are also used to calculate a yield curve, YC, defined as the ten­year rate less the federal funds rate, both expressed as whole­number percentages. Both of the monthly series for the ten­year interest rate and the federal funds rate are converted to annual interest rates by averaging the twelve monthly rates in each year.
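
The variable constructions in the last three paragraphs can be summarized in a short sketch. This is my illustration, not the author's code: it assumes the monthly CPI (cpi), the monthly ten-year constant maturity rate (gs10), the monthly federal funds rate (fedfunds) and the quarterly total credit market debt (credit_q) have already been loaded as pandas Series with datetime indexes.

    import numpy as np
    import pandas as pd

    def annual_mean(monthly: pd.Series) -> pd.Series:
        """Average the twelve monthly observations within each calendar year."""
        return monthly.groupby(monthly.index.year).mean()

    # Year-over-year inflation per month: 100 * (ln CPI_t - ln CPI_{t-12}).
    infl_monthly = 100 * (np.log(cpi) - np.log(cpi.shift(12)))

    inflation = annual_mean(infl_monthly)           # annual inflation rate
    real = annual_mean(gs10) - inflation            # REAL (REALRT in the tables)
    yc = annual_mean(gs10) - annual_mean(fedfunds)  # YC: ten-year rate less fed funds

    # DEBT: natural log of total credit market debt. The text's quarterly-to-
    # annual conversion is read here as taking each year's fourth-quarter value.
    debt = np.log(credit_q.groupby(credit_q.index.year).last())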

Because each variable likely exhibits a unit root, regressions are run with the variables in first-difference form to avoid the spurious regression problem. Thus, the variables in the regressions are D_FAIL, D_LAW, D_DEBT, D_REALRT, and D_YC (with the letter "D" representing the first difference). The dependent variable is D_FAIL, the change in the number of bank failures. For parsimony, two lags of each variable except the change in the dummy law variable (D_LAW) are used on the right-hand side. I initially use ten lags of D_LAW because the deregulations of the early 1980s and the late 1990s could have had an impact on the number of bank failures with a substantial delay or lag. I also use a constant term because it is often statistically significant at the ten per cent level.
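
A sketch of this specification follows (my illustration): it assembles the first differences and lags and estimates the Table 1-style equation by ordinary least squares with statsmodels, assuming an annual DataFrame df with columns FAIL, LAW, DEBT, REALRT and YC indexed by year.

    import pandas as pd
    import statsmodels.api as sm

    d = df.diff().add_prefix("D_")  # D_FAIL, D_LAW, D_DEBT, D_REALRT, D_YC

    X = pd.DataFrame(index=d.index)
    for col in ("D_FAIL", "D_DEBT", "D_REALRT", "D_YC"):
        for lag in (1, 2):                    # two lags of each variable...
            X[f"{col}(-{lag})"] = d[col].shift(lag)
    for lag in range(1, 11):                  # ...but ten lags of D_LAW
        X[f"D_LAW(-{lag})"] = d["D_LAW"].shift(lag)

    ols = sm.OLS(d["D_FAIL"], sm.add_constant(X), missing="drop").fit()
    print(ols.summary())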

Table 1 below displays the results with two lags of each variable except D_LAW on the right­hand­side. For that variable, ten lags are used, given that it may take years for changes in regulation to impact annual failures. Note that roughly 80 per cent of the right­hand­side variables are statistically insignificant at the ten per cent level. Also note that the adjusted R­squared is relatively low at about 0.25. However, this is not a complete surprise given that the data are differenced.


TABLE 1

Dependent Variable: D_FAIL
LAW Equals One In 1988
Method: Least Squares
Included observations: 43 after adjustments

Variable        Coefficient    Std. Error    t-Statistic    Prob.
C                 -78.37990      47.02781     -1.666671     0.1086
D_FAIL(-1)          0.014597      0.212927     0.068554     0.9459
D_FAIL(-2)         -0.120935      0.197841    -0.611276     0.5468
D_LAW(-1)          18.25571      38.83829      0.470044     0.6426
D_LAW(-2)          -5.133266     46.70246     -0.109914     0.9134
D_LAW(-3)          86.53216      41.29703      2.095360     0.0469
D_LAW(-4)          54.50763      35.46624      1.536888     0.1374
D_LAW(-5)          37.34752      37.16423      1.004932     0.3250
D_LAW(-6)          -2.392378     36.24904     -0.065998     0.9479
D_LAW(-7)         -10.77428      34.35107     -0.313652     0.7565
D_LAW(-8)         -18.27119      33.21362     -0.550111     0.5873
D_LAW(-9)          93.42871      31.81213      2.936890     0.0072
D_LAW(-10)       -100.4985       50.70046     -1.982202     0.0590
D_DEBT(-1)      -1038.236      1010.415       -1.027535     0.3144
D_DEBT(-2)       1854.378      1045.641        1.773437     0.0888
D_REALRT(-1)        4.773914      8.192031     0.582751     0.5655
D_REALRT(-2)      -11.21567       7.806067    -1.436788     0.1637
D_YC(-1)          -13.21855       8.971955    -1.473319     0.1537
D_YC(-2)           13.89778       7.826396     1.775757     0.0885

R-squared            0.581278
Adjusted R-squared   0.267237
Durbin-Watson stat   1.974348

When presenting regression results, I employ a less strict level of statistical significance for D_LAW than for the other variables, due to a strong prior belief about the role of regulation in bank failures and because changes in the law are the focus of this study. After several iterations of removing variables that are clearly not significant at or near the twenty per cent level for D_LAW and at the ten per cent level for other variables, the regression results in Table 2 suggest that, except for the tenth lag of D_LAW, the legal variable has a positive impact on the change in failures. That is, all other things equal, a deregulated banking environment leads to an increase in failures with a delay or a lag. For example, a switch toward deregulation nine years earlier causes an increase in bank failures of approximately 95, significant at both the ten per cent and five per cent levels, ceteris paribus. However, note that the tenth lag has a negative coefficient, implying that more deregulation ten years earlier causes a decrease in failures ten years later. Also note that the real interest rate variable, the yield curve variable, and lagged changes in the number of failures have been eliminated from the regressions because their test statistics and associated probabilities were not statistically significant at the ten per cent level. That is, these variables do not appear to predict future numbers of failures after accounting for changes in the law and changes in the debt variable.
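
The elimination procedure described above can be expressed as a simple loop. The sketch below is one plausible reading of that procedure (my illustration, continuing from the previous snippet): repeatedly refit, and drop the variable that most clearly fails its own retention threshold (twenty per cent for D_LAW lags, ten per cent otherwise), keeping the constant throughout.

    import pandas as pd
    import statsmodels.api as sm

    def limit(name: str) -> float:
        """Retention threshold: 20% for D_LAW lags, 10% for everything else."""
        return 0.20 if name.startswith("D_LAW") else 0.10

    cols = list(X.columns)
    while True:
        fit = sm.OLS(d["D_FAIL"], sm.add_constant(X[cols]), missing="drop").fit()
        pvals = fit.pvalues.drop("const")
        limits = pd.Series([limit(n) for n in pvals.index], index=pvals.index)
        excess = pvals - limits          # how far each p-value exceeds its limit
        if excess.max() <= 0:
            break                        # every remaining variable qualifies
        cols.remove(excess.idxmax())     # drop the clearest failure and refit
    print(fit.summary())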

I kept the second lagged difference of the debt variable (in natural log form) in the regression because it is significant at the ten per cent level. Interpreting the results: given that this variable is in natural log form, a one hundred per cent increase in the debt variable leads to an increase in the number of bank failures of about 784 two years later, all other things equal. Alternatively, a one per cent increase in total debt causes approximately 7.8 additional bank failures with a delay of two years.
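
As a quick arithmetic check of this interpretation (my illustration, not part of the original analysis): with failures in levels and debt in logs, the estimated coefficient is a semi-elasticity, so

    \Delta \mathrm{FAIL}_t \approx \hat{\beta} \, \Delta \ln(\mathrm{DEBT})_{t-2}, \qquad \hat{\beta} \approx 784.0

A one per cent increase in debt gives \Delta \ln(\mathrm{DEBT}) = \ln(1.01) \approx 0.00995, hence \Delta \mathrm{FAIL} \approx 784.0 \times 0.00995 \approx 7.8 additional failures two years later; the "one hundred per cent" reading uses the cruder log-point approximation \Delta \ln(\mathrm{DEBT}) \approx 1.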


TABLE 2

Dependent Variable: D_FAIL
LAW Equals One In 1988
Method: Least Squares
Included observations: 43 after adjustments

Variable       Coefficient    Std. Error    t-Statistic    Prob.
C               -73.98294      30.66926     -2.412284      0.0209
D_LAW(-3)        35.51809      28.38485      1.251304      0.2187
D_LAW(-4)        44.95213      28.52118      1.576097      0.1235
D_LAW(-9)        95.04873      28.85394      3.294134      0.0022
D_LAW(-10)      -88.24887      34.85255     -2.532064      0.0157
D_DEBT(-2)      784.0120      333.8185       2.348618      0.0243

R-squared            0.448857
Adjusted R-squared   0.374378
Durbin-Watson stat   2.089842

In Table 3, I present regression results insisting that lagged variables be significant at the fifteen per cent level for D_LAW and at the ten per cent level for the other variables and the constant term. Thus, I eliminated the third lag of D_LAW. The results are qualitatively similar to those in Table 2 in that a change toward a deregulated environment tends to lead to more bank failures with a lag, except for the tenth lag of D_LAW.

The negative estimated coefficient on the tenth lag of D_LAW, following mainly positive estimated coefficients on earlier lags, may indicate an estimated length of financial crises. For example, a more deregulated environment may produce an increase in failures within three or four years. Given that the dependent variable is in first-difference form, the estimated number of failures could remain around that level but then jump to a higher level after nine years. Finally, in the tenth year, failures begin to subside. This matches empirical observations to some extent. Roughly 9,000 banks failed in the U.S. from 1930 through 1933, a span of four years (Mishkin, 2004). After the creation of the FDIC, another surge occurred from 1935 to 1942, but this surge was much smaller, with no more than 75 failures per year during that period (www.fdic.gov). Yet another wave of failures lasted approximately from 1982 through 1993 (www.fdic.gov). It appears that another wave of failures had already begun by the end of the sample period. Although the first two waves of failures in the 1930s and 1940s occurred before the beginning of the sample period, I mention them at this point to give an estimate of the duration of the waves of bank failures in recent U.S. history. The shortest wave discussed above lasted four years, while the longest wave lasted twelve years. Hence, the negative estimated coefficient on the tenth lagged value of D_LAW could indicate approximately when a surge in bank failures would end following deregulatory legislation, and the estimated positive coefficient on the fourth lag could estimate the beginning of a wave of failures.


TABLE 3

Dependent Variable: D_FAIL
LAW Equals One In 1988
Method: Least Squares
Included observations: 43 after adjustments

Variable       Coefficient    Std. Error    t-Statistic    Prob.
C               -75.12009      30.88316     -2.432397      0.0198
D_LAW(-4)        44.46300      28.73001      1.547615      0.1300
D_LAW(-9)        94.41155      29.06341      3.248468      0.0024
D_LAW(-10)      -88.53862      35.11027     -2.521730      0.0160
D_DEBT(-2)      806.1769      335.8205       2.400618      0.0214

R-squared            0.425534
Adjusted R-squared   0.365064
Durbin-Watson stat   1.996273

A note of caution is in order in that the decision of when to change the regulation dummy variable back to zero could influence the results. The regressions presented in Tables 1 through 3 assume that LAW changes back to zero in 1989 with the passage of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA). The results are somewhat different if the change in LAW is made in 1988, coinciding with the Basel Accord. Table 4 displays results using a similar lag elimination procedure to the one used in generating Tables 1 through 3 but LAW is set back to zero in 1988 rather than 1989. Notice that the first lagged difference of the law variable is statistically significant, and the estimated coefficient is negative. This implies that a change toward deregulation is followed by a decrease in the number of bank failures in the next year. This regression may be capturing statistical correlation or reverse causality in the sample period. Congress passed deregulatory legislation when bank failures were relatively low in the early 1980s before the S&L crisis reached its peak and again in the late 1990s when the number of failures was relatively low. Low or rapidly declining numbers of annual bank failures could foster a political environment conducive to passing deregulatory legislation.

Another important difference is that the variable for the second lagged difference in real interest rates is significant at the ten per cent level (and fairly close to significant at the five per cent level). Unlike the prediction made above, higher real interest rates are followed by a decrease in failures rather than an increase. Certainly, higher ex post real interest rates on loans would allow banks to earn greater profits as long as the higher interest rates do not lead to greater defaults. The variable D_DEBT in the regressions may already capture the impact of an increase in the probability of borrowers defaulting. Thus, an increase in D_REALRT may show the impact of a greater “profit margin” from bank loans on the number of bank failures.

Overall, the balance of the evidence suggests, with a few caveats, that greater deregulation tends to be associated with more failures in the future. The results may be sensitive to the specification of the model. Including data from a stock market index or an estimate of average home values, to see whether decreased asset values lead to more failures, could possibly lead to different conclusions about the impact of changes in regulation on future bank failures. Uncertainty about which variable (or both) to use precludes their use in the present study. To the extent that home values and stock market index values are correlated with output in the economy, real GDP could serve as a proxy. Given that the decision of whether to change the dummy variable from one back to zero in 1988 (the year of the Basel Accord) or in 1989 (the year of the FIRREA) has at least a slight impact on the results (where the case for deregulation leading to bank failures may be clearer if LAW equals one in 1988), the inclusion of home values, stock prices, or real GDP could impact the results. Additionally, given the need to use annual data as dictated by the frequency of bank failure data, the relatively small sample size could influence the results.


TABLE 4

Dependent Variable: D_FAIL
LAW Equals Zero In 1988
Method: Least Squares
Included observations: 43 after adjustments

Variable        Coefficient    Std. Error    t-Statistic    Prob.
C                -72.92266      27.46275     -2.655330      0.0120
D_LAW(-1)        -82.78536      26.17737     -3.162479      0.0033
D_LAW(-3)         61.54379      25.96473      2.370284      0.0236
D_LAW(-4)         37.79819      25.11376      1.505079      0.1415
D_LAW(-5)         47.80838      25.53445      1.872309      0.0698
D_LAW(-9)         76.50927      26.14713      2.926106      0.0061
D_LAW(-10)       -82.77181      29.80115     -2.777471      0.0089
D_DEBT(-1)       774.3332      300.3244       2.578323      0.0144
D_REALRT(-2)      -9.088951      4.790309    -1.897362      0.0663

R-squared            0.627791
Adjusted R-squared   0.540213
Durbin-Watson stat   2.012969

CONCLUSION
After reviewing some major changes in the law, I present econometric results regarding the relationship between deregulation and the number of bank failures. Findings tend to support the hypothesis that deregulatory changes have led to increases in the number of bank failures. Future work could add average home prices, stock market index values, or real GDP to the analysis. It could also develop a theoretical model of the number of bank failures per year based on the information presented here. Given that the goal of financial regulation is ostensibly to allow financial organizations to earn profits while trying to prevent financial crises, future work could also analyze which changes in regulation were least responsible for spurring financial crises and may have actually helped to avoid crises. It could also analyze which changes in regulation were most detrimental to the financial system and the overall economy and further study the reasons for the damage in an effort to avoid similar problems in the future. Future econometric work may acknowledge that structural changes could have impacted the economy over the sample period, thus altering the size of the impact of changes in the law and changes in the other variables on the number of bank failures. Thus, future regressions may divide the sample period into multiple parts for hypothesis testing.

REFERENCES
Barr, R.S., Seiford, L.M. and Siems, T.F. (1994) Forecasting Bank Failure: A Non-Parametric Frontier Estimation Approach. Recherches Economiques de Louvain 60 (4), pp. 417-29.
Burton, M. and Lombra, R. (2006) The Financial System and the Economy: Principles of Money and Banking. Mason, OH: Thomson Southwestern, pp. 231, 237, 246-7, 258, 283-9, 292-4, 307, 310-21.
Mishkin, F. (2004) The Economics of Money, Banking, and Financial Markets. Boston: Pearson Addison Wesley, pp. 231, 270-1.
60 Minutes (2009) Broadcast August 30. Available online at www.cbsnews.com/stories/2008/10/26/60minutes/main4546199.shtml?tag=contentMain;contentBody


AN UNPUBLISHED LETTER OF KEYNES AND ITS RELEVANCE FOR MACROECONOMICS: CLASSIFICATION SYSTEM B22

José Villacís González University San Pablo­CEU, Spain

ABSTRACT
In 1934 a young Spanish economist, Lucas Beltrán Flórez, wrote a long letter to John Maynard Keynes, receiving a brief letter in reply. Given that it would be a further two years before Keynes's General Theory of Employment, Interest and Money appeared, this hitherto unpublished letter is of considerable interest, as it gives various hints as to the scientific and even psychological condition of the two researchers at that very interesting point in time. As the correspondence shows, dogmatic approaches to full employment and macroeconomic equilibrium were beginning to give ground to a new, more flexible and, some would say, intrepid approach which would later lay the basis of modern macroeconomics and its essentially monetarist basis. In his reply to Beltrán's missive, Keynes talks of the Spanish and Indian currencies, which despite not being tied to gold had not suffered major depreciation. This reaffirmed him in his belief that, well managed, a fiduciary currency, floating freely against the price of gold, was not condemned to losing its value. More than this, he says that in a work which he hoped would see the light of day within a year's time, he wanted to explain a fundamental theory that would follow a line somewhat different from that expounded in his previous A Treatise on Money. At bottom, he adds, the essential ideas are the same. The work he was referring to is the General Theory. Keynes foresees that on the whole there would be considerable resistance to these new ideas, but he is convinced that in time both public and academic opinion would come round to the new approach.

Keywords: Devaluation, General Theory, Deterioration, Profit Inflation, New Approach.

INTRODUCTION
This brief commentary is set up as follows: first, the two letters, the initial letter from Lucas Beltrán in November 1934 and Keynes's reply in the same month. The second part discusses the conventional approach to macroeconomics prior to 1934. The third part looks at the progress then being made on the book Keynes mentions and which appeared in 1936.

The essay sticks closely to the facts, i.e. the objective information provided in the letters and the relation between the two letters and the state of economic thought at that time, particularly in relation to the ideas of Keynes. When Beltrán wrote to Keynes, economic theory was poised on the edge of macroeconomics, a branch of the science of which Keynes was the precursor with his three previous works: Indian Currency and Finance, A Tract on Monetary Reform, and A Treatise on Money. Thus the correspondence reproduced here touches on the scientific backdrop, referred to quite extensively by Beltrán, and underlines the new thinking which Keynes was working on and which would see the light of day in the General Theory ....

It should be noted that when Beltrán wrote to Keynes, there were no faculties of economics in Spain. He addressed his comments to Keynes in the latter's role as a professional economist of international fame and prestige in both the academic world and the political arena. Beltrán knew Keynes's work well.

From these two letters, the longish letter from Beltrán and the shorter reply from Keynes, it is possible to glean an idea of what Keynes was thinking at the time. Oddly enough, this is easier to grasp from Beltrán's letter than from that of Keynes himself. As Beltrán presents his breakdown of the key features of the Spanish economy of the day, he cites the thinking of Keynes, with which Beltrán is closely in agreement. "I have become a convinced follower of your theories," he says at the outset of his letter.

In his reply Keynes says that he has embarked on a line of research which is “rather different” from that which Beltrán has read. In actual fact, the differences are more apparent than real, because A Treatise on Money, at that time Keynes’ most recent book, published in 1930, sets the scene and identifies the key factors of what would be the major opus, the General Theory …


BIOGRAPHICAL AND ACADEMIC BACKGROUND OF LUCAS BELTRÁN
Lucas Beltrán Flórez was born in Alcanar in the province of Tarragona in 1911. Until the mid-1940s no Spanish university boasted a faculty of economics, so to pursue his studies Beltrán had to travel. He went to England, where he became a student and researcher at the London School of Economics, a crucial experience which explains his familiarity with the ideological and scientific thought of the day.

One of Beltrán’s great merits was his ability, thanks to his scientific training, to separate the ideological and normative aspects of economic research – in the case of the UK mainly liberal in orientation – from the practical workings of a macroeconomic structure. This ability made him particularly responsive to the conceptual structure that Keynes was trying to erect.

After Keynes, the situation changed. Out­and­out liberals held firm to their belief in free enterprise, while interventionists, of broadly Keynesian convictions, were sceptical of progress driven by market forces alone. In time Beltrán, too, stuck out for the liberal standpoint he had taken to as a youth though, like Keynes, in the special circumstances of the Great Depression he, too, favoured the intervention of the State. The exchange of letters documented in this article took place in 1934, i.e. before the appearance of the General Theory … meaning that we can safely say that the Spanish researcher acted on all occasions in accordance with scientific principles as opposed to political motives.

At the level of research the communication between the London School of Economics, Beltrán’s place of learning, and Cambridge, where Keynes taught, was very fluid, to the point where one could almost call them sister organisations, although there were, it is true, certain ideological differences. Coming from this background it is easy to see how Beltrán received his academic training and how attractive he found the objective truths revealed by Keynes.

BELTRÁN'S LETTER TO KEYNES

November 17th, 1934

Mr. John Maynard Keynes
King's College
Cambridge

Dear Sir,

For several years I have been studying your works and I have become a convinced follower of your theories. Now I am working on a paper on the application of these theories to Spanish modern economic history and to the present Spanish economic situation. This paper will be my thesis for the degree of Doctor in Law, and will be published by the "Institut d'Investigacions Econòmiques" ("Institute for Economic Research") of this town [Barcelona]. I dare to write to you to explain very briefly my views of the question. I should be extremely thankful if you were kind enough to read them and to give me your advice. If you needed any additional information, I should be very glad to send it to you.

The Spanish modern monetary history affords a good example of the functioning of a purely national standard during a long tract of time. After its creation in 1868, the peseta remained an international standard only until 1881. At this date it began to depreciate and has been since then fluctuating, and more often than not depreciating. That has been considered, by the Spanish economists, as a misfortune, while through the application of your theories, I see it, on the whole, as beneficial. Our purely national and fluctuating standard has allowed our prices to follow very often a different course from that of the world prices. We have had a course of prices of our own. This course has been, until the end of the Great War, almost always upwards. Consequently Spain has lived from 1868 to 1922 in a state of chronic Profit Inflation. This Inflation was rather mild before the War, and very violent during the war and the postwar time. Has that been beneficial? On the whole, I believe it has. In 1868 Spain was a country very poor in fixed capital, and it has been therefore worthwhile to sacrifice a part of current consumption for the improvement of investment. Thanks to this Profit Inflation, Spain has been able to modernize her industrial and her agricultural equipment, and to purchase many Spanish and foreign securities which belonged to foreigners. That is to say, she has increased both her Home Investment and her Foreign Investment.

The years from 1868 to 1914 may be divided into two periods, 1898 being the end of the first. During this first period Profit Inflation was caused by the increase in the quantity of money. This increase was created by the Governments to cover their budget deficits, and took the form of silver coins (with a nominal value greater than their intrinsic value) and of bank-notes (issued through the Bank of Spain). Prices rose and the peseta depreciated. Unfortunately we cannot measure the rise of prices prior to 1890 because there are no index numbers before this date. (The lack of past statistical data and the unreliability of the existent is a great obstacle to every economic study about Spain). From 1890 there is a rough index number based on the wholesale prices of seventeen commodities. This index number shows a fairly steady rise from 1890 to 1898. There is every reason to believe that this rise had been going on many years before 1890.


The Spanish and foreign economists who have studied this period have agreed to condemn and bewail the financial policy of the Governments. They are right from their point of view, as they considered the preservation of an international standard at a fixed parity as the necessary aim of any monetary policy. But the application of the theories of your “Treatise on Money” tells us that this policy did much more good than harm. Thanks to it we had a Profit Inflation instead of the Profit Deflation we should have experienced if our currency had been linked to gold, as world gold prices fell from 1873 to 1895.

In 1898 we had the war with the U.S.A. The heavy note issues (always through the Bank of Spain) and the financial panic drove the peseta to a very low level. This provoked a reaction on the part of our authorities and, after the War, a new period began characterized by financial “virtue”. The Governments wanted to raise the exchange value of the peseta, and with this aim, they reduced the fiduciary issue through the reduction of their indebtedness towards the Bank of Spain. Fortunately this policy was not carried as far as its supporters wished. At the same time there was a repatriation of capital owned by Spaniards formerly living in the lost Colonies and now coming home. Consequently the total note issue diminished slightly in the first years of the century, but later on increased till 1914. The increased quantity of money enabled our prices to rise but moderately. As the world gold prices were rising more steeply than ours, this difference occasioned an improvement of our exchanges. Our purely national standard rendered our Profit Inflation, during this period, less violent than that developing in the outside world.

During the war and the post-war years (1914-1920), Spain experienced an acute Profit Inflation. The needs of the belligerent countries and the rise of prices in them determined a big increase of our Foreign Balance. That is to say, our Foreign Investment, and therefore our Total Investment, increased much. The volume of Savings did not certainly rise as much. We had then a Profit Inflation caused by a boom in Foreign Investment. This Profit Inflation needed an increased quantity of money, and our Banking authorities agreed to supply it; the Bank of Spain (with the Government’s permission) increased several times the legal maximum of the note issue. The fundamental difference between the Profit Inflation of the years 1868-1914 and that of the years 1914-1920 lies in the fact that in the former the initial impulse was “on the side of money”, while in the latter it was “on the side of investment”.

In 1921 prices fell heavily. The world normalisation reduced our Foreign Investment, and we had a short Profit Deflation. After that, until the beginning of the present world slump, prices and exchanges suffered fluctuations attributable to different influences, but in 1929 they were more or less at the same level as in 1922. The industrial and trade situations during these years were fairly normal. When in 1929 the fall in the world prices began, the peseta began to depreciate at approximately the same pace, with the happy result of preserving relative stability of our internal prices and the normality of our industrial life. We were doing unconsciously (rather against our will, as the fall of the peseta was then considered a misfortune) what you recommend to do in the Chapter 21 of your “Treatise on Money”.
That lasted until 1932. At this date the peseta exchange ceased to follow the direction of the world prices. It rose and occasioned a fall of the internal prices. Then the effects of the depression began to be felt in Spain.

Turning to the normative side of the question, I should propose as remedies to get over the present depression, some measures tending to raise the price level. These are: the lowering of the bank rate which now stands as high as 5.5 per cent; a monetary policy tending to lower the exchange of the peseta if world prices fall any further; open market operations to expand the volume of circulating money; and a programme of Public Works to be carried by the Government.

As a permanent policy when the depression will be over, I should propose the stabilisation of our prices. If England or any other leading financial country should adopt such a policy, we could link our currency to hers, as you suggest in your “Tract on Monetary Reform”. If no country attempted to stabilize her prices, Spain ought to carry out this policy by herself, through the methods you propose. But previously it would be necessary to make some reforms in our monetary and banking systems. Silver should be demonetized and the peseta should be no longer a bimetallic standard. And the Statute of the Bank of Spain ought to be reformed in order to charge our Bank of issue with all the duties of a Central Bank. I must recognize that none of the measures I propose are likely to be put into practice, given the state of the Spanish monetary opinion. However, I deem it useful to propose and to defend them.

Yours sincerely,
Lluc Beltrán

My address: Lluc Beltrán, Institut d’Investigacions Econòmiques, Laietana, 18, Barcelona (Spain)

COMMENTARY ON BELTRÁN’S LETTER

It is important to situate the letter from Beltrán and Keynes’s reply, both written in 1934, as reflecting the state of economic thinking before the appearance of Keynes’s General Theory … in 1936. Beltrán had studied in England, a country as familiar with Keynes’s previous work as any other. Moreover, Beltrán says that his thesis is primarily an application of Keynes’s theories to the Spanish economy. In A Tract on Monetary Reform Keynes had begun to break away from the mechanics of a purely quantitative approach, preferring monetary intervention designed to achieve price stability on one hand and increased competition on the other. Beltrán’s work focused on the Spanish currency, the peseta, and how it had performed in the period from its adoption in 1868 to 1881. Subsequently its value had fluctuated significantly, depreciating more often than not. Beltrán, following Keynes’s ideas, says that he saw this depreciation as an advantage.

Beltrán says that in the open market there had been an increase in prices that had led to profit inflation, and that this allowed the Spanish economy to modernise in the period from 1868 to 1922. Modernisation had manifested itself in an increase in fixed capital and the acquisition of foreign securities. On this last point, we see confirmation of the benefits accruing from opening up an economy to the outside world, benefits that Keynes had underlined in 1923 in the third chapter of A Tract on Monetary Reform and had gone on to connect with investment and domestic and international interest rates in A Treatise on Money. Spanish governments had funded their budget deficits by printing money, a practice that led to inflation and the rapid depreciation of the peseta. Beltrán says that in consequence Spain experienced profit inflation as opposed to the profit deflation that would have occurred if the peseta had been tied to gold, given that world gold prices declined between 1873 and 1895. This argument is a combination of the ideas of Keynes and Hume.

He then goes on to refer to the exchange rate, specifically to revaluation: he explains that the war between Spain and the United States obliged the Bank of Spain to print banknotes in order to finance the budget deficit, and that the ensuing financial panic produced a massive depreciation of the peseta. Faced with this, the Spanish authorities tried to reduce the fiduciary issue of money in the hope of encouraging the peseta to revalue. He says clearly that, fortunately, this attempt was not taken to the extremes originally proposed, a stance that fits with Keynes’s ideas on the role of money in world trade and its performance in generating income. These ideas Keynes had formulated almost four years previously in A Treatise on Money.

The explanations Beltrán gives in the following lines are of extraordinary interest because they reflect Keynes’s arguments so faithfully that they could easily have been written by the English economist himself. Beltrán refers to the marked profit inflation occurring in Spain between 1914 and 1920. Supplying the needs of the belligerents in the Great War gave Spain a significant advantage in its balance of payments, combined with an increase in prices, and as a consequence the country was able to increase its foreign investment and, with it, its total investment. Despite this, the level of savings did not rise in step. The argument that, to our mind, is typically Keynesian comes when Beltrán distinguishes the two periods of profit inflation, 1868-1914 and 1914-1920. The difference, says Beltrán, “lies in the fact that in the former the initial impulse was ‘on the side of money’, while in the latter it was ‘on the side of investment’.”

When world prices began to fall in 1929 the peseta fell with them and a certain price stability set in. That was the year of the start of the Great Depression, but in Spain up until 1932 prices remained relatively stable and the exchange rate fell. In this downturn in the exchange rate, says Beltrán, we (Spaniards) did “what you recommend to do in the Chapter 21 of your ‘Treatise on Money’”. Subsequently, domestic prices began to fall, too, and Spain began to suffer the effects of the worldwide slump.

This proposal for coping with a major depression is striking, as it goes to the very heart of Keynesian economics, which Keynes had already formulated to some extent but would go on to develop fully and clearly two years later, in 1936, with the publication of his General Theory of Employment, Interest and Money. What Beltrán is saying in this letter of November 1934 is as follows. First, you reduce the bank rate. Next, you adopt a monetary policy designed to depress the exchange rate of the peseta. At the same time you undertake open-market operations designed to expand the volume of money in circulation. Up to this point Beltrán is giving a list of measures typical of monetary policy. He then lists quite openly fiscal measures such as beginning a public works programme financed by government. At this stage there cannot be the slightest doubt that Lluc Beltrán is totally convinced of the merits of the Keynesian conception of economic theory and policy. Consequently, this letter is a faithful reflection of the state of economic thought at that time.

Lastly, Beltrán proposes government intervention in the monetary system to stabilise prices in Spain, measures that would require a reform of the system. Silver should be abandoned as a monetary benchmark and the peseta should cease to be a bimetallic standard. In addition, the statutes governing the Bank of Spain should be amended to ensure it functions as a central bank. Beltrán ends the letter by proposing a monetary authority to exercise monetary policy. These are all ideas, techniques, policies and schemes that we find in the books of J.M. Keynes written before 1936, the year of publication of the General Theory ….


KEYNES’S REPLY TO BELTRÁN

46 Gordon Square
Bloomsbury

November 29, 1934

Señor Lluc Beltrán
Institut d’Investigacions Econòmiques
Laietana, 18
Barcelona, Spain

Dear Señor Beltrán,

I am much obliged to you for [your] most interesting letter of the 17th November. I think it very likely that the theory of mine to which you refer would find an excellent application and illustration in the Nineteenth Century History of the Spanish Currency; very much on the same lines as the application which I myself made very briefly to the case of the Indian currency. Both Spain and India offer examples of currencies which, on one hand, were not tied to gold, yet, on the other hand, were never seriously debauched, as in the case of the South American currencies.

I find nothing which I am competent to criticise in the outlines of your argument which you give me in your letter. I would, however, mention that in a work of mine which will probably come out in about a year’s time I deal with the underlying theory on what, at any rate on the surface, would appear to be lines rather different from those adopted in my Treatise on Money. Under the surface, however, the essential ideas are the same.

I expect you are right in anticipating considerable resistance to ideas of this order in your country. It naturally takes a few years for both public and academic opinion to accustom itself to a new line of approach. But once a beginning is made, it is remarkable how rapidly opinion in these matters is capable of changing. The explanation doubtless is that conventional views on these matters, though tenaciously held, are really rooted in very little.

Very truly yours,
J.M. Keynes

COMMENTARY ON KEYNES’S LETTER

Keynes included with his reply a copy of his work of 1913, written when he was 30 years old: Indian Currency and Finance, which he sees as having significant parallels with the Spanish currency in the 19th century. The parallel lies in a currency that was not tied to the gold standard but whose supply was managed in such a way as to maintain a balance with gold. Though neither the Indian nor the Spanish currency was a slave to gold, neither was ever “seriously debauched”. This is significant for two reasons, as we shall see below. Firstly, because the supply of a fiduciary currency is capable of being managed perfectly well. Secondly, because a currency not tied to gold can be used as a tool to stimulate buying and selling, or what we today would call aggregate demand, and to fund investment.

Keynes says there is nothing in Beltrán’s arguments that he is competent to criticise but does not elaborate on what those arguments are. However, this negation of competence is belied to a certain extent by his comment on a forthcoming work, due to be published about a year hence. Says Keynes (my italics): “I would, however, mention that in a work of mine which will probably come out in about a year’s time I deal with the underlying theory on what, at any rate on the surface, would appear to be lines rather different from those adopted in my Treatise on Money. Under the surface, however, the essential ideas are the same.”

Keynes adds a philosophical comment on ideas seen in time and from the standpoint of method: ideas, though correct, take time to establish themselves, but once they do, public and academic opinion are quick to follow suit. Here we have one of Keynes’s real strengths, his ability to match the demands of science with the skills required of a public servant. Keynes was a man endowed with considerable flexibility and practical sense.

In 1934, when Keynes wrote this letter to Beltrán, the GDP of the United States had fallen in real terms by nearly 25%, unemployment was rife and the waves of depression had extended to Europe, including Britain. In completing his book Keynes was obliged not only to describe the nature and physiology of the economic phenomenon but also to come up with solutions. This is part and parcel of Keynes’s nature: his ability to construct a theory, with the structure of a scientific artefact, out of the need to solve real problems.

Keynes appears perfectly confident that his work, the General Theory …, will make headway, slowly at first, but at increasing speed, until recognised both by scientists and by the man in the street. And he was quite right. His new book, complex, novel, unorthodox, was literally devoured by the universities, while politicians and businessmen also tried to grapple with it. From reading these two letters we can recapture some of the importance of the publication of that book, then germinating in Keynes’s mind. As Keynes says, his A Treatise on Money had already made some inroads into a theory of income: the generation of income, spending on consumption and on investment, the purely monetary aspects of the preference for holding money, the anticipated profits of capital, etc. He was obviously immersed in the birth of the model of income determination contained in the General Theory ….

The fields on which management of the money supply can act are those of generating and modifying income, as Beltrán shows when, at the end of his letter, he proposes that interest rates be reduced and public works be used to offset the decline in private investment. This is the key to the question: the alchemy by which money transmutes into income and production, which Beltrán proposes to achieve through monetary and fiscal policies. If Beltrán is capable of imagining these modern and unorthodox remedies, it is because he has read Keynes. He makes no bones about it: “For several years I have been studying your works and I have become a convinced follower of your theories.”

CONCLUSION

This essay presents the correspondence of a young Spanish economist who had read Keynes in the course of his studies in England. The letters were written in 1934, two years before the appearance of Keynes’s General Theory … but some years after the publication of Indian Currency and Finance (1913), A Tract on Monetary Reform (1923) and A Treatise on Money (1930). Beltrán’s letter is a development of Keynes’s ideas, with which he is perfectly familiar and which he applies to the Spanish economy.

Keynes’s reply is short and, on the surface, general. What is very interesting, however, is to see how completely Beltrán had absorbed Keynes’s ideas. His understanding and approval of them is demonstrated, with quotations and references, by his proposals for abandoning a fixed exchange rate tied to gold, the benefits to be obtained from depreciating the currency, his belief in the advantages of profit inflation compared with deflation, and his support for active monetary and fiscal policies.

Seen in this light, Keynes’s reply, though apparently general, can be clearly understood when he says that he is thinking about a new publication on lines of thought rather different from those of A Treatise on Money but containing, in fact, the same ideas extended to new fields: the formation of income, the preference for money in its multiple manifestations, the origin of interest, the effectiveness of investment, and the importance of government spending.

REFERENCES

The references below cover only the works of Keynes on which this article, based on his letter, draws:

Indian Currency and Finance. London: Macmillan and Co., Ltd., 1913.
The Economic Consequences of the Peace. New York: Harcourt, Brace and Howe, 1920.
A Revision of the Treaty. New York: Harcourt, Brace and Co., 1922.
Monetary Reform. New York: Harcourt, Brace and Company, 1924.
The Economic Consequences of Sterling Parity. New York: Harcourt, Brace and Co., 1925.
Can Lloyd George Do It? An Examination of the Liberal Pledge. London: The Nation and Athenaeum, 1929.
A Treatise on Money. New York: Harcourt, Brace and Company, 1930.
Essays in Persuasion. New York: Harcourt, Brace and Co., 1932.
The Means to Prosperity. New York: Harcourt, Brace and Company, 1933.

ABOUT THE AUTHOR

José Villacís González, member of the American Economic Association, lecturer in macroeconomics at Universidad CEU San Pablo, Paseo Juan XXIII, 6, 28040 Madrid, Spain.


APPENDIX

Author Certification of Originality


Copy of Beltrán’s Original Letter to Keynes


Copy of Keynes’s Original Reply to Beltrán


TEN BIG EMERGING MARKETS AND THE SMALL FIRM EFFECTS

William Cheng
Troy University/Global Campus, USA

INTRODUCTION

Emerging markets outperformed the US stock market during the 1990s. This strong performance and the increasing availability of information have led to increased interest from both academics and practitioners. Divecha et al. (1992) present statistical evidence on performance and risk, and discuss the portfolio implications of investing part of a portfolio in emerging markets.

My analysis concentrates on the performance of the ten big emerging markets2, as identified by the Department of Commerce (DOC) under the Clinton Administration. These markets are Argentina, Brazil, China, India, Indonesia, Mexico, Poland, South Africa, South Korea and Turkey [see Business America (1995)].

There are reasons to expect that a pattern observed in developed markets will also appear in emerging markets; the converse, however, is also possible. There may be an ‘internationalization of markets’ consistent with technological advances, the ease of capital flows across borders, and improved information resources. Certainly for major markets, capital flows freely and easily from one market to another, so-called hot money. Emerging markets, on the other hand, often place restrictions on capital flows, have their own rules of taxation that may discriminate against or discourage foreign investors, and in some cases discriminate between domestic and foreign investors through classes of shares3. These individual country factors may cause differences between patterns observed in developed markets and those of developing or emerging markets. Further, there seems to be a behavioral pattern that may affect pricing and the risk/return tradeoff we have come to expect in developed markets. In some emerging markets, domestic investors view buying stocks as more akin to gambling than to investing. Traditional patterns of saving involve placing money in an account similar to a savings account, coupled with an attitude of frugality as a means to accumulate wealth. If investors treat the stock market as a gambling arena, the link between risk and return may be broken. Lack of liquidity would then prevent arbitrage trading from reestablishing the link.

Table 1: Selected Market Indices of the Ten Big Emerging Markets and the DJIA and SPX Indices for the US

Country        Index                                       Ticker
Argentina      The Argentina Stock Market General Index    ARSMGNRL
Brazil         The Brazilian Stock Markets I-Senn Index    BZSMIBSN
China          The China CLSA Index B                      CLSACHB
India          The Bombay Sensitivity Index                BSI
Indonesia      The Jakarta Composite Index                 JCI
Mexico         The Mexico Bolsa Index                      MEXBOL
Poland         The Warsaw Stock Exchange Equity Index      PWSMWIG
South Africa   The Johannesburg All Market Index           JOHMKT
South Korea    The Korea Composite Index                   KOSPI
Turkey         The Turkey Stock Market Indices Composite   TKSMCOMP
U.S.           The Dow Jones Industrial Average            DJIA
U.S.           The Standard and Poor's 500 Index           SPX

Source: Bloomberg

2 Divecha et al. define an emerging market as one which (1) has securities that trade in a public market, (2) is not a developed market, (3) is of interest to global institutional investors, and (4) has a reliable source of data.
3 China and Indonesia provide examples of discrimination between domestic and foreign investors. China issues B shares to foreign investors. Indonesia does not allow greater than 49% foreign ownership; when ownership reaches 49%, foreign shares begin to be issued.


TEN BIG EMERGING MARKETS: SOURCES OF DATA

The Department of Commerce (DOC) under the Clinton Administration identified ten emerging economies with the highest growth potential over the next decade. Table 1 lists these ten big emerging markets and selected market indices, as well as the DJIA and SPX4 indices for the US market. These indices are used to compare the market performance of the emerging markets in the 1990s.

Data and analytics are obtained through Bloomberg's 20,000-company international universe. Bloomberg provides 24-hour, instant and current financial, economic and political information covering markets around the globe. It also provides analytics, historical data, up-to-the-minute news reports, economic statistics and political commentaries. Constant upgrades and enhancements of the system are among the most valuable attributes of the Bloomberg service.

TEN BIG EMERGING MARKETS: PERFORMANCE

Table 2 shows the total returns over the entire period 1990-1996, as well as the annualized returns, for selected indices carried on Bloomberg for the ten biggest emerging markets (BEMs). Without considering inflation risk and exchange rate risk, Brazil ranked number one among the ten BEMs, with the highest total return and annualized return. Indonesia and China ranked 9 and 10, with negative annualized returns in the 1990s of -1.5% and -20.2%, respectively.
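The "Annualized Equivalent Return" column in Table 2 appears to follow the usual geometric annualization of a total return earned over a given number of months. A minimal sketch, using the Argentina row of Table 2 as a check:

```python
def annualized_return(total_return_pct: float, months: int) -> float:
    """Geometric annualization of a total return earned over `months` months."""
    growth = 1.0 + total_return_pct / 100.0
    return (growth ** (12.0 / months) - 1.0) * 100.0

# Argentina in Table 2: 710.6% total return over 71 months.
print(round(annualized_return(710.6, 71), 1))  # -> 42.4, matching the table
```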

Table 3 shows the annual returns of the selected ten big emerging market (BEM) indices between 1990 and 1995. The data indicate that equity prices in the emerging markets are much more volatile than those of more mature markets such as the United States. Argentina's stock index grew by roughly 450-500% in each of 1990 and 1991, and then crashed to a negative 31% return in 1992. Poland's Warsaw Stock Market Index provides another example of roller-coaster pricing in emerging markets. In 1993, Poland's market had an exceptional increase of almost fifteen hundred percent, followed by a negative sixty-four percent drop, and then a bounce back to positive returns in 1995. Appendix B1 shows graphs demonstrating the price volatility of these indices. A visual comparison of the price charts for the BEMs versus the two US indices shows substantially greater volatility for the BEMs, except for South Africa. Most markets exhibit impressive total returns during the 1990 to 1996 period, but these high returns are accompanied by high volatility.

Six developing countries were selected: Brazil, China, Indonesia, Mexico, South Africa, and Turkey. I then rank all firms from each of the six countries into five size groups, as sketched below. Table 4 shows some attributes of the portfolios formed for each country: the number of firms (securities) in each country portfolio, the average P/E ratio, the average beta, the average dividend yield, the average market capitalization, and the average twelve-month return. Table 5 shows some basic attributes of each portfolio ranked by market capitalization. First, we observe that most firms in the emerging markets have a market value (size) between $10 million and $500 million. Second, there appears to be no relation between size and beta in some of the markets (for example, Turkey and Indonesia), while some markets show an inverse relation between beta and size (for example, South Africa). Third, only in South Africa does the portfolio of small firms outperform the larger firms in total return. Both the Indonesian and Brazilian markets show a reversal of the size effect: the larger firms earn better returns than their smaller counterparts. Finally, we combine all firms in the six markets and rank them by size; all three observations above continue to hold, as shown in Table 5-G.
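A minimal sketch of that portfolio formation step, assuming a pandas DataFrame of Bloomberg-style firm records. The column names (`country`, `mkt_cap`, `ret_12m`, `beta`) are hypothetical stand-ins, and the capitalization breakpoints are those shown in Table 5:

```python
import pandas as pd

CAP_EDGES = [0, 10e6, 500e6, 1e9, 2e9, float("inf")]
CAP_LABELS = ["0-10M", "10M-500M", "500M-1B", "1B-2B", "2B-Up"]

def size_bucket_summary(firms: pd.DataFrame) -> pd.DataFrame:
    """Group each country's firms into Table 5 size buckets and average their attributes."""
    firms = firms.copy()
    firms["size_bucket"] = pd.cut(firms["mkt_cap"], CAP_EDGES, labels=CAP_LABELS)
    return firms.groupby(["country", "size_bucket"], observed=True).agg(
        n_firms=("mkt_cap", "size"),
        avg_ret_12m=("ret_12m", "mean"),
        avg_beta=("beta", "mean"),
        avg_mkt_cap=("mkt_cap", "mean"),
    )
```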

4 The DJIA is the Dow Jones Industrial Average, an index of 30 of the largest US firms. The SPX is the Standard and Poor's 500, a value-weighted index of 500 large firms.


Table 2: Nominal Returns for 1990 to 1996 in the Ten Big Emerging Markets

Country        Index      Period      Months   Total Return (1990-1996) (%)   Annualized Equivalent Return (%)   Rank
Argentina      ARSMGNRL   4/90-3/96   71       710.6                          42.4                               4
Brazil         BZSMIBSN   9/91-3/96   54       222,000                        454.5                              1
China          CLSACHB    6/92-3/96   45       -57.0                          -20.2                              10
India          BSI        4/90-3/96   71       323.4                          27.6                               6
Indonesia      JCI        4/90-3/96   71       -8.3                           -1.5                               9
Mexico         MEXBOL     4/90-3/96   71       404.5                          34.85                              5
Poland         PWSMWIG    4/91-3/96   59       1103.3                         65.8                               3
South Africa   JOHMKT     4/90-3/96   71       122.6                          14.5                               7
South Korea    KOSPI      4/90-3/96   71       25.9                           4.0                                8
Turkey         TKSMCOMP   2/92-3/96   49       1727.7                         103.7                              2
U.S.           DJIA       4/90-3/96   71       110.3                          13.4
U.S.           SPX        4/90-3/96   71       95.1                           11.9

Source: Bloomberg

Table 3: Annual Returns in the Ten Big Emerging Markets (1990-1995)

Country        Ticker     1990    1991    1992    1993     1994    1995
Argentina      ARSMGNRL   450.6   498.5   -30.9   58.1     34.3    8.9
Brazil         BZSMIBSN   n/a     154.8   616.7   9068.9   489.7   18.6
China          CLSACHB    n/a     n/a     -20.6   -8.4     -33.9   -9.6
India          BSI        43.7    134.4   16.5    48.9     -9.6    18.9
Indonesia      JCI        -13.3   -26.3   -0.7    110.5    -26.7   33.4
Mexico         MEXBOL     40.1    160.6   1.8     67.8     -24.7   44.9
Poland         PWSMWIG    n/a     -1.1    12.1    1483.1   -63.6   69.2
South Africa   JOHMKT     -19.9   41.0    -4.8    38.3     6.3     35.9
South Korea    KOSPI      -29.1   7.1     -0.1    38.8     -4.1    -3.1
Turkey         TKSMCOMP   n/a     n/a     19.6    354.9    25.5    96.2
US             DJIA       5.6     17.8    2.7     20.0     -3.4    40.4
US             SPX        4.5     18.9    7.4     9.7      -2.3    35.2

Source: Bloomberg

Table 4: Attributes of Selected Emerging Markets Portfolios

Country        Number of Firms   Average P/E   Average Beta   Average Dividend Yield (%)   Average 12-Month Return (%)(1)   Average Capitalization (MM of US$)
Brazil         578               87.49         0.2            9.85                         14.30                            604
China          403               -12.32        n/a            1.71                         -0.12                            200
Indonesia      419               8.17          0.57           2.64                         53.09                            969
Mexico         488               -26.56        n/a            1.00                         103.44                           1.03
South Africa   567               n/a           0.85           2.23                         57.78                            1.94
Turkey         224               20.51         0.36           2.03                         132.02                           189

Source: Bloomberg
(1) Between 4/23/95 and 4/22/96.


EMERGING MARKETS: SIZE AND RETURN

To further explore the relationship between size and return in the emerging markets, I run a regression based on the Fama-MacBeth (1973) model. The model is specified as follows:

Rij = α0 + α1 Betaij + α2 LSizij

where:

Rij is the 12-month return of company j in country i.
Betaij is the beta of company j in country i.
LSizij is the natural log of the market value of company j in country i.

We expect a negative size coefficient (α2 < 0) if there is a size effect in each country. We selected the 155 firms from the Indonesian market which have complete data for the regression analysis. A sketch of this cross-sectional regression appears below.
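A minimal sketch of that regression for a single country's cross-section. As above, the input names are hypothetical stand-ins for the Bloomberg fields:

```python
import numpy as np
import statsmodels.api as sm

def size_effect_regression(ret_12m, beta, mkt_cap):
    """Regress 12-month returns on beta and log market value, as in the model above."""
    X = sm.add_constant(np.column_stack([beta, np.log(mkt_cap)]))
    return sm.OLS(ret_12m, X).fit()

# A small-firm effect as documented for the U.S. would appear as a significantly
# negative coefficient on log size; Table 6 instead reports +13.51 (t = 3.78).
```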

Table 6 shows the preliminary results. A few interesting observations can be made. Contrary to most comparable statistics in studies of U.S. stock returns [Banz (1981), Keim (1983)], the R2 indicates that beta explains little of the variation in returns. Beta's insignificance is probably due to the highly speculative nature of trading in most emerging markets. For instance, trading in some Asian markets, such as Indonesia and Taiwan, more resembles gambling than investing. Therefore, there is little or no relationship between risk and return.

Table 5: Annual Returns of Selected Emerging Market Portfolios Ranked by Market Capitalization

5-A. BRAZIL
Market Cap Range (US$)   Firms   12 Month Return (%)   P/E      Dividend Yield   Price/Book   Beta   Capitalization (MM)
0.00 - 10M               9       -22.85                -4.15    0.70             7.56         NA     6.48MLN
10M - 500M               64      26.49                 -58.24   1.43             138.01       NA     181MLN
500M - 1B                6       60.09                 70.27    1.44             238.26       NA     752MLN
1B - 2B                  5       98.57                 14.02    12.64            694.04       NA     1.26BLN
2B - Up                  7       159.27                18.91    44.02            63.80        0.93   3.94BLN

5-B. CHINA
Market Cap Range (US$)   Firms   12 Month Return (%)   P/E     Dividend Yield   Price/Book   Beta   Capitalization (MM)
0.00 - 10M               1       -3.53                 4.70    12.12            0.00         NA     9.43MLN
10M - 500M               290     1.50                  11.89   1.89             2.35         NA     133MLN
500M - 1B                10      -8.23                 20.32   1.70             3.61         NA     725MLN
1B - 2B                  5       -13.79                28.57   2.07             4.87         NA     1.43BLN
2B - Up                  1       53.64                 8.47    4.55             2.13         NA     2.16BLN

5-C. INDONESIA
Market Cap Range (US$)   Firms   12 Month Return (%)   P/E     Dividend Yield   Price/Book   Beta   Capitalization (MM)
0.00 - 10M               17      -5.67                 5.34    6.04             0.44         0.31   6.57MLN
10M - 500M               186     48.43                 9.88    3.04             1.39         0.66   116MLN
500M - 1B                19      61.34                 16.53   1.49             3.12         0.55   676MLN
1B - 2B                  8       66.94                 25.92   1.38             6.22         0.50   1.37BLN
2B - Up                  6       103.06                39.07   0.71             8.56         0.55   5.94BLN


5-D. MEXICO
Market Cap Range (US$)   Firms   12 Month Return (%)   P/E      Dividend Yield   Price/Book   Beta   Capitalization (MM)
0.00 - 10M               10      -23.66                -0.76    0.80             0.20         NA     6.72MLN
10M - 500M               78      104.82                -15.89   1.13             1.44         NA     196MLN
500M - 1B                15      99.91                 -33.87   0.67             1.67         NA     760MLN
1B - 2B                  13      144.20                25.43    0.74             1.95         NA     1.55BLN
2B - Up                  12      113.46                17.53    0.71             2.62         NA     3.38BLN

5-E. SOUTH AFRICA
Market Cap Range (US$)   Firms   12 Month Return (%)   P/E   Dividend Yield   Price/Book   Beta   Capitalization (MM)
0.00 - 10M               97      176.96                NA    3.26             NA           0.62   4.23MLN
10M - 500M               259     43.06                 NA    2.73             NA           0.68   172MLN
500M - 1B                36      36.82                 NA    1.99             NA           0.77   753MLN
1B - 2B                  27      43.70                 NA    2.13             NA           0.99   1.52BLN
2B - Up                  29      91.05                 NA    2.04             NA           1.04   5.51BLN

5-F. TURKEY
Market Cap Range (US$)   Firms   12 Month Return (%)   P/E      Dividend Yield   Price/Book   Beta   Capitalization (MM)
0.00 - 10M               14      78.01                 7.66     1.82             2.45         0.35   4.59MLN
10M - 500M               107     92.75                 22.71    2.04             6.91         0.36   128MLN
500M - 1B                6       17.82                 15.24    4.40             6.09         0.36   667MLN
1B - 2B                  2       21.39                 133.18   11.53            5.74         0.37   1.47BLN
2B - Up                  1       162.74                -69.78   0.00             19.62        0.34   2.26BLN

5-G. ALL MARKETS
Market Cap Range (US$)   Firms   12 Month Return (%)   P/E      Dividend Yield   Price/Book   Beta   Capitalization (MM)
0.00 - 10M               149     63.19                 -4.43    2.20             2.58         0.19   5.53MLN
10M - 500M               983     30.57                 -100.9   1.65             12.62        0.15   177MLN
500M - 1B                92      53.63                 89.10    1.60             5.44         0.75   749MLN
1B - 2B                  61      75.58                 15.17    6.93             14.99        0.95   1.40BLN
2B - Up                  55      124.48                19.50    21.51            13.41        0.99   4.71BLN

Source: Bloomberg

Moreover, contrary to findings for U.S. markets (Keim, Banz, Reinganum), the significant t-statistic on a positive size coefficient indicates a reversal of the small firm effect in Indonesia's market: returns seem to increase with firm size rather than decline with it. This may be explained by institutional interest in large firms, due to their liquidity and information availability. The positive size relationship and the lack of a risk/return relationship probably reflect the immaturity of the markets and may well disappear as the markets mature.

PRELIMINARY FINDINGS

In the search for anomalies in the ten big emerging markets, we have potentially discovered some unusual results. Specifically, there seems to be no risk/return relationship of the kind we would expect and that exists in developed markets. Second, the relationship between size and rate of return, which has been shown to be consistently negative in developed markets, is positive for Indonesia. We initially interpret this as indicating market immaturity, and expect it to change as the market matures.


We expect to find further supporting evidence as we analyze the other markets. China, for example, will likely produce the same result, but South Africa, a more mature market, should show a positive risk/return relationship and a negative size/return relationship.

Table 6: Regression of Indonesia's Stock Returns on Beta and Size

            Coefficients   Standard Error   t Stat
Intercept   -23.11         17.27            -1.34
Ln(Size)    13.51          3.57             3.78*
Beta        -3.11          2.97             -1.05

Regression Statistics
Multiple R          0.30
R Square            0.09
Adjusted R Square   0.08
Standard Error      70.40
Observations        155

* Significant at the 0.01 level.

REFERENCES

Amihud, Yakov, and Haim Mendelson. “Asset Pricing and the Bid-Ask Spread,” Journal of Financial Economics, 17, No. 2 (Dec. 1986), pp. 223-250.
Arbel, Avner, and Paul Strebel. “The Neglected and Small Firm Effects,” Financial Review, 17, No. 4 (1982), pp. 201-218.
Banz, Rolf W. “The Relationship Between Returns and Market Value of Common Stocks,” Journal of Financial Economics, 9 (1981), pp. 3-18.
Barry, Christopher B., and Stephen J. Brown. “Differential Information and the Small Firm Effect,” Journal of Financial Economics, 13, No. 2 (1984), pp. 283-294.
Blume, Marshall E., and Robert F. Stambaugh. “Biases in Computed Returns,” Journal of Financial Economics, 12 (1983), pp. 387-404.
Brown, Keith C., W. V. Harlow, and Seha M. Tinic. “How Rational Investors Deal with Uncertainty (or, Reports of the Death of Efficient Market Theory Are Greatly Exaggerated),” Journal of Applied Corporate Finance, Fall 1989.
Business America. “Selected Reports from the Big Emerging Markets,” Department of Commerce (Jan. 1995), pp. 11-17.
Chan, K. C., Nai-Fu Chen, and David A. Hsieh. “An Exploratory Investigation of the Firm Size Effect,” Journal of Financial Economics, 14 (1985), pp. 451-471.
Chan, K. C., and Nai-Fu Chen. “Structural and Return Characteristics of Small and Large Firms,” Journal of Finance, 46, No. 4 (Sept. 1991), pp. 1467-1484.
Divecha, Arjun B., Jaime Drach, and Dan Stefek. “Emerging Markets: A Quantitative Perspective,” Journal of Portfolio Management, 19 (Fall 1992), pp. 41-50.
Fama, Eugene, and James MacBeth. “Risk, Return, and Equilibrium: Empirical Tests,” Journal of Political Economy, 81, No. 3 (May/June 1973), pp. 607-636.
Kato, Kiyoshi, and James S. Schallheim. “Seasonal and Size Anomalies in the Japanese Stock Market,” in Japanese Capital Markets: Analysis and Characteristics of Equity, Debt, and Financial Futures Markets, Edwin J. Elton and Martin J. Gruber, eds., Harper & Row, Publishers (1990).
Keim, Donald B. “Size Related Anomalies and Stock Return Seasonality: Further Empirical Evidence,” Journal of Financial Economics, 12 (1983).
Lustig, Ivan L., and Philip A. Leinbach. “The Small Firm Effect,” Financial Analysts Journal, 39, No. 3 (1983), pp. 46-49.
“Research: Evidence on the Small Firm Effect,” Journal of Financial and Quantitative Analysis, 20, No. 4 (1985), pp. 501-516.
Reinganum, Marc R. “The Anomalous Stock Market Behavior of Small Firms in January,” Journal of Financial Economics, 12 (June 1983), pp. 89-104.
Reinganum, Marc R. “Misspecification of Capital Asset Pricing: Empirical Anomalies Based on Earnings Yields and Market Values,” Journal of Financial Economics, 9 (March 1981), pp. 19-46.
Roll, Richard. “A Possible Explanation of the Small Firm Effect,” Journal of Finance, 36, No. 4 (1981), pp. 879-888.
Roll, Richard. “On Computing Mean Returns and the Small Firm Premium,” Journal of Financial Economics, 12, No. 3 (1983), pp. 371-386.
Schultz, Paul. “Personal Income Taxes and the January Effect: Small Firm Stock Returns Before the War Revenue Act of 1917: A Note,” Journal of Finance, 40, No. 1 (1985), pp. 333-343.


Schultz, Paul. “Transaction Costs and the Small Firm Effect: A Comment,” Journal of Financial Economics, 12, No. 1 (1983), pp. 81-88.


AN EXAMINATION OF EMPIRICAL RELATIONSHIP BETWEEN INVESTMENT DECISIONS AND CAPITAL STRUCTURE DECISIONS

Su Tang and Binjie Liu
University of Shanghai for Science and Technology, China

ABSTRACT

This article empirically investigates the interaction between firms’ investment and capital structure decisions. Using S&P 500 companies, it examines, in a world with market imperfections, how and to what extent investment decisions and capital structure choices are interdependent. The extant literature suggests that because of recapitalization costs, asymmetric information, taxes, and default costs of debt, investment decisions and financial decisions interact, but empirical evidence on this proposition is very limited and the direction of causality is inconclusive. In this paper, we construct simultaneous equations models and employ the two-stage least squares (2SLS) methodology to examine whether investment decisions determine, or are determined by, capital structure choices. The tests are undertaken at the cross-sectional, firm-specific level. Our findings support the view that the capital structure choice is “causally prior to” and “exogenous with respect to” investment decisions, which can be interpreted in two ways: first, investment decisions depend on capital structure; second, capital structure choices are independent of investment decisions.

I. INTRODUCTION

In their famous pioneering paper “The Cost of Capital, Corporation Finance, and the Theory of Investment” (1958), Modigliani and Miller provide the “irrelevance proposition” of financial decisions5, which states that under perfect capital market assumptions, the instrument used to finance an investment is irrelevant to the question of whether the investment is worthwhile. Since then, a central question in financial economics has been whether market imperfections establish a linkage between these decisions. Myers (1977) first developed a model in which, in a world with information asymmetries (managers often have more information than is available to investors), the high cost of issuing securities leads to an “underinvestment problem”. Jensen and Meckling (1976), on the other hand, argue that for “cash cow” companies, low debt ratios are associated with agency problems and “overinvestment”. Forty years after the M-M theorem, academic research agrees that in a world with financing frictions, such as recapitalization costs, asymmetric information, taxes, and default costs of debt, the investment decisions of firms depend on their financial decisions.

A number of papers discuss how these frictions affect investment decisions. Some construct static models to examine the investment-financing linkage, i.e., investment and financing decisions are made at a single point in time and are irreversible, in the sense that firms refrain from investing and recapitalizing for a period of time and adjust slowly.6 Examples include Bernanke and Gertler (1989, 1990) and Brennan and Schwartz (1978). Other papers develop models that endogenize either the investment or the financing decision; examples include Hite (1975), Dotan and Ravid (1985), Dammon and Senbet (1988), and Aivazian and Berkowitz (1991). The most recent literature focuses on dynamic rather than static investment and financing decisions and endogenizes both in its models. Mauer and Triantis (1994) show that production flexibility has a positive effect on the value of the interest tax shield and thus increases a firm’s debt capacity; they also find that debt financing, in contrast, has a negligible impact on a firm’s investment and operating policies. Moyen (2000) re-examines the debt-overhang problem7 in a framework where a firm is allowed to recapitalize at any point in time, and where the investment level rather than binary operating policies is used to measure the size of the underinvestment caused by debt financing. She shows that the debt overhang cost is likely to be small.8 Barclay, Morellec and Smith (2003) extend previous studies and document a negative relation between debt capacity and growth options. The empirical relationship between firms’ real decisions and the other aspect of financing decisions, dividend policy, is well examined by Fama (1974) and Smirlock and Marshall (1983). Fama (1974) uses the two-stage least squares (2SLS) methodology, while Smirlock and Marshall (1983) employ causality tests. Both papers provide evidence on

5 Financial decisions include the capital structure decision and the dividend decision, which are treated separately in M-M irrelevance propositions 2 and 3.
6 This discussion follows Dixit and Pindyck (1994).
7 The debt overhang problem was first introduced by Myers (1977).
8 The same conclusion is obtained by Parrino and Weisbach (1999).


M-M’s view: even when imperfections are present in the capital market, the empirical data still support the hypothesis that investment decisions are not influenced by dividend decisions.9 In contrast, two interesting phenomena are associated with the existing literature on the relation between investment decisions and capital structure choices: on the one hand, the theoretical work lacks support from empirical studies; on the other hand, most extant empirical work focuses on the relation between a firm’s leverage choice and the composition of its investment opportunity set rather than investment itself.10

Our paper aims to fill this gap: we develop the hypothesis that firm-level investment decisions and capital structure choices are interdependent in a world with financing frictions, empirically re-examine the relationship to test our interdependence hypothesis, and conclude how and to what extent investment decisions are related to capital structure choices. We explore this research question by directly examining the interaction between the investment decisions and capital structure choices of S&P 500 Composite Index member companies from 2000 to 2001.11 As to methodology, since we are uncertain of the direction of causality, we employ the 2SLS methodology, simultaneously estimating the investment and financing decision models. Consistent with traditional wisdom, we find that a firm’s capital structure choice is independent of its investment decision, while the investment decision is influenced by capital structure.

Our contribution to the existing literature on the interaction between investment and capital structure is twofold. First, we are the first to empirically investigate the correlation between a firm’s capital structure and its investment decision while taking the uncertain causality into consideration. Our findings support the prediction that investment decisions may be distorted by a firm’s capital structure; that is, in a world with financing frictions, the investment decisions of firms depend on their financial decisions. This prediction derives from the framework of Moyen (2000). Second, we extend the previous studies of Fama (1974) and Smirlock and Marshall (1983), provide a clearer and more complete picture of the empirical causality relationships between firms’ real and financial decisions, and thus simplify further study in this area.

Overall, our paper bridges the gap between empirical and theoretical studies of the interaction between investment decisions and financial decisions. The remainder of this paper is organized as follows: Section II describes our variable selection procedure. Section III identifies our data set. Section IV discusses the methodology choice, model specification, empirical results and related robustness checks. Section V extends our empirical work. Finally, Section VI concludes the paper.

II. DETERMINANTS OF CAPITAL STRUCTURE AND INVESTMENT DECISION

In this section, we briefly discuss the attributes, drawn from a wide array of literature, that are suggested to affect the firm’s debt-equity12 choice and investment expenditure decision. Their relation to the optimal decisions and their observable indicators are discussed below.

A. Firm Size

A number of authors have suggested that leverage ratios and investment expenditures may be related to firm size. Warner (1977) and Ang, Chua, and McConnell (1982) suggest that larger firms should be more highly leveraged. Jensen and Meckling (1984) provide evidence that larger firms tend to invest more. We use the natural logarithm of total assets (LnTA) as the indicator of firm size.13 The logarithmic transformation reduces the skewness of the size measure, so that the largest firms do not dominate it.

B. Industry Classification

Myers (1977) suggests that technology firms choose low debt ratios and invest heavily. To capture this, we include a dummy variable as an exogenous determinant of both firm-level decisions, equal to one if the firm’s SIC code falls into 2833, 2834, 2835, 2836, 3571, 3572, 3575, 3577, 3578, 3661, 3663, 3669, 3674, 3812, 3823, 3825, 3826, 3827, 3829, 3841, 3845, 4812, 4813, 4899, 7370, 7371, 7372, 7373, 7374, 7375, 7377, 7378, 7379 (technology firms)14, and zero otherwise. A sketch of this dummy construction follows.
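A minimal sketch of the dummy, reproducing the SIC code list above; the function name and input are illustrative only:

```python
# SIC codes the text classifies as technology firms.
TECH_SIC = {
    2833, 2834, 2835, 2836, 3571, 3572, 3575, 3577, 3578,
    3661, 3663, 3669, 3674, 3812, 3823, 3825, 3826, 3827,
    3829, 3841, 3845, 4812, 4813, 4899, 7370, 7371, 7372,
    7373, 7374, 7375, 7377, 7378, 7379,
}

def tech_dummy(sic: int) -> int:
    """1 if the firm's SIC code marks it as a technology firm, else 0."""
    return 1 if sic in TECH_SIC else 0
```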

9 This conclusion is contrary to Dhrymes and Kurz (1967)’s result that investment and dividend decisions are interdependent.
10 Long and Malitz (1985), Smith and Watts (1992), and Barclay, Smith, and Watts (1995) all document a negative relation between growth options and market leverage, measured as the value of debt divided by the market value of the firm. Rajan and Zingales (1995) report that this negative relation is robust across seven western countries.
11 To consider the dynamics of financial and investment decisions, we also investigate the period from 1995 to 1999. There are some inconsistencies across each year’s results. The regression results are available upon request.
12 In most of the previous literature, debt refers to long-term debt.
13 Titman (1984) suggests both LnTA and LnS (the logarithm of sales) as indicators of firm size and documents the high correlation between the two (0.98).
14 This classification follows Denis (2003).


C. Growth

The agency problem, which arises from the separation of “ownership” and “control”, suggests that equity-controlled firms have a tendency to invest suboptimally in order to expropriate wealth from the firm’s bondholders. The agency cost of debt tends to be higher for firms in growing industries, which have more flexibility in their choice of future investments. Jensen and Meckling (1976), Warner (1979), and Green (1984) document a negative relationship between expected future growth and long-term debt levels. Since growth opportunities are capitalized into market prices, we use the ratio of market price to book value (PB) as the proxy for growth opportunities.

D. Non-Debt Tax Shields

DeAngelo and Masulis (1980) and Titman (1984) argue that tax deductions for depreciation and investment tax credits are substitutes for the tax benefits of debt financing. As a result, firms with large non-debt tax shields use less debt. We use the ratio of depreciation and amortization to total assets (Depr/TA) as the indicator of non-debt tax shields.

E. Profitability

Bernanke concludes from numerous studies that profitability is an important determinant of investment behavior, and sizable empirical evidence suggests that more profitable firms invest more heavily than less profitable firms. The indicator we select as a proxy for profitability is the return on assets (ROA).

F. Internal Funds

Myers’s (1976) pecking order theory suggests that firms prefer to raise capital first from retained earnings, second from debt, and third from issuing new equity. Jensen and Meckling (1984) explain that firms with free cash flow tend to over-invest because of agency problems. The literature suggests that internal funds are positively correlated with investment expenditures. Kuh, Dhrymes, and Kurz propose a world in which, because of market imperfections, internal funds are a cheaper source of financing for the firm than new security issues. We use free cash flow (FCF) as the proxy for internal funds.

III. DATA SET

The variables constructed in Section II are analyzed for the period from 2000 to 2001.15 The source of all data is the Annual Compustat Industrial Files. Our sample firms are selected from S&P 500 Composite Index members. From the full sample, we delete all observations that do not have a complete record on the variables included in our model analysis. Furthermore, we require the sample firms to have non-negative investment expenditures every year and to maintain a leverage ratio within (0, 1). In total, 337 firms survive.16 A sketch of these sample filters appears after the footnotes below. Table 1 and Table 2 give the descriptive statistics of our data set and the correlation matrix among the variables, reflecting the sample characteristics.

15 The 1998 data are used as an out-of-sample check.
16 The S&P 500 selection may bias our sample toward relatively large firms, and these two requirements exclude extreme cases from our sample.
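A minimal sketch of the sample construction described in this section, assuming the Compustat extract has been loaded into a pandas DataFrame; all column names are hypothetical stand-ins:

```python
import pandas as pd

MODEL_COLS = ["Inv", "Cap", "PB", "DeprTA", "lnTA", "ROA", "FCF"]

def apply_sample_filters(df: pd.DataFrame) -> pd.DataFrame:
    """Keep firms with complete records, non-negative investment, and leverage in (0, 1)."""
    df = df.dropna(subset=MODEL_COLS)            # complete record on model variables
    df = df[df["Inv"] >= 0]                      # non-negative investment expenditure
    df = df[(df["Cap"] > 0) & (df["Cap"] < 1)]   # leverage ratio within (0, 1)
    return df
```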


Table 1: Descriptive Statistics

Panel A: Cross-sectional Investment Expenditures from 2000 to 2001

                     Investment (I,00)   Investment (I,01)
Mean                 0.056               0.060
Standard error       0.002               0.002
Median               0.045               0.049
Standard deviation   0.043               0.044
Sample variance      0.002               0.002
Kurtosis             3.881               4.709
Skewness             1.627               1.709

Panel B: Cross-sectional Debt Ratio from 2000 to 2001

                     DEBTRATIO01   DEBTRATIO00
Mean                 0.214         0.199
Standard error       0.008         0.007
Median               0.224         0.188
Standard deviation   0.139         0.137
Sample variance      0.019         0.019
Kurtosis             -0.593        -0.251
Skewness             0.174         0.425

Panel C: Cross-sectional Control Variables from 2000 to 2001

                     PB01     DeprTA01   lnTA01   ROA00    FCF00
Mean                 3.892    0.046      9.017    6.692    0.043
Standard error       0.481    0.001      0.073    0.332    0.288
Median               2.925    0.043      8.937    5.696    0.635
Standard deviation   8.818    0.025      1.330    6.088    5.279
Sample variance      77.756   0.001      1.770    37.069   27.866
Kurtosis             86.935   10.512     0.516    1.347    78.476
Skewness             -1.037   1.915      0.697    0.618    -7.475


Table 2: Correlation Matrix between Variables

Panel A: Correlation Matrix of Parameter Estimates for the Investment Equation

           DEBT01   DEBT00   FCF00   ROA00   SIC
DEBT01     1.000    -.940    -.449   .129    .093
DEBT00     -.940    1.000    .521    -.178   -.101
FCF00      -.449    .521     1.000   -.401   -.305
ROA00      .129     -.178    -.401   1.000   -.455
SIC        .093     -.101    -.305   -.455   1.000

Panel B: Correlation Matrix of Parameter Estimates for the Capital Structure Equation

           DEBT01   DEBT00   PB01    DEPRTA01   SIC
DEBT01     1.000    -.930    .184    .554       .507
DEBT00     -.930    1.000    -.061   -.299      -.208
PB01       .184     -.061    1.000   -.044      .241
DEPRTA01   .554     -.299    -.044   1.000      .669
SIC        .507     -.208    .241    .669       1.000

IV. METHODOLOGY AND EMPIRICAL RESULTS

In this section, we construct simultaneous equations models, employ the 2SLS methodology to test the interdependence hypothesis we have developed, interpret the empirical results, and finally reinforce the conclusions with a set of robustness tests.

4.1 Simultaneous Equations Models & Two-Stage Least Squares (2SLS) Methodology

Since prior studies have documented that both investment decisions and financing decisions may be endogenous, we employ a bivariate system, two equations with two dependent variables (the yearly percentage investment expenditure and the long-term debt ratio) estimated simultaneously, to overcome the endogeneity problem. In this case, the traditional OLS method is no longer appropriate for parameter estimation, and we utilize the two-stage least squares (2SLS) methodology, which provides a systematic estimation of the parameters of the two structural equations we construct to model firms’ corporate policies:

Invt = α0 + α1 Capt-1 + α2 Capt + α3 ROAt-1 + α4 FCFt-1 + α5 SIC + εt   (1)

Capt = β0 + β1 Invt-1 + β2 Invt + β3 PBt + β4 DeprTAt + β5 SIC + υt   (2)

where we treat the current-period percentage investment expenditure (Invt) and the current capital structure (Capt), measured by the long-term debt ratio, as the two endogenous variables, and identify eight other exogenous variables, covering one-year-lagged capital structure and investment expenditure, firm size, profitability, internal funds, non-debt tax shields, growth opportunities, and industry effects. Our identification of all these variables is motivated by a large literature on the determinants of investment decisions, as well as the determinants of capital structure decisions, which we discussed in Section II. Table 1 above summarizes the descriptive statistics for all the variables we selected.

Since we hypothesize that investment and capital structure decisions are interrelated but cannot determine the direction of causality between them, we introduce further exogenous variables into the models. For the investment decision regression, identified by equation (1), we choose five exogenous variables (the natural logarithm of total assets, the SIC dummy, the price-to-book ratio, the percentage depreciation amount, and the one-year-lagged return on assets) as the instrumental variables for estimating the value of the current-period capital structure. For the capital structure regression, identified by equation (2), we choose another set of five exogenous variables (the natural logarithm of total assets, the SIC dummy, the one-year-lagged return on assets, free cash flow, and the lagged capital structure ratio) as the instrumental variables for estimating the value of the current-period investment level. In general, we obtain the predicted values of the current-period capital structure ratio and percentage investment expenditure simultaneously in the 1st stage:


$\widehat{Cap}_t = \delta_0 + \delta_1 \ln TA_t + \delta_2 SIC + \delta_3 PB_t + \delta_4 DeprTA_t + \delta_5 ROA_{t-1} + \upsilon_t$   (3)

$\widehat{Inv}_t = \theta_0 + \theta_1 \ln TA_t + \theta_2 SIC + \theta_3 ROA_{t-1} + \theta_4 FCF_{t-1} + \theta_5 Cap_{t-1} + \nu_t$   (4)

Then, in the second stage, we estimate the simultaneous equations models (1) and (2), utilizing the fitted values obtained from equations (3) and (4) in the first stage as instruments for the endogenous variables Cap_t and Inv_t. The 2SLS estimation procedure is consistent and asymptotically normal, but in general asymptotically inefficient.
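To make the two-stage mechanics concrete, the following is a minimal sketch in Python with NumPy and pandas. It assumes a pandas DataFrame whose hypothetical column names (inv, cap, inv_lag, cap_lag, roa_lag, fcf_lag, lnta, pb, deprta, sic) map to the variables above; it illustrates the procedure described here, not the authors' original code.

```python
import numpy as np
import pandas as pd

def ols_fit(X, y):
    """OLS coefficients for y = Xb + e; X already contains a constant column."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def with_const(df, cols):
    """Design matrix from the named columns, with an intercept prepended."""
    X = df[cols].to_numpy(dtype=float)
    return np.column_stack([np.ones(len(X)), X])

def two_stage_ls(df):
    """Hand-rolled 2SLS for the system of equations (1)-(4) (column names assumed)."""
    y_inv = df["inv"].to_numpy(dtype=float)
    y_cap = df["cap"].to_numpy(dtype=float)

    # Stage 1: regress each endogenous variable on the exogenous instruments,
    # mirroring equations (3) and (4), and keep the fitted values.
    Z_cap = with_const(df, ["lnta", "sic", "pb", "deprta", "roa_lag"])
    cap_hat = Z_cap @ ols_fit(Z_cap, y_cap)
    Z_inv = with_const(df, ["lnta", "sic", "roa_lag", "fcf_lag", "cap_lag"])
    inv_hat = Z_inv @ ols_fit(Z_inv, y_inv)

    # Stage 2: estimate structural equations (1) and (2), replacing each
    # endogenous regressor with its first-stage fitted values.
    X1 = np.column_stack([np.ones(len(df)),
                          df["cap_lag"].to_numpy(dtype=float), cap_hat,
                          df["roa_lag"].to_numpy(dtype=float),
                          df["fcf_lag"].to_numpy(dtype=float),
                          df["sic"].to_numpy(dtype=float)])
    X2 = np.column_stack([np.ones(len(df)),
                          df["inv_lag"].to_numpy(dtype=float), inv_hat,
                          df["pb"].to_numpy(dtype=float),
                          df["deprta"].to_numpy(dtype=float),
                          df["sic"].to_numpy(dtype=float)])
    # Note: OLS standard errors from this second step are NOT valid 2SLS
    # standard errors; dedicated IV routines adjust them.
    return ols_fit(X1, y_inv), ols_fit(X2, y_cap)
```

In practice one would use a dedicated IV estimator rather than this hand-rolled version, precisely because the naive second-stage standard errors need the 2SLS correction.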

4.2 Empirical Results

Table 3 reports the cross-sectional simultaneous equations models estimated with the 2SLS methodology for the year 2001. We choose one year as the preliminary time lag.

Table 3: Simultaneous Equations Models & 2SLS Estimation Results for the Investment Equation and Capital Structure Equation for the Year 2001

Panel A: Simultaneous Equation (1): Investment Model
Dependent Variable: Investment2001
Independent Variable    Coefficient    t-statistic    p-value
Cap2000                 -1.832192      -1.390         0.1655 b
Cap2001                 2.026895       2.035          0.0426 a
FCF2000                 0.012295       1.477          0.1407 b
ROA2000                 0.001593       0.551          0.5822
SIC                     0.021019       0.496          0.6205
(Constant)              -0.029476      -0.234         0.8152

Panel B: Simultaneous Equation (2): Capital Structure Model
Dependent Variable: Capital Structure2001
Independent Variable    Coefficient    t-statistic    p-value
Inv2000                 -1.918609      -0.121         0.9038
Inv2001                 -3.699623      -0.227         0.8207
PB2001                  -0.100852      -1.969         0.0497 a
Depr2001                9.697404       1.065          0.2875
SIC                     -0.023637      -0.141         0.8881
(Constant)              0.465228       0.955          0.3405

a significance at 5%; b significance at 15%

Based on the results in the table above, it is clear that in the investment equation, capital structure ratios are important determinants of the investment level: current capital structure significantly influences the current-period investment decision at the 5 percent level (t-statistic = 2.035, p-value = 0.0426), and prior-period capital structure still has a non-negligible effect on current investment decisions at a significance level of about 15 percent (t-statistic = -1.390, p-value = 0.1655). These results are consistent with our expectations and provide empirical evidence for the conclusion that many theoretical articles have reached: given the existence of financial frictions in the real world, investment decisions cannot be made separately from financing decisions. When management sets investment policy, it will first consider how much debt is due in the near future, and firms with lower prior-period debt ratios will tend to issue more debt and increase investment expenditure in the current period. Furthermore, we note that prior-period free cash flow also exerts a significantly positive effect on current investment decisions: if a firm has more internal funds available, management is likely to invest more.


When examining the capital structure equation, we find that the coefficients on both prior- and current-period investment instruments are negative, which is consistent with firms with higher investment expenditure and higher growth opportunities choosing lower debt ratios; however, the coefficients are not statistically significantly different from zero. The insignificance may be driven by endogeneity concerns. We therefore reject the interdependence hypothesis and conclude that, although investment decisions are significantly determined by capital structure, capital structure decisions are made independently of either past or current investment decisions. In fact, the empirical evidence shows that capital structure decisions are significantly determined by firm type, that is, whether the firm is a high-growth company or a mature company: the higher the price-to-book ratio, the lower the long-term debt ratio. This result makes sense considering the importance of financial distress costs in growing companies.

Taken together, the results from the 2SLS estimation procedure clarify the causal relationship between firm-level investment decisions and capital structure choices: capital structure, measured by the debt ratio, determines the investment expenditure level, but not vice versa. We cannot fully accept the interdependence hypothesis at traditional significance levels.

4.3 Robustness Checks

One might argue that our results are driven by methodological choices, model specification concerns, or a year effect. To ensure that our conclusions are robust, we run a battery of robustness checks. First, as an alternative methodology, we run multiple regressions separately to estimate models (1) and (2). Table 4 summarizes the OLS results for the investment equation for the year 2001; Table 5 reports the OLS results for the capital structure equation for the year 2001.

Table 4: OLS Estimation Results for Investment Equation (Year 2001)
Dependent Variable: Investment
Regression Statistics: R square = 0.061938919

              df    F              Significance F
Regression    5     4.357891783    0.000743975

Independent Variable    Coefficient      t-statistic        p-value
Intercept               0.04010072       6.694015576        9.34129E-11
Capital00               -0.049947511     -1.392514408 b     0.164704222
Capital01               0.072717209      2.030062866 a      0.043151935
ROA00                   0.001761631      4.346344658 a      1.84581E-05
FCF00                   -0.000356375     -0.816457656       0.414827411
SIC                     -0.004566374     -0.787136361       0.43176699

a significance at 5%; b significance at 15%

Examining the table above, we find consistent results. Capital structure is an important determinant of firms' investment decisions: when the prior-period debt ratio is low, the firm is more likely to recapitalize, issuing more debt and increasing investment expenditure in the current period. We further find a significantly positive effect of lagged profitability on investment decisions. Indeed, we expect return on assets and free cash flow to be closely interrelated; both reflect firms' past performance and the availability of internal funds.


Table 5: OLS Estimation Results for Capital Structure Equation (Year 2001)
Dependent Variable: Capital Structure
Regression Statistics: R square = 0.126929099

              df    F              Significance F
Regression    5     9.595235062    1.45451E-08

Independent Variable    Coefficient      t-statistic        p-value
Intercept               0.229877688      15.28127921        3.0683E-40
Investment00            -0.297522204     -0.994813799       0.320555715
Investment01            0.20121625       0.652189755        0.514732774
PB01                    -0.001603805     -1.963751405 a     0.050398148
DeprTA01                0.462520423      1.550088707 b      0.122078691
SIC                     -0.110417649     -6.333219618 a     7.85244E-10

a significance at 5%; b significance at 15%

As Table 5 shows, the conclusion that capital structure choices are separable from the investment level is robust to the alternative multiple regression procedure. The insignificance is not mainly due to model specification concerns, as indicated by the large F-value of 9.60. The R-square is 0.13, meaning the model explains 13% of the variation in the dependent variable, which is acceptable in financial analysis. The price-to-book ratio, percentage depreciation and amortization, and SIC industry code have significant explanatory power for capital structure choices. These results generally make sense and are consistent with our expectations: high-growth companies and firms operating in technology industries use less debt for investment purposes.

In sum, our results so far are not materially affected by using the alternative multiple regressions, which further confirms that our methodological choices and model specification are acceptable and meaningful.

The second group of robustness tests focuses on the year effect. We apply the 2SLS estimation procedure to the simultaneous equations models for the year 1998, again incorporating one-year-lagged variables in the model specification.17

Again, this group of out-of-sample robustness tests does not alter our results meaningfully. Investment decisions are negatively influenced by prior-period capital structure and positively related to current capital structure choices; however, t-tests indicate that neither relationship is statistically significant (t-stat, prior period = -0.297; t-stat, current period = 0.396). For the capital structure equation, we can conclude that capital structure choices are independent of both prior- and current-period investment expenditure levels.

In addition, we examine the effects of other firm-specific and industry characteristics on corporate policies for the year 1998. We confirm that firms with more free cash flow, and technology companies, have higher investment expenditure, and that high-growth companies tend to use less debt because of financial distress costs. Again, however, the empirical results in this set of robustness tests are generally not statistically significant. We check the correlation matrix and find high correlation among the variables, which necessarily weakens the explanatory power. In the next section, we extend our empirical work by discussing our major findings and the limitations of this paper, and by suggesting directions for future academic research in this area.

V. RELATIONSHIP BETWEEN INVESTMENT AND FINANCING DECISIONS

Our findings contribute to the existing literature that attempts to explore the correlation between firms' real and financial decisions. In this section we discuss the implications of our findings for this literature and introduce some unsolved questions.

17 The regression output is available on request but is not reported here; we summarize our major findings.


5.1 Investment Distortion Caused by Debt Financing

As mentioned in the introduction, Jensen and Meckling (1976) discuss the asset substitution problem, according to which equity claimants invest in riskier projects when debt is already in place, thereby expropriating value from debt claimants. Obviously, the debt overhang problem and the asset substitution problem are closely related: the asset substitution problem refers to the variance distortion of investment, while the debt overhang problem refers to the mean distortion. Both trigger agency costs because, once debt is in place, equity claimants choose an investment policy that maximizes equity value only. Leland (1998) extends the research and measures the distortion cost, which is small, at 1.37 percent of the value of the firm without the agency problem. Nathalie Moyen (2000) shows that although debt overhang distorts investment decisions, it does not harm equity holders; that is, equity claimants benefit from underinvestment, especially in a bad economic situation. On the other hand, because of real-world market frictions, firms with low debt ratios tend to overinvest in non-profitable or unrelated businesses. Myers and Majluf (1984) and Mayer (1986) discuss the trade-off between tax benefits and default costs.

Our findings provide further evidence on the issue of real decisions distorted by financing decisions: light investment caused by a high debt ratio and heavy investment caused by a low debt ratio. Our tests examine investment and capital structure decisions simultaneously and find that firms with high long-term debt ratios tend to underinvest. Given the experimental design, it is difficult to attribute our findings to the estimation of endogenous financing and investment decisions in a single equation.

5.2 Independence of Capital Structure Choice

As Brian Barry and Dun Gifford Jr. (1998)18 point out: "a company's financial structure matters a great deal. But to design that structure intelligently, financial executives need to understand the crucial links between a company's product markets, its operating efficiency and its potential to create value through new investment..." Our results support this conclusion, which is also consistent with much of the existing literature. On what attributes do capital structure choices depend? A large literature suggests that the variation in firms' debt ratio selection is affected by factors such as asset structure, non-debt tax shields, growth, uniqueness, industry classification, size, earnings volatility, and profitability.19

18 Please refer to http://www.cfoeurope.com/199809g.html
19 This literature includes works by Bradley, Jarrell, and Kim (1984), Auerbach (1985), Long and Malitz (1985), and Titman (1982; 1984).

Our findings are generally consistent with previous studies: growth opportunities and industry influences are important factors in determining optimal capital structure, while investment expenditure has negligible explanatory power for capital structure choices. In other words, the capital structure decision is independent of the firm's real decisions.

5.3 Causality Relations between Investment and Capital Structure Decisions

As we discussed in the introduction, capital structure and investment may be correlated across firms or over time for a given firm. According to the pecking order theory of Myers (1984), one would expect some dependence in the joint distribution, since investment returns, which are part of internal funds, determine the need for external funding, especially debt issuance. Further, both variables may be similarly related to other variables in a structural model of firm decision making. Hence, simply running regression models for capital structure and investment decisions cannot establish the exact causal relationship. In this sense, we refine our methodology to clarify the direction of the relationship between these two firm decisions.

The relationship between capital structure and investment decisions in our sample can be interpreted in the following way: the debt ratio (long-term debt) decided upon may be considered in determining current investment outlays, but not vice versa. That is, the capital structure choice is "causally prior to" and "exogenous with respect to" investment decisions.

5.4 Limitations in Research

There are some limitations to our research. First, in the capital structure equation, spurious relations may be induced between debt ratios measured at book value and the explanatory variables if firms select debt levels in accordance with market value targets. Unfortunately, some firms use book value targets while others use market value targets, and it is difficult to differentiate the sample in practice. Second, firms' investment and financial decisions might change through time; our results reflect firms' choices at a static point in time. Further study might offer time-series analysis of dynamic firm-level investment and financial decisions.

VI. CONCLUSION

In this paper, we construct simultaneous equations models for corporate investment decisions and capital structure choices, employing the two-stage least square methodology with time-lagged variables to test whether these two kinds of firm-specific decisions are significantly interdependent in the real world. In a sample of 337 companies from the S&P 500 Composite Index over the most recent year, 2001, we find that, given real-world financial frictions, firm-level investment decisions are significantly influenced by capital structure, while capital structure choices are made separately from investment decisions. These findings lead us to reject the hypothesis of interdependence between investment and capital structure decisions. For robustness, we also utilize an alternative methodology of multiple regressions and examine another time period. Generally, our results are consistent and meaningful.

This paper provides strong empirical evidence for the conclusion that investment decisions depend on capital structure, given real-world imperfections. This is consistent with what current academic research has documented and accepted. Furthermore, we find that capital structure decisions are independent of investment decisions, thereby clarifying the causal relationship between firms' real and financial decisions. In this sense, we suggest a promising area for future academic research: exploring the joint determination of firms' capital structure and investment decisions with dynamic analysis.

REFERENCES:
Auerbach, A., 1985, "Real Determinants of Corporate Leverage", Chicago University Working Paper
Aivazian, V.A., and M. Berkowitz, 1991, "Production Flexibility and Corporate Capital Structure", Working Paper, Department of Economics, University of Toronto
Barclay, M., C. Smith, and R. Watts, 1995, "The Determinants of Corporate Leverage and Dividend Policies", Journal of Applied Corporate Finance 7, 4-19
Bernanke, B.S., 1983, "The Determinants of Investment: Another Look", American Economic Review 73(2), 71-75
Bernanke, B., and M. Gertler, 1989, "Agency Costs, Net Worth, and Business Fluctuations", American Economic Review 79, 14-31
______, 1990, "Financial Fragility and Economic Performance", Quarterly Journal of Economics 105, 87-114
Brennan, M.J., and E.S. Schwartz, 1984, "Optimal Financial Policy and Firm Valuation", Journal of Finance 39, 593-609
Dammon, R.M., and L.W. Senbet, 1988, "The Effect of Taxes and Depreciation on Corporate Investment and Financial Leverage", Journal of Finance 43, 357-74
Mauer, D.C., and A.J. Triantis, 1994, "Interaction of Corporate Financing and Investment Decisions: A Dynamic Framework", Journal of Finance 49(4), 1253-77
Dhrymes, P., and M. Kurz, 1967, "Investment, Dividends, and External Finance Behavior of Firms", in R. Ferber, ed., Determinants of Investment Behavior, New York
Dixit, A.K., and R.S. Pindyck, 1994, Investment Under Uncertainty, Princeton: Princeton University Press
Fama, E.F., 1974, "The Empirical Relationships between the Dividend and Investment Decisions of Firms", American Economic Review 64(3), 304-18
Dotan, A., and S.A. Ravid, 1985, "On the Interaction of Real and Financial Decisions of the Firm under Uncertainty", Journal of Finance 40, 501-17
Hite, G.L., 1977, "Leverage, Output Effects, and the M-M Theorems", Journal of Financial Economics 4, 177-203
DeAngelo, H., and R. Masulis, 1980, "Optimal Capital Structure under Corporate and Personal Taxation", Journal of Financial Economics 8, 3-29
Jensen, M., 1986, "Agency Costs of Free Cash Flow, Corporate Finance and Takeovers", American Economic Review 76, 322-29
Warner, J., 1977, "Bankruptcy Costs: Some Evidence", Journal of Finance 32, 337-47
Leland, H.E., 1998, "Agency Costs, Risk Management, and Capital Structure", Journal of Finance 53, 1213-43
Long, M., and I. Malitz, 1985, "The Investment-Financing Nexus: Some Empirical Evidence", Midland Corporate Finance Journal, 53-59
Bradley, M., G. Jarrell, and E.H. Kim, 1984, "On the Existence of an Optimal Capital Structure: Theory and Evidence", Journal of Finance 39, 857-78
Barclay, M.J., E. Morellec, and C.W. Smith, 2003, "On the Debt Capacity of Growth Options", Working Paper, University of Rochester
Smirlock, M., and W. Marshall, 1983, "An Examination of the Empirical Relationship between the Dividend and Investment Decisions: A Note", Journal of Finance 38(5), 1659-67
Modigliani, F., and M.H. Miller, 1958, "The Cost of Capital, Corporation Finance, and the Theory of Investment", American Economic Review 48, 261-97
Long, M.S., and E.B. Malitz, 1985, "Investment Patterns and Financial Leverage", in B. Friedman, ed., Corporate Capital Structures in the United States, Chicago: University of Chicago Press


Myers, S.C., 1974, "Interactions of Corporate Financing and Investment Decisions: Implications for Capital Budgeting", Journal of Finance 29, 1-25
______, 1984, "The Capital Structure Puzzle", Journal of Finance 39(3), 575-92
Moyen, N., 2000, "Investment Distortions Caused by Debt Financing", Working Paper, University of Colorado at Boulder
MacKay, P., and G.M. Phillips, 2002, "Is There an Optimal Industry Financial Structure?", NBER Working Paper
Marsh, P., 1982, "The Choice between Equity and Debt: An Empirical Study", Journal of Finance 37, 121-44
Parrino, R., and M.S. Weisbach, 1999, "Measuring Investment Distortions Arising from Stockholder-Bondholder Conflicts", Journal of Financial Economics 53, 3-42
Green, R., 1984, "Investment Incentives, Debt, and Warrants", Journal of Financial Economics 13, 115-35
Smith, C., and R. Watts, 1992, "The Investment Opportunity Set and Corporate Financing, Dividend, and Compensation Policies", Journal of Financial Economics 32, 263-92
Smith, C., and J. Warner, 1979, "On Financial Contracting: An Analysis of Bond Covenants", Journal of Financial Economics 7, 117-61
Titman, S., and R. Wessels, 1988, "The Determinants of Capital Structure Choice", Journal of Finance 43(1), 1-19
Titman, S., 1984, "The Effect of Capital Structure on a Firm's Liquidation Decision", Journal of Financial Economics 13, 137-51


PART I - THE FOUR FACTORS OF QUALITY: ACHIEVING THE CIRCLE OF ACCEPTANCE AND SATISFACTION

Avis J. Smith New York City College of Technology, USA

ABSTRACT

The purpose of this report is to present a theoretical approach to two concepts defined by the author: the first being The Four Factors of Quality and the second The Circle of Acceptance & Satisfaction. These concepts represent the active business process from the manufacturer to the professional and social customers; together they are part of the overall process of achieving customer acceptance and satisfaction. The report outlines the basic responsibilities of all areas of business and consumption in the process, and how diligent sustainability of those responsibilities can help to perpetuate quality. The concepts were developed by the author and apply to past and current research in the area of customer satisfaction.

Keywords: Business Customer, Professional Customer and Social Customer.

LEVELS OF QUALITY ASSESSMENT

When communicating in business relations, we often fail to visualize the big picture, the total process from start to finish. It is very important to know and understand the steps of the total process for the purpose of tracking and ensuring quality. The process in the general sense of business (business customer to professional and social customer) refers to the levels from manufacturing to the customers' acceptance and use of the product or service. This process is referred to by the author as the Circle of Acceptance and Satisfaction. The factors of quality are manufacturing, distribution, sales, and the customer (professional and social). There are, however, many times when we must reassess variable inputs at each level of the four factors to avoid adverse effects on quality relevant to the successful achievement of the ultimate goal, which reflects the completion of the Circle of Acceptance and Satisfaction (Quality). Completion, however, will vary depending on the type of field or business one is involved in, and whether it includes a product, a service, or a combination of the two. Every goal must be properly met at each factor level in order for the business customer and professional/social customer to successfully complete the Circle of Acceptance and Satisfaction. The goals of the four factor levels (manufacturing, distribution, sales, and the customer), when successfully achieved, should equal the completed Circle of Acceptance and Satisfaction. Chart 1 below displays the significance of the process.

Chart 1: Circle of Acceptance and Satisfaction when goals of The Four Factors of Quality are met. [Figure: Manufacturing, Distribution/Distributor, Sales (Department/Stores, etc.), and the Professional/Social Customer arranged around the Circle of Acceptance & Satisfaction.]

Each description of the levels of The Four Factors of Quality begins with its existing purpose. Each factor level has its reason for existence, which is rooted in the competitive priorities significant to it. These competitive priorities help the business customer or professional customer stand out as a more favorable product or service than competitors'. The purpose, when aligned with competitive priorities, must be sustained in order to add value to the business customer or professional customer; the sustained purpose is therefore linked to the achievement of quality and transfers into the Circle of Acceptance & Satisfaction. By adhering to and sustaining its purpose of competitive priorities, a business customer or professional customer perpetuates sustained quality.


BUSINESS CUSTOMER

The business customer refers to the tasks and processes that describe the chain of direction for getting products to the marketplace. This chain starts with the manufacturing process, continues with the distributor/distribution process, is followed by sales, and moves on to the professional and/or social customer. The manufacturer is referred to as a business customer because it generally does business with other businesses of larger scale. Larger scale businesses can include hospitals, car dealers, banks, department stores, educational institutions, government, and other similar large scale businesses. Distributors and sales are also in the category of business customers, as they also generally do business with larger scale businesses.

PROFESSIONAL/SOCIAL CUSTOMER

The professional customer is the smaller scale business, such as private practicing doctors, dentists, lawyers, accountants, and other similar small scale professional customers. These small scale customers generally depend on larger scale businesses in the business relationship to supply specific needs. They are similar in many ways to the social customer and generally seek similar forms of redress when there is a lack of quality or service. Together, the professional customer and the average consumer are referred to as the social customer due to their reliance on larger scale businesses. What separates the professional and social customer is that the professional customer, in its reliance on larger scale business, relies heavily on the social customer to patronize its functions. The professional and social customers are primarily involved in a competitive selection process.

MANUFACTURING LEVEL

Manufacturing can be very complex when reaching across the various business and professional disciplines. A discussion of the manufacturing processes in various areas of business and the professions, however, can reveal the generalization of the concept of The Four Factors of Quality across business and professional disciplines. The first step in the manufacturing phase is to realize its purpose, and how that purpose relates to its responsibilities in the overall process. Purpose is rooted in understanding the competitive priorities of the manufacturer in its plans to carry out its mission. As an example, the manufacturer must make guarantees for the parts that make up its product. The manufacturer must express to its distributors the reasons that its product is competitively a better choice than its competitors'. What must also be included is a method of expressing and reviewing the purpose of the manufacturing process within the organization and how to achieve overall success. In general, manufacturers must know all of the components of their product and arrange for its proper distribution. It is the manufacturer that has the social responsibility for product safety and the need for continuous sustainability. This particularly refers to the manufacturers of toys, foods, drugs, appliances, automobiles, and various other types of equipment. When we investigate the sequence of The Four Factors of Quality, there are specific tasks or sustainable responsibilities needed to maintain competitiveness. Chart 2 below outlines the areas of major responsibility for the manufacturer.

Chart 2: Sustainable Competitive Responsibilities. [Figure: the manufacturer's responsibilities (Safety, Quality, Quantity, Timeliness) across the sequence Manufacturing, Distribution, Sales, Customer.]

Once the manufacturer has sustained these competitive areas, it is in a position to be competitive in the business environment.

DISTRIBUTION

Distributors have a basic responsibility to make sure that the manufacturer's products are in sufficient quantity and that all factors of timeliness are prepared and in place for the products to be forwarded to sales vendors. Distributors have a responsibility to ship items to the vendors in the condition of quality sustained by the manufacturer, and they are responsible to the sales level for assuring delivery in suitable quantity. Chart 3 below represents the process.


Chart 3: Required Sustainable Tasks for Distributors. [Figure: the distributor's responsibilities (Quantity, Timeliness, Quality) in the Distributor-to-Sales link.]

SALES

The sales level has the most visible responsibility due to its constant contact with the social customer. The social customer has many resources available for redress when satisfaction is not obtained through transactions. In order to be competitive, sales must follow strict guidelines of quality service, courteous service, and timeliness, and must maintain sufficient quantity. Customers of sales can include hospitals, doctors, dentists, stores, car dealers, various levels of government, etc. The major concerns or tasks for sales exist in the areas shown in Chart 4.

Chart 4: Required Sustainable Tasks in Sales. [Figure: the sales level's responsibilities (Flexibility, Timeliness, Sustained Quality, Quantity, Customer Service) in the Sales-to-Customer link.]

The area of sales is the most complex because it includes equipment, products, and services. Hospitals, doctors, dentists, accountants, and lawyers sell their services, while stores, car dealers, and other sales facilities sell the products of manufacturers. They are all customers in the sequence of The Four Factors of Quality, with basic responsibilities that vary. All areas of business are legally responsible for the products they sell or the quality of the services they render. Table 1 below displays an example of some other business and social customers with their basic responsibilities listed. Table 2 further displays the defined areas in the four factors element, along with large, medium, and small scale business definitions.

Table 1: Example list of some other responsibilities for the business customer and the social customer.
   Business Customer (Responsibility)                   Social Customer (Responsibility)
1. Cleaners (Bailor to Bailee)                          Patron (Accurate requests)
2. Hospital (Patient Care)                              Patient (Know patient rights)
3. Toy Dealer (Strict Liability)                        Buyer (Understand consumer rights)
4. Educational Institution (State and Federal Laws)     Student (Review student rights and responsibilities)
5. Lawyer (Legal Representation)                        Client (Understand the lawyer/client relationship)
6. Government (Judicial/Social)                         Citizens (Social ethics of law)

Table 2: Defined areas in the four factors element, with large, medium, and small scale business definitions.
MANUFACTURER          BUSINESS CUSTOMER          PROFESSIONAL CUSTOMER                 SOCIAL CONSUMER/CUSTOMER
Manufactured Parts    Distributors of products   Private offices rendering services    General consumers
Farm Products         Sales                      Doctors
Automobiles           Government                 Dentists
Toys, etc.                                       Lawyers, etc.
Note: Large scale business is defined by the author as a business having not less than 500 employees. Medium scale business is defined as a business having between 100 and 500 employees. Small scale business is defined as a business having fewer than 100 employees.


ETHICAL CONSIDERATIONS

The primary ethical considerations in the four factors of quality are outlined in the governmentally established laws that govern our society. What is important is that all elements of the four factors have an ethical responsibility in their interactions with each other. When these interactions involve conflict, they create the need to seek redress within the court systems. One of the major concerns in ethics is that of product liability, which reflects the responsibility of the manufacturer: all manufactured products must be suitable for use or consumption by those who purchase them.

CONCLUSION

The Four Factors of Quality is the cornerstone of assessment to use in the overall evaluation of the interrelated process of business. It helps to perpetuate a conscious evaluation of the total process that transfers into the Circle of Quality, which helps a society improve on its current standards and prepare for a better future. There is no area of the business process that it excludes.


CONTACT:
Avis J. Smith, Assistant Professor
New York City College of Technology
Restorative Dentistry (P409)
300 Jay Street
Brooklyn, New York 11201
Phone: 718-260-5137
Fax: 718-254-8557


FOUNDATIONS OF WORK MOTIVATION: AN HISTORICAL PERSPECTIVE ON WORK MOTIVATION THEORIES

Kimberly Johnson and Christine W. Lewis Auburn University Montgomery, USA

ABSTRACT

Motivating employees is one of the primary responsibilities of a manager (Moorhead & Griffin, 1998). Companies want motivated employees because they want increased productivity, profits, and satisfied workers. Interest in work motivation therefore began as early as the 1930's (Klein, 1989). Numerous theories influence researchers' perception of work motivation; consequently, the concept of work motivation does not have one underlying theory. Work motivation is an intangible concept and cannot be measured directly (Ambrose & Kulik, 1999). Although thorough, this paper is not an exhaustive review of the literature. The research focus was limited to articles published in English-language journals and articles focusing on adults. Moreover, articles were excluded if work motivation was not their primary focus. This paper focuses on the seven traditional work motivation theories (Motives and Needs Theory, Expectancy Theory, Equity Theory, Goal-Setting Theory, Cognitive Evaluation Theory, Job Design Theory, and Reinforcement Theory) and details some of the latest research on each of these theories.

Keywords: Work Motivation; Content Theories; Process Theories

INTRODUCTION

Work motivation is composed of internal and external forces, and these forces influence work-related behavior in terms of form, direction, intensity, and duration (Pinder, 1998). Work motivation is essential to an organization because it influences an employee's behavior. Theories of work motivation can primarily be divided into two categories: content theories and process theories (Work Motivation Theories). Content theories focus on the exact factors that motivate people and determine what factors influence people's behavior (Content Theory).

Process theories, by contrast, try to show why people's needs alter in terms of motivation; simply stated, process theory seeks to explain how motivation occurs (Process Theory). Articles centered on other variations of motivation, such as motivation to attend, training motivation, motivation to learn, inspirational motivation, and test-taking motivation, were excluded from this paper (e.g., Smith, Jayasuriya, Caputi, & Hammer, 2008). This paper will begin with a discussion of content theories, starting with needs theory.

CONTENT THEORIES

Needs Theory. The needs theory comprises three prominent theories, all of which were developed in the 20th century. The first theory developed on needs is very well known. The hierarchy of needs theory, developed by A.H. Maslow in 1943, attempts to explain an individual's motivation. Maslow identified a total of five need hierarchy levels, and he divided those into two categories: lower-order needs and higher-order needs. He defined lower-order needs as basic physiological needs plus safety and security. Lower-order needs are primarily satisfied through economic rewards (Moorhead & Griffin, 1998).

Maslow delineated that lower-order needs must be met in the order he stated them. He also wrote that these needs must be satisfied before an individual will try to fulfill a higher-order need. The higher-order needs are belonging and social needs; esteem and status; and self-actualization and fulfillment. Higher-order needs are met differently than lower-order needs and can often be satisfied through psychological and social rewards. He further avowed that higher-order needs also had to be met in the order he established them. His theory suggests that different factors can motivate individuals depending on their position on the hierarchy of needs pyramid (Davis, 1981).

While his concepts were interesting, Maslow's hierarchy of needs has limitations. First, the hierarchy of needs expresses the views of the typical American, yet the hierarchy of needs may differ in other cultures. Furthermore, studies have also shown that individuals do not always respond in the order suggested by the hierarchy of needs (Davis, 1981). A contemporary of Maslow, Viktor Frankl, a Holocaust survivor, wrote a bestselling book about his efforts in a concentration camp to find meaning through suffering. The book, Man's Search for Meaning, detailed his theory of logotherapy. Frankl asserted that man finds meaning through everyday living regardless of the circumstances, which was contrary to Maslow's assertions (Boeree, 2006). According to DeVita (2008), a fundamental flaw in the hierarchy of needs theory is that people tend to always want more, resulting in an ever-increasing pyramid.

The second needs theory, developed by D. McClelland (1961), identified three types of needs: the need for achievement, the need for affiliation, and the need for power. The need for achievement describes the degree to which an individual focuses on goals and desires to demonstrate competency. If an individual has a high need for achievement, he or she will devote a large degree of energy to accomplishing a task or job. The need for affiliation, on the other hand, indicates the degree to which an individual values social interactions. An individual with a high need for affiliation prefers to spend time maintaining social relationships and joining groups. For example, research indicates that women are less driven by power and money and more driven by connection and quality (Gershman, 2008). Finally, the need for power reflects an individual's desire to influence or encourage others to achieve, but this has both a positive and a negative side. A person with a high positive need for power enjoys working and is concerned about discipline and self-respect. However, an individual with a high negative need for power is more selfish in nature and is neither group- nor company-oriented; he or she has the "I win, you lose" mentality. Consequently, the need for power alone may or may not be beneficial to an employer.

The ERG theory, the final needs theory, was developed by C. Alderfer (1972). The ERG theory provides an alternative theory about needs, and it is simpler than Maslow's (1943) hierarchy of needs. The ERG theory states that there are three types of needs: the need for existence, the need for relatedness, and the need for growth. The need for existence corresponds to Maslow's physiological and safety needs, and the need for relatedness coincides with Maslow's social needs. Finally, the need for growth corresponds to Maslow's esteem and self-actualization needs. Unlike Maslow's hierarchy of needs, however, the ERG theory states that any need can occur at any time. Nonetheless, no current empirical or theoretical research utilizing the ERG theory was found.

Motives Theory. F.I. Herzberg (1966) developed the motivator-hygiene theory (i.e., the two-factor theory), thus creating the main motives theory in Organizational Behavior. The motivator-hygiene theory has two components, but it does not revolve around how often an individual washes: hygiene here refers to maintenance factors, while the second component deals with motivational factors. Herzberg believed that both motivator and hygiene factors impact an individual's motivation. The motivator-hygiene theory focuses on which job conditions impact satisfaction and dissatisfaction.

The theory affirms that employees will be dissatisfied if the hygiene factors in their jobs are poor. Yet the presence of hygiene factors does not necessarily create employee satisfaction; it simply causes an employee not to be dissatisfied. Examples of hygiene factors are company policy and administration, pay, job security, working conditions, status, peer relations, and quality of supervision. Generally, hygiene factors center on job context factors, or extrinsic motivators (Moorhead & Griffin, 1998).

Motivational factors, on the other hand, primarily center on job content factors, which are intrinsic motivators, but their absence does not necessarily cause job dissatisfaction. Examples of motivator factors are recognition, advancement, the type of work performed, responsibility, and the possibility of growth (Moorhead & Griffin, 1998). Hence Herzberg's two-factor theory, as both hygiene factors and motivational factors must be taken into account.

IMPACT OF CONTENT THEORIES

Interest in motives and needs peaked during the 1970's and 1980's; however, since the 1980's, little empirical and theoretical research has been done (Ambrose & Kulik, 1999). The decline of research in this area may indicate the maturity of this subfield of motivational theory. As a result, as previously stated, few articles were found that utilized motives and needs theories. Nonetheless, despite the lack of recent empirical and theoretical research on motives and needs, this section describes the foundational theory for each and the latest research on the same.

In 2005, Donavan, Carlson, and Zimmerman studied the influence of personality traits on sports fans' identification with the teams they supported. The study examined several personality traits, such as extraversion, agreeability, need for arousal, and need for materialism, and the moderating effect of need for affiliation. The results reveal that need for affiliation positively influenced the level of fan identification with a team. Additional studies examined the need for affiliation (Tsung-Chi & Chung-Yu, 2008) and the needs for affiliation and power (Kuhl & Kazen, 2008).


A study by Daugherty, Kurtz, and Phebus (2009) examined both McClelland's need for achievement and need for affiliation theories. The study examined 120 participants to determine whether personality influenced the need for achievement and the need for affiliation. A unique aspect of this study was the comparison between how participants evaluated themselves and how close acquaintances rated them. The results revealed that acquaintances' ratings of conscientiousness significantly helped to predict need for achievement, whereas self-ratings and acquaintance ratings did not predict need for affiliation. Several other studies were done based on McClelland's (1961) need for achievement, also known as achievement striving (Lee, 1995).

Achievement striving was the focus of a study by Bluen, Barling, and Burns (1990). Insurance salespersons were studied to determine whether their work performance, work attitudes (e.g., job satisfaction), and signs of depression could be predicted by their levels of Achievement Striving (AS) and Impatience-Irritability (II). AS is a construct used to describe the degree to which an individual is active, works hard, and takes his or her work seriously; II is a construct that describes an individual's degree of intolerance, obsession with time, anger, and hostility. The authors acknowledged Type A as a global construct composed of AS and II components. When controlling for biographical differences and II, the results indicate that AS positively influenced job performance (measured by the number of insurance policies sold) and had a positive effect on job satisfaction. However, depression was not related to AS when biographical differences and II were controlled. In the second part of the Bluen et al. (1990) study, the authors controlled for biographical differences and AS. The results demonstrated that although II was positively linked to depression and negatively influenced job satisfaction, it was unrelated to job performance (the number of insurance policies sold). This study confirmed that Type A behavior comprises at least two components, AS and II.

Further AS research was conducted by Barling, Kelloway, and Cheung (1996), who studied how a car salesman's performance was influenced by the interaction between time management behaviors and AS. The results indicated that time management behaviors (i.e., short-range and long-range planning) had different effects according to the car salesman's motivation level. More specifically, short-range planning consists of tasks performed daily or weekly, whereas long-range planning is performed over a quarter. Barling et al. (1996) suggested that if effective methods of increasing time management behaviors are identified, then the job performance of highly motivated individuals should improve. Additionally, the results indicate that a significant interaction exists between short-range planning and AS; therefore, increasing short-range planning by employees should improve performance.

Several studies have utilized Herzberg's motivator-hygiene theory to determine how certain job attributes influence an employee's motivation. For example, Maidani (1991) compared how public sector and private sector employees rated the importance of fifteen job attributes. Although the results indicated that both sectors of employees were more motivated by intrinsic job attributes, extrinsic factors were more highly valued by public sector employees. Gabris and Simo (1995) studied twenty motivational needs to determine how each need motivated the employees of public, private, or non-profit organizations. Although no difference was detected between public and private sector employees, employees of non-profit organizations had a lower need for competitiveness and autonomy. Not surprisingly, however, they had a higher desire to serve the community.

Another aspect of the motives theory that has been studied is the Protestant Work Ethic (PWE), which symbolizes the degree to which an individual makes his or her work the center of his or her life (Ambrose & Kulik, 1999). PWE is viewed as a type of motive that influences work motivation. Ali and Falcone (1995) studied work ethic in the United States and Canada to determine whether a relationship existed between work-related measures such as individualism, work involvement, and work ethic. The authors suggested that the historical background of a country (i.e., persistent social and economic conditions) needs to be considered when discussing PWE. Ali and Falcone (1995) found that the United States and Canada share similar political systems and social diversity, but differences may exist in work-related attitudes. Specifically, the results of this study indicated that United States employees are more committed to PWE, the contemporary work ethic (i.e., CWE: workers expect more receptiveness from their employers and greater personal growth from their work), and work-related individualism.

Two years prior to the Ali and Falcone (1995) study, a group of researchers, Stein, Smith, Guy, and Bentler (1993), conducted a longitudinal study on the impact of achievement on job satisfaction for adults. The study dealt with factors earlier in an individual's life which influence his or her behavior as an adult. They found that low levels of adolescent achievement resulted in low job satisfaction and negative job behaviors in adults. Another interesting finding was that children under the age of three who received achievement pressure from their parents had a higher need for achievement and earned higher incomes as adults.


Each of the content theories studied has provided researchers and practitioners alike with insight into what motivates people. Despite the valuable insight gained from content theories, process theories are equally important: knowing "what" motivates someone is only one part of the equation; the next step is determining "how" to motivate them. The second half of this paper will specifically focus on the following process theories: Expectancy Theory, Equity Theory, Goal-Setting Theory, Cognitive Evaluation Theory, Job Design, and Reinforcement Theory.

PROCESS THEORIES

Expectancy Theory. The expectancy theory, as developed by V.H. Vroom (1964), states that motivation is a product of expectancy, instrumentality, and valence. Expectancy is the belief that effort will result in the desired performance (i.e., effort-performance expectancy). Instrumentality is the faith that one's performance will be rewarded (i.e., performance-outcome expectancy). Valence is the perceived value of the reward to the individual (e.g., a promotion). The expectancy theory proposes that motivation is a multiplicative function of expectancy, instrumentality, and valence (Davis, 1981). The multiplicative nature of the expectancy theory indicates that if all three components are high, then motivation will be high, and vice versa; if one component is missing, then the motivational level of the individual will be zero. Another well-known process theory is equity theory.
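Before turning to equity theory, the multiplicative claim can be made concrete with a small worked example (the notation and numbers are illustrative, not taken from the paper):

```latex
% Vroom's expectancy model: motivation as the product of three components
M = E \times I \times V
% e.g. with E = 0.8, I = 0.9, V = 0.7:  M = 0.8 * 0.9 * 0.7 = 0.504
% but if any component is zero (say I = 0), then M = 0 regardless of E and V
```

The product form is what distinguishes the theory from an additive model: a single zero component cannot be compensated for by strength in the other two.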

Equity Theory. J.S. Adams (1965) conceptualized the equity theory, which states that people are motivated to maintain fair (equitable) relationships with other people; once a relationship is perceived to be unfair, it is no longer equitable. To determine whether a relationship is equitable, people compare their perceived inputs and outputs to the inputs and outputs of others, including, but not limited to, fellow employees, persons in another organization, or themselves. Inputs are those things that an individual contributes to a job, such as the amount of time worked, the amount of effort, and qualifications. Outputs are those things that an individual receives from his or her job, such as pay and fringe benefits.

When an individual compares his or her inputs and outputs to another individual's inputs and outputs, three outcomes can occur: the individual can perceive overpayment inequity, underpayment inequity, or equitable payment. If a person perceives an overpayment inequity (i.e., one receives greater output although one's input is comparable to others'), he or she will feel guilty and seek to increase his or her input or reduce his or her output. On the other hand, if a person perceives an underpayment inequity (i.e., one receives less for his or her input than others), then he or she will become angry and seek to reduce the inequity. He or she may choose to decrease his or her input (e.g., increase tardiness) or increase his or her output (e.g., request a raise). Because the equity theory deals with perceptions of fairness, an individual may also choose to resolve an inequitable state by altering his or her perception of the circumstances. Although several studies have been conducted using expectancy and equity theories, by far one of the most studied process theories is goal-setting theory.
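Before moving on, the comparison at the core of equity theory can be written as a simple ratio test; the notation below is my own sketch, not the original authors':

```latex
% Equity holds when one's outcome-to-input ratio matches the comparison other's:
\frac{O_{self}}{I_{self}} = \frac{O_{other}}{I_{other}}
% Underpayment inequity:  O_self / I_self  <  O_other / I_other
% Overpayment inequity:   O_self / I_self  >  O_other / I_other
```

Writing it as a ratio makes clear why the theory predicts multiple remedies: a person can restore perceived equity by changing the numerator (outputs), the denominator (inputs), or the perception of either side.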

Goal-Setting Theory. E.A. Locke and G.P. Latham (1990) conceptualized the goal-setting theory as a means to describe how setting goals is an important motivational force. Establishing a goal enables an individual to compare his or her current state to a future desired state. If people feel they have the ability (i.e., self-efficacy) to accomplish a goal, then they will work towards that goal. Failure to achieve the desired goal will cause dissatisfaction, but the person will work harder at achieving that goal because he or she feels the goal is obtainable. Once the goal is reached, the individual will feel more competent and successful. This is based on the belief that a goal provides a clear illustration of the type and level of performance needed for its achievement. In addition, goal commitment is the degree to which a person accepts and strives to attain a goal. If a person wants to reach a goal and believes he or she can reach it (i.e., self-efficacy), then that person becomes more committed to the goal; it follows that if the desire to obtain a goal and self-efficacy do not exist, a person will be less committed to the goal. The goal-setting theory states that an individual's beliefs about self-efficacy and goal commitment can influence task performance. Cognitive Evaluation Theory, in contrast, focuses not only on extrinsic motivation but on intrinsic motivation as well.

Cognitive Evaluation Theory. In 1971, E.L. Deci developed the Cognitive Evaluation Theory (CET), which states that an individual can be motivated extrinsically and intrinsically. An individual who is motivated extrinsically believes that he or she is motivated by outside forces and thus seeks extrinsic rewards, such as pay raises or promotions. On the other hand, an intrinsically motivated individual believes that he or she is motivated by internal desires and, as such, seeks intrinsic rewards, such as self-esteem (Deci & Ryan, 1980). Another aspect of CET is the work environment and its influence on employees' intrinsic motivation. The intrinsic motivation of an employee has been found to decrease if that employee works in a controlling environment. However, if an employee receives constructive feedback instead of being controlled, the employee's intrinsic motivation will not be affected (Deci & Ryan, 1980). Research on CET peaked during the 1970's and 1980's (Ambrose & Kulik, 1999). Although several meta-analyses of CET have been conducted (i.e., Cameron & Pierce, 1994; Tang & Hall, 1995), little research has been conducted where CET is applied to work motivation. Researchers have also explored how to motivate workers by redesigning their actual jobs.

Job Design. In 1911, the concept of job design was introduced by F.W. Taylor, the Father of Scientific Management. According to George and Jones (2002), scientific management is "a set of principles and practices designed to increase the performance of individual workers by stressing job simplification and specialization" (p. 214). They also stressed that job design is the method used to link specific tasks to specific jobs while determining the necessary tools and procedures to accomplish those jobs. Job simplification is the subdivision of work into the smallest, most identifiable tasks; job specialization involves assigning workers to those tasks. Although the concept was interesting, workers became bored with the monotony of their jobs. In an effort to reduce the monotony, advances such as job enlargement, job enrichment, and job rotation were made in job design.

Job enlargement is a tool used to expand the scope of a job by adding more variety and more tasks at the same skill level. Proponents of job enlargement state that it can improve employee satisfaction, motivation, and quality of production. However, critics believe job enlargement does not have a long-term impact on job performance. In the 1960s, a tool was introduced to overcome the limited effects of job enlargement on work motivation: job enrichment, which was designed to give employees a higher degree of control over their work with respect to planning, design, implementation, and evaluation. Job enrichment involves performing tasks at higher levels of skill and responsibility. Another distinctive process theory is Reinforcement Theory, which focuses on motivating people by encouraging or discouraging certain behaviors.

Reinforcement Theory. B.F. Skinner (1953, 1972) is generally associated with reinforcement theory, which encourages desirable behavior or discourages undesirable behavior through reinforcement. Four types of reinforcement exist: positive reinforcement, negative reinforcement, extinction, and punishment. Positive reinforcement consists of giving rewards or feedback for desirable behavior; one example is a manager commending an employee for his or her punctuality. Negative reinforcement involves encouraging an individual to avoid undesirable behaviors, or removing an individual from an undesirable situation when he or she engages in desirable behaviors (Davis, 1981). For example, a salesperson may choose to work long hours in lieu of being relocated to an undesirable territory. When undesirable behaviors are eliminated by withholding positive reinforcement, this is known as extinction. An example of extinction is when an employee consistently works overtime but his or her supervisor fails or refuses to acknowledge the extra effort. Punishment, the final type of reinforcement, ends undesirable behaviors by having a negative event follow them. For example, if an employee is late to work, his or her supervisor may openly reprimand him or her.

IMPACT OF PROCESS THEORIES

Expectancy and Equity Theories. Advances to expectancy theory were made by L.W. Porter and E.E. Lawler (1968). They attempted to (1) identify the sources of an individual's valences and expectancies and (2) link effort with performance and job satisfaction. They found that a person must have sufficient opportunity to perform his or her job, and that skills, abilities, role perceptions, and beliefs about what is expected can all influence whether the job is performed successfully. The relationship between compensation package, work motivation, and job performance was studied by Igalens and Roussell (1999). Expectancy and discrepancy theories were used to examine how the components of a total compensation package might influence work motivation and job satisfaction. The results indicated that, under specific conditions, individualized compensation of exempt employees can be a factor in work motivation. However, flexible pay (nonexempt employees only) and benefits (both exempt and nonexempt employees) neither motivate nor increase job satisfaction. The next process theory examined is equity theory.

Three studies in the early 1990s all revolved around baseball. Harder (1991) examined equity theory in comparison to expectancy theory by studying major league baseball free agents. He believed these theories produced different results under identical conditions (e.g., perceived under-reward and strong performance-outcome expectancies). Free-agent nonpitchers from the 1977-1980 baseball seasons were compared to a random sample of nonpitchers. The author found that free agents were more likely to feel under-rewarded before entering the free-agent market, but they had a higher expectation of an increased salary after becoming free agents. Both motivations were believed to affect the players' performances. Equity theory suggests that performance would decline if an individual felt under-rewarded, but the study results indicated that performance strongly linked to future salary (e.g., home run ratios for free agents) did not decline. Batting average, however, has a weak relation to salary outcomes and, as such, declined in the year before free agency. This finding suggests that the expectancy effect had a greater impact than the equity effect. The study further found that both direct and indirect equity effects arose: when participants faced inequitable under-reward, their performance decreased if it was not strongly linked to future salary (e.g., batting average). Furthermore, under conditions of inequitable under-reward, performance did not increase if it was strongly linked only to future rewards (e.g., home run ratios).

The second study, by Bretz and Thomas (1992), examined major league baseball position players to determine the influence of perceived equity, motivation, and final-offer arbitration on performance and mobility. Generally, all players increased their performance before arbitration. The results showed that a player's pre-arbitration performance significantly predicted the outcome of arbitration; specifically, players who were successful in arbitration had greater increases in performance prior to arbitration. After arbitration, performance declined for both arbitration winners and losers. However, winners with large gaps between their demands and the actual offers suffered higher rates of post-arbitration performance decline than players with smaller differentials. Bretz and Thomas (1992) suggested that the post-arbitration decline was due to the players' regression to their career averages. Additionally, the study showed a significant relationship between losing the arbitration and post-arbitration performance: players who were unsuccessful at arbitration suffered a decrease in performance, and they were significantly more likely to change baseball teams or retire from baseball. One additional study examined pay equity in professional baseball (i.e., underpayment, equitable payment, and overpayment).

The third study, by Howard and Miller (1993), utilized Data Envelopment Analysis, which can provide managers with information concerning the format and levels of compensation that are appropriate for their organizations. This system enables managers to estimate pay equity objectively, and it provides them with a reliable defense for future reward allocations. Additionally, Data Envelopment Analysis allows a manager to evaluate the organization's compensation policies systematically. Howard and Miller (1993) suggested that this tool equips managers with the instruments necessary to make consistent, equitable adjustments to programs and systems.
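
Howard and Miller's model is not reproduced in this summary; as background only, the standard ratio form of Data Envelopment Analysis (the CCR model) evaluates each unit o by choosing output weights u and input weights v that cast the unit in its best possible light:

    \max_{u,v}\ \theta_o = \frac{\sum_r u_r y_{ro}}{\sum_i v_i x_{io}}
    \quad \text{subject to} \quad
    \frac{\sum_r u_r y_{rj}}{\sum_i v_i x_{ij}} \le 1 \ \text{for every unit } j,
    \qquad u_r, v_i \ge 0.

In the pay-equity application, performance measures serve roughly as outputs y and compensation as an input x, so a unit's efficiency score indicates how its pay compares with the pay of units producing similar performance on the efficient frontier; the exact variable choices are Howard and Miller's and are not shown here.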

Van Eerde and Thierry (1996) conducted a meta-analysis integrating the correlations from seventy-seven studies based on Vroom's (1964) original expectancy model and work-related criteria. Findings indicated that the average correlations between Vroom's (1964) expectancy model and work-related criterion variables were relatively low. However, the individual components of the models studied had higher effect sizes than the composite model, a pattern the authors attributed to validity problems in the underlying research. Specifically, they believed that several studies departed from the original, intended theoretical viewpoint of Vroom's (1964) expectancy theory and lacked proper data analysis. The flaws in the models studied suggested that the original components of the expectancy model (i.e., valence, instrumentality, and expectancy) should be used rather than modified or alternative models. They further urged caution regarding the differing interpretations of expectancy theory found in prior research. Van Eerde and Thierry (1996) reiterated that the techniques used in prior research on expectancy theory are often incorrect; thus, the proper choice of criterion variables can make a difference.
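
Van Eerde and Thierry's exact integration procedure is not reproduced above; as a point of reference only, a common way to combine correlations r_1, ..., r_k from k studies with sample sizes n_i is to convert each to Fisher's z, take a weighted average, and convert back:

    z_i = \tfrac{1}{2}\ln\frac{1+r_i}{1-r_i}, \qquad
    \bar{z} = \frac{\sum_i (n_i - 3)\, z_i}{\sum_i (n_i - 3)}, \qquad
    \bar{r} = \frac{e^{2\bar{z}} - 1}{e^{2\bar{z}} + 1}.

Whether a given meta-analysis uses this scheme or a sample-size-weighted mean of the raw correlations varies by author.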

Finally, Wheeler and Buckley (2001) used expectancy theory to suggest how to motivate a large segment of U.S. employees: contingent workers. Several differences exist between contingent workers and permanent employees. Generally, contingent workers are hired solely to reduce payroll costs or to work jobs isolated from other workers; they temporarily fill positions while permanent employees are on vacation, long-term disability leave, or maternity leave; and they are given less pay and fewer (if any) benefits. Wheeler and Buckley argue that expectancy theory does an excellent job of explaining how contingent workers determine which job to choose. According to Wheeler and Buckley (2001), "[t]he attractiveness of each organization (valence), the amount of effort required to join each company (instrumentality) and the expectation that the company will offer employment (expectancy)" (p. 349) lead many contingent workers to choose a specific company for employment (Wanous, Keon, & Latack, 1983).
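
For reference, the valence-instrumentality-expectancy components invoked throughout this section are conventionally combined in Vroom's (1964) model as a multiplicative motivational force; the notation below is the textbook form, not something reproduced from the studies cited:

    F = E \times \sum_{j} (I_j \times V_j),

where E is the expectancy that effort leads to performance, I_j is the instrumentality of performance for obtaining outcome j, and V_j is the valence of that outcome. Because the terms multiply, motivational force collapses if any one of them is near zero.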

Goal Setting. Several studies involving goal theories took place in the early 1990s. Staw and Boettger (1990) studied the impact of task revision on work performance and stated that goal-setting can be used to energize behavior and is an influential method of guiding an individual's behavior. Task revision is an action implemented to correct a faulty procedure, an inaccurate job description, or a role expectation that is not beneficial to the organization. The study's results indicated that goal-setting inhibited task revision: participants instructed to "do their best" outperformed participants who were given a specific goal. Moreover, if a supervisor implements goals that are counterproductive to the organization, his or her influence can effectively limit the chances for task revision. Erez, Gopher and Arzi (1990), in turn, examined the effects of goal difficulty, the origin of a goal (self-set versus assigned), and monetary rewards (present versus absent) on the simultaneous performance of two tasks. Self-set goals, whether moderate or difficult, resulted in the highest performance level when no monetary rewards were present. However, a combination of self-set goals and monetary rewards negatively influenced performance.


Shifting the spotlight, Tubbs (1993) addressed the degree of commitment to assigned goals as a moderator of the effectiveness of the goal-setting procedure. He reviewed three studies suggesting that the moderating assumption was valid; however, it held for only one of three closely related motivational concepts: pre-choice attitudes, subsequent choice of a personal goal, and maintenance of that personal choice. In past research, all three motivational concepts had been discussed under the overarching title of goal commitment; for future research, Tubbs (1993) suggested that researchers distinguish among the three aspects. According to Gostick and Elton (2009), authors of the best-seller The Carrot Principle, rewards should be personal and designed to meet the needs (i.e., interests and lifestyle) of the employee. However, this requires a manager concerned enough to learn such information about his or her employees.

Steele-Johnson, Beauregard, Hoover and Schmidt (2000) conducted two studies to assess the joint effects of goal orientation and task demands on motivation, affect (i.e., satisfaction with performance), and performance. The first study examined whether goal orientation interacted with task difficulty in its effect on performance, affect, and intrinsic motivation: individuals with performance-goal orientations were more satisfied with their performance on simple tasks than on difficult tasks. The second study examined the effects of task consistency and goal orientation on performance, motivation, and affect during skill acquisition. Here, task consistency moderated the effect of goal orientation on self-efficacy and intrinsic motivation, with performance-goal-oriented individuals reporting higher levels of self-efficacy on consistent tasks.

Erez and Judge (2001) studied how core self-evaluations relate to goal setting, motivation, and performance. A newly developed personality taxonomy suggested that self-esteem, locus of control, generalized self-efficacy, and neuroticism form a broad personality trait termed core self-evaluations. The authors hypothesized that this broad trait was related to motivation and performance, and their findings supported the hypothesis. Erez and Judge (2001) found that, in a laboratory setting, the core self-evaluations trait was related to task motivation and performance. The trait was also related to task activity, productivity as measured by sales volume, the rated performance of insurance agents, and goal-setting behavior.

When these core traits were investigated as one broad concept (i.e., core self-evaluations), they proved to be more consistent predictors of job behaviors than when used in isolation. The individual core traits were related to motivation and performance; however, the core self-evaluations factor displayed higher correlations with motivation and performance in both a lab and a field study. The previous process theories primarily focus on motivating people extrinsically (i.e., people are motivated if they receive something in return).

Cognitive Evaluation Theory. In an interesting study, Juniu, Tedrick and Boyd (1996) examined amateur and professional musicians' perceptions of rehearsals and performances. The results indicated that amateur musicians were intrinsically motivated to participate in rehearsals and performances: viewing them as a leisure activity, amateurs were motivated by intrinsic factors such as pleasure and relaxation. Professional musicians, to the contrary, viewed rehearsals and performances as work. Consequently, professional musicians were motivated by the extrinsic factors (i.e., income) they could receive as a result of rehearsing and performing.

A two-study experiment by Erez and Isen (2002) evaluated the impact of positive affect on expectancy motivation. The first study produced three major findings: (1) positive affect increased participants' performance; (2) positive affect influenced participants' perceptions of expectancy and valence; and (3) positive affect had no impact on participants' perceptions of instrumentality. In the second study the link between performance and outcomes was specified, whereas in the first study the outcomes depended on pure chance. The results of the second study showed that positive affect influenced expectancy, valence, and instrumentality. The authors argued that both studies demonstrate how positive affect interacts with task conditions in influencing motivation. Also in 2002, Judge and Ilies conducted a meta-analytic review of the relationship between personality traits (specifically, the Big Five personality model) and performance motivation, as framed by goal-setting, expectancy, and self-efficacy theories. Their results indicated that neuroticism was negatively related to performance motivation across all of the aforementioned theories, whereas conscientiousness was positively related. Generally, extraversion, openness to experience, and agreeableness shared only weak relationships with performance motivation under the three theories. The authors affirmed that the study clarifies the literature, because using the Big Five personality model to analyze work motivation is more efficient than utilizing arbitrary personality traits.

Huang and Van De Vliert (2003) examined the national characteristics that moderate the individual-level relationship between job characteristics and job satisfaction; the objective was to determine where intrinsic job satisfaction fails to work. The results suggested that the link between intrinsic job characteristics and job satisfaction is stronger in wealthier countries, countries with better governmental social welfare programs, more individualistic countries, and smaller power-distance countries. Additionally, intrinsic job characteristics tend to produce motivating satisfaction in countries with good governmental social welfare programs, irrespective of the degree of power distance. However, intrinsic job characteristics often do not work in countries with poor governmental social welfare programs and large power distances. By contrast, extrinsic job characteristics are more strongly and more positively related to job satisfaction in all countries. How to motivate people, whether intrinsically or extrinsically, has received considerable attention in the literature, as the process theories discussed above make evident.

Job Design. The job characteristics theory (JCT) developed by J.R. Hackman and G.R. Oldham (1976, 1980) built on the work on job enlargement and job enrichment. They sought to identify the specific job characteristics that intrinsically motivate employees to perform their jobs: skill variety, task identity, task significance, feedback, and autonomy. JCT states that the higher a job scores on each of the five job characteristics, the higher the level of an employee's intrinsic motivation. To measure a worker's perception of each of the five dimensions, they developed the Job Diagnostic Survey. Furthermore, Wall, Corbett, Martin, Clegg and Jackson (1990) studied the impact of two alternative work designs for stand-alone advanced manufacturing technology (AMT) on job performance. Operators who worked under the operator-controlled system improved downtime statistics, perceived less job pressure, and reported higher intrinsic job satisfaction; operator control also reduced the demands placed on specialist staff, even though the specialist-controlled work design is the one most frequently utilized in the manufacturing industry.
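
The scoring rule is not spelled out above; for reference, Hackman and Oldham's Job Diagnostic Survey combines the five dimensions into a single motivating potential score (MPS), usually written as:

    MPS = \frac{\text{skill variety} + \text{task identity} + \text{task significance}}{3}
          \times \text{autonomy} \times \text{feedback}.

Because autonomy and feedback enter multiplicatively, a job that scores near zero on either of them has a low MPS no matter how varied or significant its tasks are.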

An interdisciplinary examination of the costs and benefits of enlarged jobs was conducted by Campion and McClelland (1991). Overall, enlarged jobs exhibited a better motivational design (e.g., increased variety, autonomy, and task significance). Enlarged jobs appeared to increase employee satisfaction, lessen mental overload for employees, increase the probability of catching errors, and improve customer service; they could, but did not necessarily, bring increased training costs, the need for additional skills, and increased compensation. Furthermore, Wong and Campion (1991) studied ways to design a motivational job, focusing on the motivational value of tasks, task interdependence, and task similarity. The results differed for each variable studied. Task design shared a positive relationship with motivational job design, which suggests that the motivational level of tasks is important when implementing a motivational job design. Unlike task design, task interdependence had an inverted-U relationship with motivational job design, indicating that task interdependence should be increased only up to an optimal point and not beyond it, where its relationship with motivational job design turns negative. Task similarity had a negative relationship with motivational job design, a finding that echoes earlier job design research showing that simplification and specialization of jobs decrease motivation. The results of Wong and Campion's (1991) study also indicated that job design mediated the relationship between task design and affective outcomes.

Meanwhile, Spector and Jex (1991) studied the relationship between job characteristics obtained from multiple sources and employee affect, absenteeism, turnover intentions, and health. They noted that traditional JCT research collected reports of job characteristics only from incumbents, and they argued that incumbents may not be the best source of such information. Consequently, they obtained job characteristics from three independent sources: incumbents, ratings from job descriptions, and the Dictionary of Occupational Titles. The findings showed that incumbent ratings correlated only slightly with the other two sources. Of the three sources evaluated, only incumbent ratings correlated with employee outcomes such as job satisfaction, work frustration, and turnover intentions. In sum, the results indicated that incumbent ratings did not accurately reflect actual work environments and thus should not be used as the sole measure in JCT research.

In a study conducted by Dodd and Ganster (1996), three specific job dimensions (objective autonomy, task variety, and objective feedback) were manipulated to determine their impact on participants' perceptions of job characteristics (e.g., job satisfaction) and on job outcomes (e.g., job performance). Manipulations of objective autonomy and task variety affected job satisfaction: when a task had a large amount of variety, increased objective autonomy produced large gains in job satisfaction, whereas increased autonomy over a task with little variety produced only small gains. Regarding job performance, increased objective autonomy raised performance on high-variety tasks by 16 percent, while increased autonomy on low-variety tasks had little impact. Objective autonomy also interacted with objective feedback: increased feedback in an environment of high objective autonomy resulted in a 16 percent increase in job performance, whereas increased feedback under low objective autonomy had a small impact. Job design is unique among the process theories because it focuses on the worker's job, not just on the worker.


REINFORCEMENT THEORY

Organizational behavior modification (OB MOD) is the systematic application of operant conditioning to teach and manage those organizational behaviors that the organization has deemed important. OB MOD consists of five basic steps (Drucker & Associates, 2007). The first step is to identify the behavior to be learned. Second, the identified behavior should be measured for frequency of occurrence before any intervention takes place. In step three, a functional analysis should be performed; a functional analysis is a method used to determine what antecedents or factors caused the identified behavior. Fourth, one must develop a strategy to change the frequency of the behavior. As part of the strategy, employees affected by the behavior should understand the change being requested, and the change should be applied fairly and uniformly to affected employees. Finally, one must measure the frequency of the behavior again after the previous steps have been implemented. According to George and Jones (2002), operant conditioning has successfully improved important organizational behaviors, such as productivity, attendance, punctuality, and safe work practices.
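
As a minimal sketch only, the five OB MOD steps can be read as a simple measurement workflow; the behavior, counts, and antecedents below are invented for illustration and are not drawn from the sources cited.

    from dataclasses import dataclass, field

    @dataclass
    class BehaviorRecord:
        name: str                                        # Step 1: identify the target behavior
        baseline: list = field(default_factory=list)     # Step 2: pre-intervention frequency counts
        antecedents: list = field(default_factory=list)  # Step 3: functional-analysis notes
        post: list = field(default_factory=list)         # Step 5: post-intervention frequency counts

    def frequency(counts):
        """Average occurrences per observation period."""
        return sum(counts) / len(counts) if counts else 0.0

    # Hypothetical example: weekly counts of on-time status reports.
    record = BehaviorRecord(name="on-time status reports")
    record.baseline = [2, 3, 2, 1]                             # Step 2: measure before intervening
    record.antecedents = ["unclear deadlines", "no feedback"]  # Step 3: suspected causes
    # Step 4 (the intervention itself, e.g., positive reinforcement) happens off-line.
    record.post = [4, 5, 4, 5]                                 # Step 5: measure again

    change = frequency(record.post) - frequency(record.baseline)
    print(f"{record.name}: average weekly frequency changed by {change:+.2f}")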

A study by Ball, Trevino and Sims (1994) examined the impact of just and unjust punishment on subordinate performance and citizenship behaviors. Contrary to conventional wisdom, punishment can have a positive effect on subordinates' behavior if it is administered in a particular way. If employees believe they have a high level of control over punishment procedures and imposed punishments (i.e., subordinate control), they will be more likely to engage in citizenship behaviors. Perceived harshness, a distributive characteristic of the punishment process because it affects perceptions of equity and severity appropriateness, influenced supervisors' subsequent perceptions of subordinates' job performance. Butterfield teamed up with Trevino and Ball (1996) to study punishment from a manager's perspective, showing that managers receive pressure regarding punishment from various sources, such as punished employees, organizations, work groups, and themselves. Although managers influence these sources, the converse is also true. Managers understood that administering punishment to subordinates could have long-range consequences extending beyond changing the behavioral problems of those subordinates. From the study, Butterfield et al. (1996) developed an inductive model of punishment from a managerial perspective, identifying the key relationships, variables, processes, and outcomes that enable one to understand punishment from the manager's point of view.

Stajkovic and Luthans (1997) conducted a meta-analysis of the effects of OB MOD on task performance from 1975 to 1995. The meta-analysis revealed that employees in OB MOD groups generally improved their performance by 17 percent relative to employees not in OB MOD groups. The study also showed that the type of organization can affect the impact of OB MOD on employee performance: the results suggested that improvement after an OB MOD intervention is generally greater in manufacturing organizations than in service organizations. Research further indicates that when people are able to participate in changes made, they are more motivated to abide by those changes because they were persuaded, not threatened (56 Clev. St. L. Rev. 111).

CONCLUSION

Work motivation research, with a history extending back to the 18th century, has provided varying explanations of what factors motivate employees. Although several of the work motivation subfields, such as motives and needs, have been extensively researched, the field of work motivation is still continually evolving.

This evolution is apparent in newer work motivation topics such as culture, groups, and creativity (10 U. Pa. J. Bus & Emp L 958). Even as these new topics emerge, research continues on older work motivation theories such as work design and reinforcement theory. As organizations continue to change and compete in a world defined by fewer boundaries and influenced by factors such as new technology, differing compensation systems, and flexible work schedules, the field of work motivation will continue to redefine itself.

REFERENCES

Adams, J.S. (1965). Inequity in social exchange. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 2, pp. 267-299). New York: Academic Press.
Alderfer, C.P. (1972). Existence, relatedness, and growth. New York: Free Press.
Ali, A.J. & Falcone, T. (1995). Work ethic in the USA and Canada. The Journal of Management Development, 14 (6), 26-34.
Ambrose, M.L. & Kulik, C.T. (1999). Old friends, new faces: Motivation research in the 1990s. Journal of Management, 25 (3), 231-292.
Ball, G.A., Trevino, L.K., & Sims, H.P. (1994). Just and unjust punishment: Influences on subordinate performance and citizenship. Academy of Management Journal, 37 (2), 299-322.
Barling, J., Kelloway, E.K., & Cheung, D. (1996). Time management and achievement striving interact to predict car sales performance. Journal of Applied Psychology, 81 (6), 821-826.
Bluen, S.D., Barling, J., & Burns, W. (1990). Predicting sales performance, job satisfaction, and depression by using the achievement strivings and impatience-irritability dimensions of Type A behavior. Journal of Applied Psychology, 75 (2), 212-216.
Boeree, C.G. (2006). Viktor Frankl. Accessed on September 25, 2009 from http://webspace.ship.edu/cgboer/frankl.html
Bretz, R.D. & Thomas, S.L. (1992). Perceived equity, motivation, and final-offer arbitration in major league baseball. Journal of Applied Psychology, 77 (3), 280-287.
Butterfield, K.D., Trevino, L.K., & Ball, G.A. (1996). Punishment from the manager's perspective: A grounded investigation and inductive model. Academy of Management Journal, 39 (6), 1479-1512.
Cameron, J. & Pierce, W.D. (1994). Reinforcement, reward, and intrinsic motivation: A meta-analysis. Review of Educational Research, 64, 363-423.
Campion, M.A. & McClelland, C.L. (1991). Interdisciplinary examination of the costs and benefits of enlarged jobs: A job design quasi-experiment. Journal of Applied Psychology, 76 (2), 186-198.
Content theory. Retrieved on September 25, 2009 from http://en.wikipedia.org/wiki/Content_theory.
Daugherty, J.R., Kurtz, J.E., & Phebus, J.B. (2009). Are implicit motives "visible" to well-acquainted others? Journal of Personality Assessment, 91 (4), 373-380.
Davis, K. (1981). Human behavior at work. New York: McGraw-Hill.
Deci, E.L. (1971). Effects of externally mediated rewards on intrinsic motivation. Journal of Personality and Social Psychology, 18, 105-115.
Deci, E.L. & Ryan, R.M. (1980). The empirical exploration of intrinsic motivational processes. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 13, pp. 39-80). New York: Academic Press.
DeVita, E. (2008, March 19). Management - in theory... the hierarchy of needs. Management Today.
Dodd, N.G. & Ganster, D.C. (1996). The interactive effects of variety, autonomy, and feedback on attitudes and performance. Journal of Organizational Behavior, 17, 329-347.
Donavan, D.T., Carlson, B.D., & Zimmerman, M. (2005). The influence of personality traits on sports fan identification. Sport Marketing Quarterly, 14 (1), 31-42.
Drucker & Associates (2007). Behavior modification. Retrieved on September 27, 2008 from http://drucker-group.com/behaviour.htm.
Erez, A. & Isen, A.M. (2002). The influence of positive affect on the components of expectancy motivation. Journal of Applied Psychology, 87 (6), 1055-1067.
Erez, A. & Judge, T.A. (2001). Relationship of core self-evaluations to goal setting, motivation, and performance. Journal of Applied Psychology, 86 (6), 1270-1279.
Erez, M., Gopher, D., & Arzi, N. (1990). Effects of goal difficulty, self-set goals, and monetary rewards on dual task performance. Organizational Behavior and Human Decision Processes, 47, 247-269.
Gabris, G.T. & Simo, G. (1995). Public sector motivation as an independent variable affecting career decisions. Public Personnel Management, 24, 33-50.
George, J.M. & Jones, G.R. (2002). Understanding and managing organizational behavior. Upper Saddle River, NJ: Prentice Hall.
Gershman, B.L. (2008). The most dangerous power of the prosecutor. Pace Law Review, 29 (1).
Gostick, A. & Elton, C. (2009). The carrot principle: How the best managers use recognition to engage their people, retain talent, and accelerate performance. New York: O.C. Tanner Company.
Hackman, J.R. & Oldham, G.R. (1976). Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance, 16, 250-279.
Hackman, J.R. & Oldham, G.R. (1980). Work redesign. Reading, MA: Addison-Wesley.
Harder, J.W. (1991). Equity theory versus expectancy theory: The case of major league baseball free agents. Journal of Applied Psychology, 76 (3), 458-464.
Herzberg, F. (1966). Work and the nature of man. Cleveland: World.
Howard, L.W. & Miller, J.L. (1993). Fair pay for fair play: Estimating pay equity in professional baseball with data envelopment analysis. Academy of Management Journal, 36 (4), 882-894.
Huang, X. & Van De Vliert, E. (2003). Where intrinsic job satisfaction fails to work: National moderators of intrinsic motivation. Journal of Organizational Behavior, 24, 159-179.
Igalens, J. & Roussell, P. (1999). A study of the relationships between compensation package, work motivation, and job satisfaction. Journal of Organizational Behavior, 20, 1003-1025.
Judge, T.A. & Ilies, R. (2002). Relationship of personality to performance motivation: A meta-analytic review. Journal of Applied Psychology, 87 (4), 797-807.
Juniu, S., Tedrick, T., & Boyd, R. (1996). Leisure or work? Amateur and professional musicians' perception of rehearsal and performance. Journal of Leisure Research, 28, 44-56.
Klein, H.J. (1989). An integrated control theory model of work motivation. The Academy of Management Review, 14 (2), 150-172.
Kuhl, J. & Kazen, M. (2008). Motivation, affect, and hemispheric asymmetry: Power versus affiliation. Journal of Personality and Social Psychology, 95 (2), 456-469.
Lee, C. (1995). Prosocial organizational behaviors: The roles of workplace justice, achievement striving, and pay satisfaction. Journal of Business and Psychology, 10, 197-206.
Locke, E.A. & Latham, G.P. (1990). A theory of goal setting and task performance. Englewood Cliffs, NJ: Prentice-Hall.
Maidani, E.A. (1991). Comparative study of Herzberg's two-factor theory of job satisfaction among public and private sectors. Public Personnel Management, 20, 441-448.
Maslow, A.H. (1943). A theory of human motivation. Psychological Review, 50, 370-396.
McClelland, D. (1961). The achieving society. Princeton, NJ: D. Van Nostrand.
Moorhead, G. & Griffin, R.W. (1998). Organizational behavior: Managing people and organizations (5th ed.). Boston, MA: Houghton Mifflin.
Pinder, C.C. (1998). Work motivation in organizational behavior. Upper Saddle River, NJ: Prentice-Hall.
Porter, L.W. & Lawler, E.E. (1968). Managerial attitudes and performance. Homewood, IL: Irwin.
Process theory. Retrieved on September 25, 2009 from http://en.wikipedia.org/wiki/Process_theory.
Skinner, B.F. (1953). Science and human behavior. New York: Macmillan.
Skinner, B.F. (1972). Beyond freedom and dignity. New York: Knopf.
Smith, R., Jayasuriya, R., Caputi, P., & Hammer, D. (2008). Exploring the role of goal theory in understanding training motivation. International Journal of Training and Development, 12 (1), 54-72.
Spector, P.E. & Jex, S.M. (1991). Relations of job characteristics from multiple data sources with employee affect, absence, turnover intentions and health. Journal of Applied Psychology, 76 (1), 46-53.
Stajkovic, A.D. & Luthans, F. (1997). A meta-analysis of the effects of organizational behavior modification on task performance, 1975-95. Academy of Management Journal, 40 (5), 1122-1149.
Staw, B.M. & Boettger, R.D. (1990). Task revision: A neglected form of work performance. Academy of Management Journal, 33 (3), 534-559.
Steele-Johnson, D., Beauregard, R.S., Hoover, P.B., & Schmidt, A.M. (2000). Goal orientation and task demand effects on motivation, affect, and performance. Journal of Applied Psychology, 85 (5), 724-738.
Stein, J.A., Smith, G.M., Guy, S.B., & Bentler, P.M. (1993). Consequences of adolescent drug use on young adult job behavior and job satisfaction. Journal of Applied Psychology, 78, 463-474.
Tang, S. & Hall, V.C. (1995). The overjustification effect: A meta-analysis. Applied Cognitive Psychology, 9, 365-404.
Taylor, F.W. (1911). The principles of scientific management. New York: Harper and Brothers.
Tsung-Chi, L. & Chung-Yu Wang (2008). Factors affecting attitudes toward private labels and promoted brands. Journal of Marketing Management, 24 (3/4), 283-298.
Tubbs, M.E. (1993). Commitment as a moderator of the goal-performance relation: A case for clearer construct definition. Journal of Applied Psychology, 78 (1), 86-97.
Van Eerde, W. & Thierry, H. (1996). Vroom's expectancy models and work-related criteria: A meta-analysis. Journal of Applied Psychology, 81 (5), 575-586.
Vroom, V. (1964). Work and motivation. New York: John Wiley & Sons.
Wall, T.D., Corbett, J.M., Martin, R., Clegg, C.W., & Jackson, P.R. (1990). Advanced manufacturing technology, work design, and performance: A change study. Journal of Applied Psychology, 75 (6), 691-697.
Wanous, J.P., Keon, T.L., & Latack, J.C. (1983). Expectancy theory and occupational/organizational choices: A review and test. Organizational Behavior and Human Performance, 32, 66-86.
Wheeler, A.R. & Buckley, M.R. (2001). Examining the motivation process of temporary employees: A holistic model and research framework. Journal of Managerial Psychology, 16 (5), 339-354.
Wong, C. & Campion, M.A. (1991). Development and test of a task level model of motivational job design. Journal of Applied Psychology, 76 (6), 825-837.
Work motivation theories. Retrieved on September 25, 2009 from http://www.oppapers.com/essays/Work-Motivation-Theories/210796.
10 U. Pa. J. Bus & Emp L 958.
56 Clev. St. L. Rev. 111.


GLASS CEILINGS AND GENDER GAPS: A SURVEY

Lesley Mace and Ken Linna Auburn University Montgomery, USA

ABSTRACT

Based on a survey given at a university economics forum, this paper investigates public opinion concerning important issues dealing with women in the workforce, such as the glass ceiling, the gender gap, women's role in the labor force, and various public policies used to address perceived gender inequalities in the workplace. The ability of women to balance career and family and serve in leadership roles was another topic addressed in the survey. Questions on employment choice and satisfaction were also asked of women who took a career break and then returned to the workforce. An overview of the labor economics literature dealing with these issues is given, including a discussion of both the gender and family gaps found in the wages of men and women, the effect of career breaks on women's wages, and possible wage differences resulting from differing career choices. Evidence is also examined regarding the existence of glass ceilings, sticky floors, and statistical discrimination toward women in the workplace. An examination of the survey data found important differences in opinion not just between men and women, but also between different age groups, even within gender categories, and that younger groups of workers particularly favor a policy role for government in achieving workplace equity.

Keywords: Economics of Gender, Labor Markets, Labor Economics, Labor Discrimination

1. INTRODUCTION

The labor force participation rate of adult women is now close to 60%, proof of the tremendous strides they have made in the workforce since 1950, when their participation was only 33%. Women are also increasing their leadership presence: a record number now head Fortune 500 companies, and prominent national leadership positions, such as Speaker of the House and, for the second time, Secretary of State, are held by women. Women are also making progress in education; 58% of all college degrees are now awarded to women, including 59% of all Master's degrees and 47% of all PhDs. Yet despite this progress, women still represent only 2% of all Fortune 500 CEOs and less than 20% of the members of the Senate and the House of Representatives. And women still earn only 81% of what men earn.

Two opposing schools of thought exist to explain why women have not achieved more in the four decades since the Equal Pay Act of 1963 and Title VII of the Civil Rights Act of 1964 were passed, putting equal pay for equal work into law and barring sex discrimination in the workplace. One school contends that persistent discrimination and sexism conspire to hold women back under what has been popularly called the "glass ceiling". The other school of thought, brought to light in the journals of labor economics, claims that the intermittent work history of women, and the losses in pay and seniority that accompany these "career breaks", are the main reason women less often achieve the career success of their male counterparts.

This paper reports the results of a survey, conducted at a conference on the subject held at Auburn University Montgomery, of a small sample of citizens, the majority of them women, on issues pertaining to women in the workforce. The paper is organized as follows: Section 1 gives a general overview of the literature dealing with both the glass ceiling and career breaks, and Section 2 presents the survey results with a general discussion of the findings.

2. SECTION 1

The Gender Gap
Literature on women in the workforce long focused on the "gender gap": the difference between the wages of men and the wages of women. Since the 1970s, that gap has fallen by half, from a difference of 40% to 19%. As women, and particularly mothers, have increased their labor force participation, now spend more years in the labor force, and are attaining more education than men, the question is no longer why the gap exists, but why it still exists. Recent literature has identified a new culprit: the "family gap". This gap has been found to explain almost half of the gender gap, with another 30-40% explained by work experience, something also affected by motherhood (Waldfogel, 1998B). This is especially pertinent today, when 72% of mothers with children under the age of 18 are in the workforce (Hymowitz, 2004).

Since 1980, the family gap has increased in importance as a reason women have not closed the gender gap (Waldfogel, 1998A). In 1991, the gap between men's and women's wages was actually smaller than the gap between the wages of mothers and nonmothers, indicating that something in parental responsibilities was affecting women's wages (Waldfogel, 1998B). Numerous studies have found that single women earn wages almost equal to those of their male counterparts, while married women lag behind. While women with children earn only 60-70% of men's wages, nonmothers have been found to earn as much as 95% of a man's wage, putting the gap between the wages of mothers and nonmothers at over 20% (Waldfogel, 1995, 1998A, 1998B; Blau and Kahn, 1997; Budig and England, 2001).

Several theories exist as to why mothers pay a wage penalty. Because most women eventually do have children, and because the wages of women who eventually have children and those who do not are found to be similar at age 21, heterogeneity does not seem to be a factor (Waldfogel, 1998A). Having children does, however, necessarily mean leaving the workforce, even if only for a short time. The average child-related career break has been estimated at 2.2 years, less for those employed in business (Hewlett and Luce, 2005), with the length and frequency of breaks inversely related to education (Mincer and Ofek, 1982) and with younger cohorts taking shorter breaks than their older counterparts (Waldfogel, 1998B).

What happens during these career breaks to erode the earnings of women? During career interruptions, human capital depreciates at a rate widely estimated at 1.5% a year, with a third of this being specific human capital (Mincer and Ofek, 1982; Mincer and Polachek, 1974). Women who take career breaks may find themselves returning to the workforce with outdated skills and contacts, and may be forced to reenter the labor force at a lower wage, particularly if they are starting with a new employer or changing fields (Spivey, 2005; Hewlett, 2005). Estimates of the negative impact of a child on a mother's wage range from 4-8% for one child and from 12-23% for two children (Waldfogel, 1997; Arun, 2004; Lundberg and Rose, 2000; Budig and England, 2001), with a penalty of 32% found for working mothers of three or more children (Davies and Pierre, 2005). This wage penalty increases with education level and the length of the break, with breaks of more than three years estimated to cost a woman 37% of her earning power (Hewlett and Luce, 2005). Those who returned to work part-time, as more mothers than nonmothers do, were found to face an additional ten percent penalty (Joshi, Paci, and Waldfogel, 1999; Waldfogel, 1997). Children therefore indirectly affect women's wages by lowering a woman's labor force attachment and her experience and tenure in a field (Korenman and Neumark, 1992), and may also lead a woman to invest less in human capital if she anticipates taking time out of the labor force for childrearing (Mincer and Ofek, 1982; Mincer and Polachek, 1974). One study found that fully 70% of the gender gap could be explained by differences in human capital and work experience, all of which erode during a career break (Joshi, Paci, and Waldfogel, 1999).
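
As a back-of-the-envelope illustration (computed here from the estimates cited above, not taken from any single study), depreciation at rate \delta compounds over a break of t years as

    K_t = K_0 (1 - \delta)^t,

so with \delta = 0.015 and the average 2.2-year break, K_t \approx 0.985^{2.2} K_0 \approx 0.967 K_0: roughly a 3.3% loss of human capital, even before any additional penalty from changing employers or fields.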

Becker proposed a different hypothesis, claiming that the exhausting nature of childcare and household responsibilities (which still fall predominantly on the woman, regardless of her education or occupational status) leaves women with little energy for market work. Women respond by putting less effort into their jobs or by choosing jobs that require less effort and rely on human capital skills that are not easily eroded, such as elementary teaching (Mincer and Ofek, 1982; Becker, 1985). If employers think likewise, they may offer mothers lower wages, hire them only for lower-level positions with less responsibility, or be reluctant to hire mothers at all. Employers may also fear that mothers will have frequent absences due to children being sick or childcare arrangements falling through. This statistical discrimination may further lower wages that could already be affected by occupational selection (Joshi, Paci, and Waldfogel, 1999).

Given this evidence, it is no surprise that a survey published in the Harvard Business Review found that 33% of successful career women aged 41-55 were childless, with that number rising to 42% in corporate America. Among women earning over $100,000, 49% were childless, compared to only 19% of men earning that salary. Only around 60% of high-achieving and corporate women were found to be married, compared to around 80% of their male counterparts (Hewlett, 2005). A study of female college presidents likewise found that they were less likely to be married than men (60% versus 90%) and more likely to be childless (32% versus 9%) (Pope, 2007).

The New York Times created a furor in 2005 when it reported on the "opting out" phenomenon: highly educated women who were leaving the workforce voluntarily, and not returning. A survey of female Stanford graduates revealed that 57% had left the workforce; only 38% of female Harvard Business School graduates were working full-time, as were only one-third of their female MBAs. This compares to only 5% of Harvard's male MBAs leaving the workforce.


Whether aware of the consequences or not, fully four out of ten women, and 43% of mothers, take a career break, usually for family responsibilities. These responsibilities usually involve childrearing but can also involve caring for other family members, such as elderly parents (Hewlett and Luce, 2005). For policymakers concerned about the ground subsequently lost by mothers returning to the workforce (and fully three-quarters eventually do return in some capacity) (Hewlett, 2002), the barriers to entry that women face after a long time out are a cause for alarm. Employers are also concerned about losing valuable employees. Policies that help women return to the workforce and retain their skills during a leave should be encouraged in a workforce that is currently 46% female (BLS, 2005). While maternity leave is a cost to employers, it has also been found to increase employee retention. Over 60% of all women return to their jobs after maternity leave, with the most educated the most likely to return. Because women who return to the same employer lose less specific human capital and seniority, maternity leave has been found to actually reduce the effects of a career break (Waldfogel, 1998A). On the other hand, the anticipation of maternity leave could cause employers to pay women lower wages (Summers, 1989). Women who were covered by maternity leave and returned to work after childbirth were found to receive a wage premium that offset the negative wage effect of children, or to face no penalty at all if they maintained continuous employment (Joshi, Paci, and Waldfogel, 1999). Those who return to the same job can benefit from their pre-birth tenure, good job matches, and seniority; job changers were found to pay a wage penalty (Waldfogel, 1998B). Maternity leave was also found to reduce turnover, increase commitment and productivity on the job, and increase the likelihood of a woman returning to work (Lundberg and Rose, 2000). Unfortunately, a recent study by the McGill University Institute for Health and Social Policy rated the United States' maternity leave policies as "among the worst"; the U.S. was among only 5 of 173 countries surveyed that did not provide some type of paid maternity leave (Schweitzer, 2007).

Workplace support plays a large role in job satisfaction, and those who are most satisfied with their jobs are most likely to return to them. Child care is also an issue that working women must contend with, even though most care is provided by relatives and not purchased in the market. The cost of child care has been found to negatively affect the probability that a woman will work (Connelly, 1992); in fact, Blau and Robins (1988) found that if childcare were fully subsidized, 87% of mothers would work. The high cost of childcare may explain the higher labor force participation rates and subsequently lower gender gaps seen in Scandinavia, where childcare is fully subsidized. Child tax credits such as those given in the United States, while a step in the right direction, are taken mainly by those in the middle and upper classes, who probably also have the higher education levels that lead them to participate at higher rates anyway (Blau and Robins, 1988). In the U.S., tech companies such as Google, IBM, Microsoft, and Sun Microsystems are leading the way on this issue in the private sector by providing options such as maternity and paternity leave programs, child care, flexible work schedules, disability leave, and the opportunity to work from home (Rothberg, 2006).

In order to understand the persistent gender gap, we must understand the differing nature of labor force participation by men and women. Family responsibilities, while willingly undertaken, penalize mothers who must leave the workforce for a time and then return later, seeking to make up lost ground. The contributing factors of lost human capital, experience, a perception of inferior work effort, and perhaps a lower investment in human capital and workplace skills to begin with, all conspire to lower the wages of mothers. Quick returns to the workforce, particularly to the same employer, and support of working mothers in the form of maternity leave and child care may serve to lessen these effects somewhat, but the “family gap” still remains.

The "Glass Ceiling"
Over twenty years have passed since the term "glass ceiling" made its way into modern terminology via an article in the Wall Street Journal. In 1995 a Congressional commission was established to investigate the nature, causes of, and cures for this phenomenon; today, the debate still rages as to whether it exists at all. Although studies have shown that companies with women directors have stronger financial performance than those with all-male boards (Hertz, 2006), in 2005 women held only 4% of the corporate officer positions in U.S. Fortune 500 companies (Arcieri, 2007) and only 9.4% of Fortune 500 jobs higher than vice president (Virzi, 2006). Believers point to statistics such as these, which are found worldwide (Wirth, 1998), as all the proof necessary that the glass ceiling indeed exists. There is also quite a difference in the way men and women view the glass ceiling, a result found not only in our survey but also in a survey published in the Wall Street Journal, in which four times as many women as men believed women face a glass ceiling (Badal, 2006).

Empirically, the glass ceiling has been defined as a widening of the gender wage gap at the top end of the wage distribution. A related phenomenon is the sticky floor, where the gender wage gap is wider at the lower end of the pay distribution. This occurs when women and men are employed in the same ranks or positions, but women's starting salaries tend to fall at the lower end of the range for the position while men are offered wages at the higher end of the pay scale for the same position and/or rank, leaving women behind from the very beginning.
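
Schematically (the notation here is ours, not drawn from any single study below), this literature estimates the gender gap at each quantile q of the log-wage distribution,

    \Delta(q) = Q_q(\ln w \mid \text{men}) - Q_q(\ln w \mid \text{women}),

typically via quantile regressions with controls, and reads a glass ceiling from \Delta(0.90) substantially exceeding the gap at the median, and a sticky floor from \Delta(0.10) exceeding it.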

86

L. Mace and K. Linna Volume 7 – Fall 2009

There is a smaller body of literature that specifically concerns itself with the glass ceiling, and much of it rests on the empirical body of work on the gender pay gap. Albrecht, Bjorkland, and Vroman (2003) investigated whether there was a glass ceiling in Sweden during a period when the gender pay gap was decreasing overall. They found a sharp increase in the gender wage gap starting at the 75th or 80th percentile of the wage distribution, even after controlling for education, sector, and industry. Despite the many family support policies available in Sweden, the Swedish glass ceiling effect was larger than that found in the United States or other non-Scandinavian European countries. Kee (2006) examined Australian data and found a strong glass ceiling effect in the private sector, while the sticky floor phenomenon was more prevalent in the public sector. Another Australian study, by Connell (2006), also suggested a glass ceiling that manifested itself primarily in the gender division of labor. Arulampalam, Booth, and Bryan (2007) studied eleven European countries and found glass ceilings in nine and sticky floors in two; for all but two countries, the glass ceiling was greater in the private sector than in the public sector. Gang, Landon-Lane, and Yun (2003) found glass ceiling effects in both Germany and the United States, concluding that men have approximately 30% more upward mobility than women in the upper income classes. McDowell, Singell, and Ziliak (1999) found that the average female economist is 36% less likely to be promoted from assistant to associate professor, suggesting a glass ceiling in the economics profession. On the upside, the overall proportion of tenured women faculty is rising, even though it is still only 25% at doctoral-granting universities (Pope, 2007).

On the other side of the debate, two notable exceptions are found in a paper dealing with the legal profession and another on public sector administration. Baker (2003) sampled high-income law school graduates and discovered more evidence for a sticky floor than for a glass ceiling, attributing the sticky floor to self-limiting career moves women make to accommodate child-rearing and other family obligations. Bowling, Kelleher, Jones, and Wright (2006) looked at the leadership of state agencies in all 50 states. Their research showed that female leadership has been steadily increasing since the 1970s, with the number of women serving as state agency heads growing from 1 in 20 in 1974 to 1 in 3 in 2004. Women agency executives earned 95% or more of what their male counterparts earned, a degree of equity that does not yet exist in the private sector, where women in full-time management and professional jobs average about 73% of men's pay, and female top executives still earn one-half to one-third the salary of male executives (Sied, 2006).

Explanations for the glass ceiling results mirror those for the gender pay gap: women do not work continuously throughout their lives and often take breaks at points in time that are crucial to their careers; women self-select into less demanding jobs or part-time work in order to accommodate family responsibilities; female academics emphasize teaching over the more highly rewarded research; and women show a lower level of commitment to careers, whether real or merely perceived by the employers who assign jobs accordingly. A report in the Economist magazine (2005) identified several key factors that seem important in the persistence of the glass ceiling. Exclusion from informal networks such as the golf course can leave women on the outside of important deals; to remedy this, some companies now offer golf lessons for their female employees. Women are also held back by statistical discrimination and a feeling that they are less qualified for leadership. The lack of female role models may reinforce these stereotypes among male employees and leaves female employees with few female mentors to emulate. The flattening of management levels in recent years, as organizations seek to become more efficient, also leaves fewer openings for women, especially those seeking to re-enter the workforce after a career break. Both Martin (2007) and Wirth (1998) found that while flexible working arrangements might seem to be a positive for women, women who take advantage of them are seen as less committed to the workforce and therefore less likely to advance into higher positions. Perhaps partly in response, women are establishing their own businesses in record numbers: over 9 million U.S. companies, representing 38% of the total, are women-owned (Bolte, 2006), and women are launching small businesses at more than twice the rate of men (Hymowitz, 2006).

3. SECTION 2
The 2006 Auburn University Montgomery Business and Economic Forum, entitled "The New Role of Women in the National and State Economies," focused on this issue of increasing importance in today's workplace. Speakers included Ms. Karen Ransom, Economist with the Bureau of Labor Statistics; Rosemary Elebash, Alabama State Director of the National Federation of Independent Business; Joyce Bigbee, Director of the Alabama Legislative Fiscal Office; Kim Hendrix, News Reporter for WSFA-TV in Montgomery, Alabama; Dr. Melinda Pitts, Research Economist and Associate Policy Advisor at the Federal Reserve Bank of Atlanta; and Dr. Donna Paul, Medical Director of the Rheumatology and Osteoporosis Center in Montgomery. Attending the forum were faculty members from the School of Business, business students, high school economics students, and businesswomen from around the state.

All attendees were asked to complete a survey at the end of the forum, evaluating the forum itself, providing demographic information on gender, age, education level, and employment status, and giving their opinions on questions relating to women in the workforce. This information was then analyzed to assess opinions in general and to assess differences in opinions among different demographic groups.

Questions Asked
The original survey instrument was completed by 52 participants after the panel of speakers had completed their presentations; 22 additional surveys were obtained from economics students, and another 62 were completed online to increase the sample size and verify poll results for the statistical analysis, for a total of 136 responses. Participants were given the following questions and asked to rate their opinion as "Strongly Agree", "Agree", "Neutral", "Disagree", or "Strongly Disagree".

1. The "glass ceiling" is an obstacle to the career success of women.
2. Employers should hold open a woman's job past the legal mandatory time given for maternity leave.
3. Employers should provide on-site daycare.
4. A woman can successfully balance career and family.
5. I believe money is the most compelling reason women stay in the workforce.
6. Most women who took a career break wish they had not.
7. A job or career is important for self-fulfillment.
8. Women are qualified to take on leadership roles in business and government.

Those who took a “career break” were also asked to answer the following:

a. It was difficult to resume my career at the same level I left. (Yes or No)
b. I am happy I returned to work. (Yes or No)
c. The primary reason I returned to work was: (Career / Boredom / Money)

Demographic Statistics
Two-thirds of our survey respondents were under age 30, reflecting the fact that 42% of our surveys were completed by current high school students and another 40% by current college students. Nearly 40% of the respondents had completed some college, with about 14% having completed a Bachelor's degree and just under 6% a postgraduate degree. Only one survey was completed by a PhD holder. The program drew a predominantly female audience, with only 19% of the original surveys from attendees being completed by males, half of them current high school students. The additional surveys raised this percentage to just over 25%.

Overall Results

• 43% of those surveyed either agreed or strongly agreed that the glass ceiling is an obstacle to the career success of women; 48% were neutral on this point.
• 56% believed that employers should hold open a woman's job past the legal mandatory time for maternity leave, but 20% disagreed.
• Over 55% were in favor of on-site daycare, with 28.4% neutral.
• 87% either agreed or strongly agreed that a woman can successfully balance career and family. Only 4% disagreed.
• Nearly 60% agreed or strongly agreed that money is the most compelling reason women stay in the workforce.
• 34% disagreed or strongly disagreed that most women who took a career break wish they had not; 19% agreed, with 42% neutral.
• 68% felt that a job or career was important for self-fulfillment; 16% disagreed or strongly disagreed.
• 95% of those surveyed felt women are qualified to take on leadership positions.
• 100% of men in the original survey reported working for primary income; 75% of women did. (Since the majority of additional surveys came from college and high school students, their responses to this question were not included.)
• Of those who took a career break, 42% felt it was difficult to resume their careers at the same level they left; 58% did not.
• 92% of those who took a career break were happy they returned to work.
• 56% of those who took a career break returned for money, followed by career (36%) and boredom (8%).


Results By Gender
The survey results are presented in Table 1. Not surprisingly, men and women had differences of opinion when it came to issues involving women in the workforce.

Table 1: Response by Gender (percentages)
Key: SA = Strongly Agree; A = Agree; N = Neutral; D = Disagree; SD = Strongly Disagree.
Groups: All = all survey respondents; M = male respondents; W = female respondents.

The glass ceiling is an obstacle to the career success of women
  All: SA 6.8, A 36.6, N 48.0, D 6.8, SD 1.5
  M: SA 2.77, A 30.5, N 52.7, D 13.8, SD 0
  W: SA 8.42, A 38.9, N 46.3, D 4.21, SD 2.10

Employers should hold open a woman's job past the legal mandatory time given for maternity leave
  All: SA 15.8, A 40.4, N 20.6, D 19.8, SD 3.17
  M: SA 8.57, A 34.2, N 28.5, D 25.7, SD 2.85
  W: SA 18.6, A 42.85, N 17.58, D 17.58, SD 3.29

Employers should provide on-site daycare
  All: SA 23.0, A 32.3, N 28.4, D 14.6, SD 1.53
  M: SA 14.7, A 32.3, N 23.5, D 29.4, SD 0
  W: SA 26.0, A 32.2, N 30.2, D 9.3, SD 2.08

A woman can successfully balance career and family
  All: SA 48.3, A 38.9, N 8.47, D 4.23, SD 0
  M: SA 25.8, A 38.7, N 22.5, D 12.9, SD 0
  W: SA 56.3, A 39.0, N 3.44, D 1.14, SD 0

I believe money is the most compelling reason women stay in the workforce
  All: SA 20.7, A 36.9, N 19.2, D 19.2, SD 3.84
  M: SA 14.7, A 38.2, N 26.4, D 17.6, SD 2.94
  W: SA 22.9, A 36.4, N 16.6, D 19.79, SD 4.16

Most women who took a career break wish they had not
  All: SA 4.83, A 19.35, N 41.9, D 29.0, SD 4.83
  M: SA 0, A 18.18, N 63.6, D 18.18, SD 0
  W: SA 6.59, A 19.78, N 34.0, D 32.9, SD 6.59

A job or career is important for self-fulfillment
  All: SA 32.5, A 35.7, N 15.4, D 12.1, SD 4.06
  M: SA 36.36, A 27.27, N 12.12, D 24.24, SD 0
  W: SA 31.1, A 38.8, N 16.66, D 7.77, SD 5.55

Women are qualified to take on leadership roles in business and government
  All: SA 66.9, A 28.4, N 3.07, D 1.53, SD 0
  M: SA 41.77, A 47.05, N 5.88, D 5.88, SD 0
  W: SA 76.0, A 21.8, N 2.08, D 0, SD 0

While nearly 50% of females believed that the glass ceiling presents an obstacle to women, only 33% of men agreed. Significantly more women (61.4%) than men (43%) agreed that a woman's job should be held open past mandatory maternity leave. Close to half of men and almost 60% of women were in favor of on-site childcare. In perhaps the most interesting result, while almost 100% of women believed that a woman can successfully balance career and family, just under two-thirds of men agreed. Around 26% of both men and women believed that most women who took career breaks wished they had not. Interestingly, more women than men agreed that a career or job was important for self-fulfillment, although slightly more men (36% versus 31%) agreed strongly. 89% of men and 98% of women believed that women were qualified to take on leadership roles.

Chi-square tests of independence were performed relating the response distributions on the survey items to the age and gender variables. Adjacent response categories were combined when necessary to avoid cells with expected counts less than 5. The results showed that men and women exhibited statistically different responses to questions 4, 6, and 8 (p-values .003, .014, and .000, respectively). On items 4 and 8, "A woman can successfully balance career and family" and "Women are qualified to take on leadership roles in business and government", women were significantly more likely to strongly agree with the statement. For item 6, "Most women who took a career break wish they had not", men were more likely to respond neutral, while women tended to either agree or disagree. No statistically significant differences in responses existed between men and women for the other survey items.
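As a concrete illustration of the procedure just described, the following is a minimal sketch (in Python with scipy; not the authors' code) of a chi-square test of independence between gender and a five-point response scale. The observed counts are hypothetical stand-ins, since the paper reports only percentages, and the pooling helper is one plausible way to implement the stated rule of combining adjacent response categories until every expected count reaches 5; the same routine would apply unchanged to the age-based comparisons reported later.

```python
# Minimal sketch, not the authors' code. Counts are hypothetical:
# columns are Strongly Agree, Agree, Neutral, Disagree, Strongly
# Disagree; row 0 is men, row 1 is women.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [14, 16, 3, 2, 1],
    [73, 21, 2, 1, 1],
])

def pool_sparse_categories(table):
    """Merge the sparsest column into an adjacent one until every
    expected cell count is at least 5 (or only two columns remain)."""
    table = table.astype(float)
    while table.shape[1] > 2:
        _, _, _, expected = chi2_contingency(table)
        if expected.min() >= 5:
            break
        j = expected.min(axis=0).argmin()               # sparsest column
        k = j + 1 if j + 1 < table.shape[1] else j - 1  # adjacent column
        table[:, min(j, k)] += table[:, max(j, k)]
        table = np.delete(table, max(j, k), axis=1)
    return table

pooled = pool_sparse_categories(observed)
chi2, p, dof, _ = chi2_contingency(pooled)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p-value = {p:.4f}")
```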

The divergence in opinions on such matters as the glass ceiling, maternity leave, and balancing career and family is troubling in that male employers and female employees are likely not to see eye to eye on important issues that largely impact women. The results also show that men may not fully understand that career breaks can lead to career setbacks for women. Men may agree that women are capable of taking on leadership roles, but may believe they will have difficulty doing so. While our survey showed that more women than men valued careers as important for self-fulfillment, this may reflect the young age bias in the sample, as older women were less likely to agree. Men who see careers as foremost in determining self-fulfillment may not understand women who get just as much or more fulfillment from their roles at home. When this is the case, statistical discrimination may be more likely to occur.

Results By Age
The survey results are presented in Table 2. An interesting question we wished to research, especially given the large number of survey takers who were high school students, was whether young women, who are just poised to enter the workforce, differ significantly in their views on the workplace environment that women face. While not yet tempered by experience, the opinions of young women may well determine future outcomes for women who work. Results were divided between high school girls and women over the age of 40, essentially a younger and an older cohort; the overall statistical analysis considered the age categories of below 20, 20 to 29, and 30 and older, as well as teenage girls in comparison to older women. The high school girls were aged 17 and 18, while the older women ranged in age from 41 to 63. 68% of the older cohort had at least a Bachelor's degree, with 32% holding advanced degrees. 67% of the older women agreed that the glass ceiling was an obstacle to the career success of women; only 31% of the high school girls agreed. High school girls were much more likely to agree (80%) than those over 40 (63%) that jobs should be held open for women past maternity leave. Six percent more of the older women agreed that employers should provide on-site daycare. While 97% of the high school females believed that a woman can successfully balance career and family, only 79% of the older women agreed. While over three-quarters of the women over 40 surveyed felt that money was the most compelling reason women stayed in the workforce, only half of the younger women did. About ten percent more of the older women agreed that most women regretted taking career breaks. Slightly more of the high school aged women believed a career or job is important for self-fulfillment. Both groups unanimously agreed that women are qualified to take on leadership roles, with 76% of the young women agreeing strongly, compared to 58% of the women over 40.


Table 2: Female Response by Age and Career Break (percentages)
Key: SA = Strongly Agree; A = Agree; N = Neutral; D = Disagree; SD = Strongly Disagree.
Groups: HS = high school aged women; WF = women over age 40; CB = women who took career breaks.

The glass ceiling is an obstacle to the career success of women
  HS: SA 6.06, A 24.2, N 66.6, D 3.03, SD 0
  WF: SA 16.6, A 50.0, N 27.7, D 5.55, SD 0
  CB: SA 4.0, A 52.0, N 36.0, D 0, SD 8.0

Employers should hold open a woman's job past the legal mandatory time given for maternity leave
  HS: SA 13.8, A 66.6, N 13.8, D 5.55, SD 0
  WF: SA 15.78, A 47.36, N 26.3, D 10.52, SD 0
  CB: SA 24.0, A 28.0, N 16.0, D 24.0, SD 8.0

Employers should provide on-site daycare
  HS: SA 17.1, A 40.0, N 31.4, D 11.4, SD 0
  WF: SA 26.3, A 36.8, N 36.8, D 0, SD 0
  CB: SA 29.1, A 20.83, N 41.6, D 8.33, SD 0

A woman can successfully balance career and family
  HS: SA 55.8, A 41.1, N 2.9, D 0, SD 0
  WF: SA 31.57, A 47.3, N 10.52, D 0, SD 0
  CB: SA 52.3, A 42.85, N 4.76, D 4.76, SD 0

I believe money is the most compelling reason women stay in the workforce
  HS: SA 17.6, A 32.3, N 14.7, D 29.4, SD 5.88
  WF: SA 38.8, A 38.8, N 11.11, D 5.55, SD 5.55
  CB: SA 19.2, A 50.0, N 7.69, D 19.2, SD 3.84

Most women who took a career break wish they had not
  HS: SA 8.82, A 20.5, N 41.1, D 26.4, SD 2.94
  WF: SA 5.88, A 35.29, N 11.76, D 41.17, SD 5.88
  CB: SA 0, A 20.0, N 24.0, D 44.0, SD 12.0

A job or career is important for self-fulfillment
  HS: SA 32.3, A 38.2, N 14.7, D 14.7, SD 0
  WF: SA 29.4, A 35.2, N 29.4, D 0, SD 5.88
  CB: SA 30.7, A 38.46, N 15.3, D 0, SD 15.3

Women are qualified to take on leadership roles in business and government
  HS: SA 76.4, A 23.5, N 0, D 0, SD 0
  WF: SA 57.8, A 42.1, N 0, D 0, SD 0
  CB: SA 76.9, A 19.2, N 3.84, D 0, SD 0

If you took a career break:
  It was difficult to resume my career at the same level I left: Yes = 41.6, No = 58.3
  I am happy I returned to work: Yes = 92.3, No = 7.69
  The primary reason I returned to work was: Career = 36, Boredom = 8, Money = 56


Results were also examined by age alone, excluding gender, and are shown in Table 3. Three categories were established: teenagers, those aged 20-29, and those aged 30 and over. Those over thirty were more than twice as likely as teens to believe that the glass ceiling is an obstacle to the success of women, and about a third more likely than those in their twenties to agree or strongly agree. Teens were the group most in favor of extending maternity leave, while the oldest group was most strongly in favor of on-site daycare. Age did not make a significant difference on the question of balancing career and family, but the two younger groups were more likely to disagree that money was the most compelling reason for women to stay in the workforce, with the oldest group most likely to answer affirmatively. Answers were evenly divided on career breaks, but teens who had an opinion tended to disagree that women regretted taking career breaks. Those over 30 were slightly more likely to believe a job or career was important for self-fulfillment; more than one in five teens disagreed. No respondent over age 30 disagreed on the qualifications of women to take on leadership roles.

Table 3: Responses by Age (percentages)
Key: SA = Strongly Agree; A = Agree; N = Neutral; D = Disagree; SD = Strongly Disagree.
Groups: TN = teenagers; TW = respondents aged 20-29; TH = respondents age 30 and over.

The glass ceiling is an obstacle to the career success of women
  TN: SA 5.76, A 26.9, N 63.4, D 3.84, SD 0
  TW: SA 4.44, A 37.7, N 42.2, D 11.1, SD 4.44
  TH: SA 9.09, A 54.4, N 30.3, D 6.06, SD 0

Employers should hold open a woman's job past the legal mandatory time given for maternity leave
  TN: SA 10.7, A 57.1, N 16.0, D 12.5, SD 3.57
  TW: SA 22.5, A 30, N 22.5, D 22.5, SD 2.5
  TH: SA 16.6, A 26.6, N 26.6, D 26.6, SD 3.33

Employers should provide on-site daycare
  TN: SA 15.7, A 36.8, N 26.3, D 19.2, SD 1.75
  TW: SA 31.7, A 24.3, N 29.2, D 14.6, SD 0
  TH: SA 25, A 43.75, N 21.8, D 9.37, SD 0

A woman can successfully balance career and family
  TN: SA 43.3, A 43.3, N 7.5, D 5.66, SD 0
  TW: SA 63.8, A 27.7, N 2.77, D 5.55, SD 0
  TH: SA 39.2, A 42.8, N 17.8, D 0, SD 0

I believe money is the most compelling reason women stay in the workforce
  TN: SA 19.6, A 39.2, N 13.7, D 23.5, SD 3.92
  TW: SA 18.1, A 31.8, N 27.2, D 18.1, SD 4.54
  TH: SA 28.1, A 40.6, N 15.6, D 12.5, SD 3.125

Most women who took a career break wish they had not
  TN: SA 5.66, A 16.9, N 47.1, D 28.3, SD 1.88
  TW: SA 5.12, A 15.3, N 41.0, D 28.2, SD 10.2
  TH: SA 3.22, A 29.0, N 32.3, D 32.2, SD 3.22

A job or career is important for self-fulfillment
  TN: SA 32.0, A 30.1, N 15.0, D 20.7, SD 1.88
  TW: SA 30.7, A 35.8, N 12.8, D 10.2, SD 7.69
  TH: SA 29.0, A 45.1, N 19.3, D 3.22, SD 3.22

Women are qualified to take on leadership roles in business and government
  TN: SA 66.0, A 30.1, N 1.88, D 1.88, SD 0
  TW: SA 70.4, A 20.4, N 6.81, D 2.27, SD 0
  TH: SA 62.5, A 37.5, N 0, D 0, SD 0


To test for response differences by age, chi-square tests were performed using three age categories: below 20, 20 to 29, and 30 and older. The three age groups differed significantly only in their responses to item one, on the glass ceiling (p-value .023). For that item, the under-20 group was more likely to disagree, the 20-29 group was closer to neutral, and the 30-and-over group was more likely to agree with the statement.

In comparing the teenage girls to the older women, chi-square tests showed that older women were more likely to believe that the glass ceiling is an obstacle to the career success of women (p-value .011) and less likely than the younger women to agree that a woman's job should be held open past maternity leave (p-value .049). These two questions were the only ones found significant in the comparison of the two age groups.
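To make the expected-count rule behind these comparisons concrete, the short fragment below (again with hypothetical counts rather than the paper's raw tallies) computes expected frequencies directly from the marginal totals; cells falling below 5 are what trigger the pooling of adjacent categories before the test is run.

```python
# Illustrative only, with assumed counts. For a contingency table,
# the expected frequency is E[i][j] = row_i total * col_j total / n.
import numpy as np

# Rows: high school girls, women over 40; columns: the five Likert
# responses to the glass ceiling item (hypothetical counts).
observed = np.array([
    [2, 8, 22, 1, 1],
    [3, 9,  5, 1, 0],
])
row_totals = observed.sum(axis=1, keepdims=True)   # shape (2, 1)
col_totals = observed.sum(axis=0, keepdims=True)   # shape (1, 5)
expected = row_totals @ col_totals / observed.sum()
print(np.round(expected, 2))  # several cells fall below 5 -> pool columns
```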

The attitudes of the young women in our survey reflect a generation that sees fewer obstacles to success than the older group does, is more confident in its ability to balance a career and family and to take on leadership roles, and seems to view a career as important for more than money. This shift in attitude may help them have more positive experiences in the workforce than older cohorts, the majority of whom took career breaks, sometimes found that returning to the workforce at the same level was difficult, and were most likely to return to the workforce for money.

4. CONCLUSION
While admittedly based on a small sample, our survey uncovered many important differences in opinion between men and women, and between women of different ages, on issues pertinent to the new role women are playing in the economy. While men are much less likely to believe the glass ceiling is an obstacle to women's success, they are also less convinced of a woman's ability to balance career and family, and are less supportive of extended maternity leave and on-site daycare. Younger women are less likely than their older counterparts to believe in the glass ceiling as a hindrance to career success, and seem to see their future as one with a strong commitment to career and leadership. They see policies such as flexible maternity leave as important to their ability to balance career and family, which they are confident they can do. The attitudes of the men in our survey, a third of whom are not confident that a woman can successfully balance career and family, who are less supportive of extended maternity leave and on-site childcare, and who may not understand the sacrifices career breaks often entail, support the theory that the "glass ceiling" may indeed be real and that statistical discrimination may work against women as they try to begin or restart careers and are seen as less committed and/or capable.

Those surveyed who took a career break supported the findings of numerous studies that, for many women, resuming a career at the same level is difficult; yet only 20% agreed that women who took career breaks wish they had not. They believed in the ability of women to successfully balance career and family, and were nearly unanimous in their satisfaction with returning to work, even when money was the primary motivation for their return.

A rational choice approach would suggest that the willingness of women to take career breaks despite these costs indicates that the benefits outweigh the penalties they may pay later when they return to the workforce. Policies that ease the return or shorten the time spent out of the labor force, such as maternity leave and childcare subsidies, reduce this penalty and may narrow the "family gap" that is responsible for much of the "gender gap" that still remains. The Department of Labor projects that 51% of labor force growth between 2004 and 2014 will come from the participation of women. With a younger generation seemingly even more committed to market work, it is in the interest of employers, government, and both men and women to make the workplace more accessible to those who choose to "have it all."

REFERENCES
1. Albrecht, James, Björklund, Anders, and Vroman, Susan. (2003). Is There a Glass Ceiling in Sweden? Journal of Labor Economics, 21:1, 145-78.
2. Arcieri, Katie. (2007). Breaking the Glass Ceiling. The Capital, February 25, 2007, B4.
3. Arulampalam, Wiji, Booth, Alison L., and Bryan, Mark L. (2007). Is There a Glass Ceiling Over Europe? Exploring the Gender Pay Gap Across the Wage Distribution. Industrial and Labor Relations Review, 60:2, 163-185.
4. Arun, Shoba, Arun, Thankom, and Borooah, Vani. (2004). The Effect of Career Breaks on the Working Lives of Women. Feminist Economics, 10:1, 65-84.
5. Baker, Joe G. (2003). Glass Ceilings or Sticky Floors? A Model of High-Income Law Graduates. Journal of Labor Research, 24:4, 695-712.
6. Badal, Jaclyne. (2006). Surveying the Field/Cracking the Glass Ceiling. Wall Street Journal, June 19, 2006, B3.
7. Becker, Gary. (1985). Human Capital, Effort, and the Sexual Division of Labor. Journal of Labor Economics, 3:1, S33-S58.
8. Bolte, C. Clint. (2006). Time to Break Through the Glass Ceiling. The Seybold Report, May 30, 2006, 14-15.
9. Blau, David, and Robins, Philip. (1988). Child-Care Costs and Family Labor Supply. Review of Economics and Statistics, 70, 374-381.
10. Blau, Francine, and Kahn, Lawrence. (1997). Swimming Upstream: Trends in the Gender Wage Differential in the 1980s. Journal of Labor Economics, 15:1, 1-42.
11. Bowling, Cynthia J., Kelleher, Christine A., Jones, Jennifer, and Wright, Deil S. (2006). Cracked Ceilings, Firmer Floors, and Weakened Walls: Trends and Patterns in Gender Representation among Executives Leading American State Agencies, 1970-2000. Public Administration Review, 66:6, 823-837.
12. Budig, Michelle, and England, Paula. (2001). The Wage Penalty for Motherhood. American Sociological Review, 66, 204-225.
13. Connelly, Rachel. (1992). The Effect of Child Care Costs on Married Women's Labor Force Participation. The Review of Economics and Statistics, 74:1, 83-90.
14. Bureau of Labor Statistics. (2005). Women in the Labor Force: A Databook.
15. Bureau of Labor Statistics. (2005). Women in the Labor Force in 2005. Accessed February 19, 2007.
16. Bureau of Labor Statistics. (2007). Usual Weekly Earnings of Wage and Salary Workers: Fourth Quarter 2006.
17. Connell, Raewyn. (2006). Glass Ceilings or Gendered Institutions? Mapping the Gender Regimes of Public Sector Worksites. Public Administration Review, 66:6, 837-850.
18. Davies, Rhys, and Pierre, Gaelle. (2005). The Family Gap in Pay in Europe: A Cross-Country Study. Labour Economics, 12:4, 469-486.
19. The Economist. (2005). The Conundrum of the Glass Ceiling. 376:8436, 63-65.
20. Gang, Ira N., Landon-Lane, John, and Yun, Myeong-Su. (2003). Does the Glass Ceiling Exist? A Cross-National Perspective on Gender Income Mobility. Economics Working Papers.
21. Hertz, Noreena. (2006). Come On, Get Your Sledgehammers Out. New Statesman, August 7, 2006, 20.
22. Hewlett, Sylvia Ann. (2002). Executive Women and the Myth of Having It All. Harvard Business Review, 80:4, 66-73.
23. Hewlett, Sylvia Ann, and Luce, Carolyn Buck. (2005). Off-Ramps and On-Ramps. Harvard Business Review, 83:7/8, 184-185.
24. Hymowitz, Carol. (2004). While Some Women Choose to Stay Home, Others Gain Flexibility. Wall Street Journal, March 30, 2004, B1.
25. Hymowitz, Carol. (2006). Women Swell Ranks of Middle Managers, But Are Scarce at Top. Wall Street Journal, July 24, 2006, B1.
26. Joshi, Heather, Paci, Pierella, and Waldfogel, Jane. (1999). The Wages of Motherhood: Better or Worse? Cambridge Journal of Economics, 23:5, 543-564.
27. Kee, Hiau Joo. (2006). Glass Ceiling or Sticky Floor? Exploring the Australian Gender Pay Gap. Economic Record, 82:258, 408-428.
28. Korenman, Sanders, and Neumark, David. (1992). Marriage, Motherhood, and Wages. The Journal of Human Resources, 27:2, 233-255.
29. Lundberg, Shelly, and Rose, Elaina. (2000). Parenthood and the Earnings of Married Men and Women. Labour Economics, 7:6, 689-710.
30. Martin, Arthur. (2007). Women Still Battling to Break Through Glass Ceiling. Sunday Territorian, January 21, 2007, 19.
31. McDowell, John M., Singell, Larry D., and Ziliak, James P. (1999). Cracks in the Glass Ceiling: Gender and Promotion in the Economics Profession. American Economic Review Papers and Proceedings, 89:2, 392-396.
32. Mincer, Jacob, and Ofek, Haim. (1982). Interrupted Work Careers: Depreciation and Restoration of Human Capital. Journal of Human Resources, 17:1, 3-24.
33. Mincer, Jacob, and Polachek, Solomon. (1974). Family Investments in Human Capital: Earnings of Women. Journal of Political Economy, 82:2, S76-S108.
34. Pope, Justin. (2007). Harvard Poised to Name Woman President: A High-Profile Crack in the Glass Ceiling. Associated Press, February 9, 2007.
35. Rothberg, Deborah. (2006). Tech's Glass Ceiling Shows Some Cracks. eWeek, June 19, 2006, 26.
36. Schweitzer, Tamara. (2007). U.S. Policies on Maternity Leave 'Among the Worst'. Inc.com, February 16, 2007.
37. Seid, Jessica. (2006). 10 Best-Paid Executives: They're All Men. CNNMoney.com, October 10, 2006.
38. Summers, Lawrence H. (1989). Some Simple Economics of Mandated Benefits. American Economic Review, 79:2, 177-183.
39. Spivey, Cynthia. (2005). Time Off at What Price? The Effects of Career Interruptions on Earnings. Industrial and Labor Relations Review, 59:1, 119-140.
40. Story, Louise. (2005). Many Women at Elite Colleges Set Career Path to Motherhood. The New York Times, September 20, 2005, A1.
41. Virzi, Anna Marie. (2006). Women CIOs: How to Smash the Glass Ceiling. eWeek, December 20, 2006.
42. Waldfogel, Jane. (1995). The Price of Motherhood: Family Status and Women's Pay in a Young British Cohort. Oxford Economic Papers, 47:4, 584-610.
43. Waldfogel, Jane. (1997). The Effect of Children on Women's Wages. American Sociological Review, 62:2, 209-217.
44. Waldfogel, Jane. (1998a). The Family Gap for Young Women in the United States and Britain: Can Maternity Leave Make a Difference? Journal of Labor Economics, 16:3, 505-546.
45. Waldfogel, Jane. (1998b). Understanding the "Family Gap" in Pay for Women with Children. Journal of Economic Perspectives, 12:1, 137-156.
46. Wirth, Linda. (1998). Women in Management: Closer to Breaking Through the Glass Ceiling? International Labour Review, 137:1, 93-102.


BODY ART: THE QUESTION OF HIRING EMPLOYEES WITH VISIBLE BODY ART

William J. Carnes and Nina Radojevich-Kelley
Metropolitan State College of Denver, USA

ABSTRACT
In many cases today, body art seems to be becoming more acceptable throughout society in general. However, the same acceptance does not yet seem to be occurring in the workplace. In this paper, the authors address three research questions concerning body art: 1) Although corporate culture changes over time, does it necessarily change as often as it should? 2) Have corporate dress codes been affected by body art in the workplace? 3) Is it discrimination if employers do not hire applicants with visible body art? For the purpose of this article, the authors define body art as any tattoos, brands or piercings not natural to the human body that individuals add as a decoration or statement. Although sometimes associated with the younger generation, body art is practiced among all generations. In fact, some older people are using body art as a means of applying permanent beauty procedures. One clear indication of a cultural shift in attitudes about body art is the increased prevalence of body art made for children. Currently, temporary tattoos are available for children featuring a number of popular characters, such as famous cartoon and children's movie characters. In addition, the authors explore some of the legal cases and religious accommodations surrounding body art. The authors conclude the article by suggesting guidelines for managers to help ensure that they are implementing legal and favorable policies regarding body art.

Keywords: Body Art, Tattoos, Discrimination, Generations, Workplace

Body art: is it a fad, or a cultural shift? Historically, body art was visible among select demographic subgroups of society. Today, however, body art is no longer restricted to one particular demographic group; instead, it has become mainstream in modern culture (Wohlrab, Stahl & Kappeler, 2007). Given this phenomenon, it is important for managers to acknowledge and understand the changing cultural needs of the workforce, specifically in the area of changing attitudes toward body art in relation to dress code standards in the workplace. The intent of this article is to address the following research questions: 1) Although corporate culture changes over time, does it necessarily change as often as necessary? 2) Have corporate dress codes been affected by body art in the workplace? 3) Is it discrimination if employers do not hire applicants with visible body art? For the purpose of this article, the authors define body art as any tattoos, brands or piercings not natural to the human body that individuals add as a decoration or statement.

THE BODY ART MOVEMENT
From an early age, people learn to differentiate between others by assessing their appearance. People perceive some aspects of appearance as acceptable, while objecting to other aspects. Although the intent of learning to differentiate is not to create biases, people do create biases as part of the maturing process. "Unfortunately, many of us tend to believe that there is an objective reality and that all of our perceptions are accurate in understanding that reality" (Hoffman, Krahnke & Bell, n.d.). As a result, people make decisions about others using the preconceived, and sometimes stereotypical, ideas that they develop. For example, when individuals observe body art on others, they may stereotype them as risk takers, carefree, risqué, socially marginal, using poor judgment, impulsive, subject to peer pressure, fashion forward, cool or trendy (Armstrong, Roberts, Owen & Koch, 2004). These perceptions can lead individuals to draw inaccurate assumptions and conclusions about others.

Historical Perspective
Body art, such as tattooing and body piercing, dates back thousands of years (Armstrong, 2005). "Humans have in fact been adorning themselves with tattoos, piercings, paint, scars and other forms of permanent and semi-permanent ornamentations for tens of thousands of years" (LaFee, 2006). In 1991, in the Austrian Alps, archeologists discovered a 5000-year-old Iceman who had at least 57 tattoos covering his body (LaFee, 2006). Ancient Celts permanently painted their bodies with extracts from the mustard plant family. In the South Pacific, tattooing is a common practice among Tahitian cultures. In fact, many believe that the word "tattoo" was derived from the Tahitian word "tatau", meaning "to mark something" (LaFee, 2006; www.designboom.com). Various artifacts indicate that the Japanese culture was practicing body art as far back as 3000 BC. Throughout history, people displayed body art for various reasons, and its use had the support of many cultures around the world. We know that historically "every known culture has pursued some kind of body ornamentation" (LaFee, 2006). Today, society observes an increase in the number of people practicing some form of body art.

Some people believe that the increase in body art is a result of the Punk movement during the latter part of the 20th century (Wojcik, 1995). Punks used tattooing, body piercing and other adornments to display their disaffiliation from mainstream society. Much as with previous youth subcultures, the exotic use of body art soon became more acceptable to other groups and was assimilated into the dominant culture, making it fashion more than fad (Wojcik, 1995).

Motivation Behind Body Art
The motivation for partaking in body art varies from person to person and among different cultures. Perhaps the most common reasons are self-expression and aesthetics. "A 2004 Harris poll found 34 percent of Americans thought tattoos made them appear sexy and 29 percent thought they made them attractive" (LaFee, 2006). In some cultures, piercings in the nose, ears and lips portray social rank, wealth and importance. Other cultures carve symbols, numbers and designs into the skin to display social status, tribal relations and the number of enemies killed in battle. In the past, aristocrats in Britain used tattoos to differentiate themselves from the lower class.

Historically, cultures were more accepting of, and encouraged, the use of traditional tattoos to display spiritual beliefs, norms and values. However, recent motivations behind body art differ vastly from those of the past. For example, today a young teen may select an intricate tattoo of an ancient Celtic symbol simply because it is cool, rather than to display a spiritual belief or to reflect the complexity people experience in life. Today's youth seem to embrace body art to show that they have control over their own bodies (Forbes, 2001). In addition, individuals today use body art simply because they like the look or the design of the tattoo (Forbes, 2001). In contrast, older surveys found rebellion to be a common and strong motivation behind the use of body art. According to a recent study conducted by Armstrong et al. (2004), the most common reasons cited for using body art were to portray uniqueness, to express oneself, and to feel more attractive.

Body Art and the Youth
There is a strong prevalence of body art among young individuals. According to a recent study, 51 percent of college students have piercings and 23 percent have tattoos (Mayers & Chiffriller, 2008). In fact, one study concluded that piercing and tattooing were "mainstream" among the 18-23 year old population (Mayers, Judelson, Moriarty & Rundell, 2002). The consensus seems to be that body art is most prevalent among young people. Another study, conducted by the Pew Research Center, found that 36 percent of 18-25 year olds and 40 percent of 26-40 year olds have at least one tattoo (Osburn, 2007). It also found that 30 percent of 18-25 year olds and 22 percent of 26-40 year olds had at least one piercing somewhere other than the ear lobe. Colbert (2008) cites the National Education Association's report that "15-20 percent of school-age students are tattooed, or pierced, or both…", and a 2007 report by Kloppenburg and Maessen estimated that "51 percent of college-age individuals in the United States have multiple ear piercing or other forms of body piercing or tattoos" (Colbert, 2008).

Interestingly, the reasons why individuals use body art may be changing. While previous studies mentioned rebellion or rejection of social standards as a key motivator, the most common reasons currently given for using body art are to display control over one's body, to decorate, or to make oneself more attractive or sexy (Forbes, 2001; Armstrong et al., 2004). This is another clear indication that society is more accepting of body art among the youth, an acceptance that more than likely will spill over into society as a whole.

Body Art and Beauty
Since people in the US have an exorbitant fascination with beauty and appearance, it may be some other aspect of an individual's appearance that causes us to form our positive or negative opinion about that individual. Baby Boomers, Gen Xers and Gen Yers all use self-expression in different ways, and body art is one of those self-expressions (Brooks, 2006). "More females, middle-class, and educated individuals participate in tattooing as compared to previous generations, when prisoners, thugs, soldiers, freaks, and gangs were clumped together as the dominant users of tattoos" (Colbert, 2008). Specifically, it appears that women are more inclined to participate in tattooing and piercing than are men (Schulz, Karshin & Woodiel, 2006). An estimated 40 percent of males and 60 percent of females have piercings, and 23 percent of both groups have tattoos (Armstrong et al., 2004). One study found that after the youth, the fastest growing group of people with tattoos is women over the age of 50 (LaFee, 2006).

Appearance does make a difference to some, whether in the private sector, the military or other public sector employment. Advertisements on television and other media tell the public daily how they should look: people are too fat or too thin, they have too much hair or too little, they have too many wrinkles, or they need to purchase a specific product to make them look younger. Add to that the advertisements for cosmetic tattooing, or permanent makeup for personal beautification (Armstrong, 2005), and it is no wonder that teens and young adults are increasing their body art practices. In addition, television shows and movies portray body art as a good thing. Although they may not come right out and say that body art is good, the fact that the hero or heroine has body art implies that it is, especially when professional reviewers comment on how sexy or attractive the hero or heroine looks in the film. In addition, "the public media tends to portray body art procurement as risqué and carefree behavior" (Armstrong et al., 2004), which adds to the desirability of body art. The increased use of body art by Hollywood stars, top athletes and other opinion leaders whom the media scrutinizes and promotes as desirable, beautiful and hip also adds to the dilemma (Wohlrab et al., 2007). The increased use of body art has also spilled into the general population and, more importantly, into the workplace, as workers tend to consider tattoos and piercings hip or trendy (Colbert, 2008).

CHANGING ATTITUDES ABOUT BODY ART
As explained earlier, tattoos and other forms of body art went from being taboo, to being trendy, to being widely accepted, and finally to being desirable (Forbes, 2001; Armstrong, 1991; Mayers & Chiffriller, 2008; Wohlrab et al., 2007). Historically, body art participants were thought to be perverts, psychopaths, prostitutes, psychotics, rebels, antisocial, aggressive, deviant, risk takers, gang members, military people, educationally marginal, people of poor judgment, impulsive, intoxicated, unhealthy and unwanted (Wohlrab et al., 2007; Carrol, Riffenburgh, Roberts & Myhre, 2002; Forbes, 2001). Today, people do not view body art with such negativity. In fact, the "…traditional stereotypes that body modifications are indicators of social or personal pathology" do not describe contemporary views (Forbes, 2001). Beginning in the late 1960s, popular stereotypes and attitudes about body art began to shift (Sanders, 1989). Today, people tend to view body art participants as "artists" and their tattoos as trendy, "not just for bikers," not associated with alcohol, planned in advance and rarely motivated by rebellion; they view body art as a fashion accessory, as an attractive expression of individuality, and as a way to distinguish oneself from others (Wohlrab et al., 2007; Armstrong & Pace, 1997; Bell, 1999).

Another clear indication of a cultural shift in attitudes about body art is the increased prevalence of body art made for children. Currently, temporary tattoos are available for children featuring a number of popular characters, such as famous cartoon and children's movie characters. In addition, well established toy manufacturers are using body art to revitalize and extend the product life cycles of their aging toys. For example, Mattel launched a new Spring 2009 toy line that includes a "Totally 'Stylin' Tattoos" Barbie. The ever-popular Barbie has a new look that includes a permanent butterfly tattoo on her shoulder, and she comes with a tattoo gun that enables children to stamp new washable tattoos on the Barbie or on themselves. This is a new look for the 50-year-old Barbie and yet another indicator of the mainstream acceptance of body art in society, which Mattel appears to be capitalizing on (http://cbs5.com). Furthermore, the fact that our children's toys are sporting new looks that include body art is indicative of the tremendous cultural shift in the acceptance of body art. Lastly, the fact that Mattel refers to the new doll as the "Totally 'Stylin' Tattoos" Barbie clearly establishes that tattoos are no longer taboo in our culture, but are now stylish. As a side note, Mattel has had early success with the new Barbie, and the toy has sold out in various stores nationwide (http://cbs5.com).

The question, then, is whether the acceptance of body art by mainstream society is an indication that employers should also accept body art. After all, employers see the need to maintain a certain image for the organization, and body art may contradict that image. People often judge professionalism on appearance. Therefore, employers who hire employees with body art may find that customers deem the company unprofessional. For example, customers may view an employee with body art differently depending upon whether the individual is working in a fast-food restaurant or in a more professional setting such as a bank. The result may be either acceptance or rejection of the person's appearance and of the business itself. In turn, the company may lose customers who hold negative views of body art. If a customer views body art as mainstream within the culture, he or she may be more prone to purchase the company's products if its employees display body art. On the other hand, a customer who opposes body art may refrain from purchasing the company's products if employees display it. In the long term, body art may positively or negatively affect the organization.


Tolerance of Body Art in the Workplace
Although researchers can trace body art to ancient times, and it has been common among certain groups for many years (Armstrong, 2005), the latter part of the twentieth century brought a significant increase in the use of body art in American culture (Mayers & Chiffriller, 2008; Colbert, 2008; Osburn, 2007; Wojcik, 1995). As a result, the face of American youth is changing, and the workforce may need to change with it. According to recent findings (Armstrong, 2005), more than half of American youth have some form of body art. Consequently, employers are starting to consider their dress code policies more carefully (Thier, 2007). For example, some employers are relaxing dress code policies to accommodate the new trend and to attract younger workers (Deseret Morning News, 2006). However, other employers are tightening their dress codes to limit visible body art in the workplace.

The question that arises is whether employer dress codes should follow suit with the increasing acceptance of body art by American society. Edgar Schein (1999, p. 12) postulates that within organizations there is a "…need to identify those cultural elements that may be increasingly dysfunctional as external environmental conditions change." Business leaders and small business owners need to consider whether the increased occurrence of body art is an indicator that culture is changing. If that is the case, and the organization's external environment (in this case, the culture's wider acceptance of body art) is changing, should the manager's view of body art change as well?

"While managers are entitled to expect their staff—especially those who are seen by clients—to adopt smart business dress, heightened sensitivity over inadvertent religious or cultural discrimination can make rigid dress codes a minefield for the unwary HR professional" (Matthews, 2007). Even though there is a desire to have and create a diverse workforce, many companies draw the line when it comes to diversity in appearance, such as body art. The question a business faces is where the law draws that line. Historically, businesses had very loose and general dress code policies. Today, more businesses are creating formal dress code policies and adding new rules to keep body art covered up. Employers feel they need to be very specific when it comes to dress codes, particularly those dealing with tattoos and piercings. Even with body art becoming mainstream among our youth, it is still not widely accepted in the workplace. Nearly 85% of respondents on Vault.com felt that body art impedes an individual's chances of finding a job (Osburn, 2007). In addition, nearly 16% of employers have established some type of body art policy for the workplace (Osburn, 2007). The consensus is that even though body art is gaining in popularity culturally, especially among the youth, it is still not widely accepted in the business world. This indicates that corporate culture is not changing at the same rate as society's culture. This could cause long-term problems in the corporate world because society and culture are changing while the workplace's attitudes, values and beliefs remain constant and dated. Eventually, more of the youth will move into upper level management, and their attitudes and beliefs may force corporate culture to shift and accept new norms, values and standards that are more in harmony with society's views.

Like private sector employers, the US Army has its own brand of appearance standards. Army regulations (AR) address both weight and body art in their discussion of appearance. For example, "the Army is a uniformed service where discipline is judged, in part, by the manner in which a soldier wears a prescribed uniform, as well as by individual personal appearance" (AR 670-1, 2005). The regulation goes on to discuss appearance expectations, but does make some exceptions for religious practices. It prohibits tattoos and brands on the head, face and neck (above the dress uniform collar line), as well as tattoos and brands that are derogatory in nature or may symbolize gangs or extremist groups (AR 670-1, 2005). If a soldier has a tattoo or brand that is inappropriate for good order and discipline, commanders will ensure the soldier understands the Army policy and will provide guidance to the soldier to seek medical advice for the removal or alteration of the tattoo or brand. Although commanders cannot force a soldier to remove or alter an inappropriate tattoo or brand, a soldier's refusal to comply with the Army policy will result in discharge from the service (AR 670-1, 2005). This is an example of intolerance of certain inappropriate types of body art in the workplace combined with an acceptance of body art in general.

When is it Discrimination?
The third question posed by the researchers asks whether it is discrimination if employers choose not to hire applicants who display body art. Currently, no legislation protects individuals with body art from discrimination. "…Tattoos are generally regarded as personal self-expression and not the type of speech or expressive conduct that would warrant first amendment protection" (Baker, 2007, p. 28), nor does body art, by itself, fit the criteria for protection under the fourteenth amendment (Baker, 2007). In fact, if an employer asks employees to cover their body art during work hours, it is not considered discrimination unless the company distinguishes between the sexes or fails to make reasonable accommodations for religious beliefs or health reasons (Jespersen v. Harrah's, 2006; Burger Chain, 2005). As younger workers attain management level positions, there may be a demand for looser policies on body art and dress codes, simply because more youth participate in body art, hold vastly different views about it, and are more comfortable with it. Younger workers do not hold the same stereotypes regarding body art as older generations of workers do. Thus, in the future, younger management teams may not feel the need to demand that the workforce cover up body art, or feel that body art is an issue to be concerned with when hiring future employees.

Religious Accommodation
From a more practical perspective, body art may have some religious or other protections under Title VII of the Civil Rights Act of 1964. "Title VII requires an employer, once on notice that a religious accommodation is needed, to reasonably accommodate an employee whose sincerely held religious belief, practice, or observance conflicts with a work requirement, unless doing so would pose an undue hardship. Under Title VII, the undue hardship defense to providing religious accommodation requires a showing that the proposed accommodation in a particular case poses a 'more than de minimis' cost or burden" (EEOC). Although there are no specific ties between body art and religion, some religious practices do include the use of body art. Additionally, the EEOC does not require an individual to be a member of an organized church to invoke the protections of Title VII for religious beliefs (Zachary, 2005). Employers will need to handle reasonable accommodation on a case-by-case basis because there are too many variables to consider before providing a list of what is or is not reasonable.

Guidelines for Employers
Organizations create dress codes to establish professionalism in the workplace. When creating dress codes, managers need to be aware of discrimination, religious accommodation and the growing acceptance of self-expression through body art. The following are suggestions to consider when creating a dress code:

• Be aware of your target customers and their generational perspectives on body art when establishing dress codes.
• Establish a clear and concise dress code and make employees aware of it at the beginning of their employment.
• Include a list of what the organization considers acceptable and unacceptable forms of body art.
• Publish existing dress codes, and any changes to them, in a manner that makes them easily accessible to all employees and easily understood.
• Have new employees read and sign the dress code during employee orientation, and stress compliance with it.
• Apply the dress code in a uniform and consistent manner.
• Address all requests for religious and other accommodations on an individual basis, keeping the organization's image in mind.
• A reasonable accommodation may be to require that the employee cover or remove the body art while at work, unless doing so would violate the employee's religious practice.
• Avoid being vague in the dress code, as vagueness can lead to more problems in the future.
• Thoroughly explain the importance of dress code policies and why employee image is crucial to the organization.

CONCLUSION
This article provides useful insight for management practitioners in understanding the complexities of corporate policies, especially with regard to body art. An understanding of the historical significance of body art, along with current trends and views, better prepares managers to create non-discriminatory policies concerning body art in the workplace. The authors define body art as any tattoos, brands or piercings that are not natural to the human body, which individuals add to their bodies as a decoration or statement.

Conducting the research for this article raised other questions on this topic and areas for future study: primarily, 1) how long will the lack of tolerance for body art in the workplace last, and 2) as the new generations (Xers and Yers) gain managerial roles, will views on body art become more relaxed? The authors are currently conducting further qualitative and quantitative studies concerning body art and its acceptance in the workplace.

REFERENCES
Armstrong, M.L. (1991). "Career-oriented women with tattoos." Image: The Journal of Nursing Scholarship, 23, 215-220.
Armstrong, M.L. & Pace, M.K.M. (1997). "Tattooing: Another adolescent risk behavior warranting health education." Applied Nursing Research, 10, 181.
Armstrong, M.L., Roberts, A.E., Owen, D.C. & Koch, J.R. (2004). "Contemporary college students and body piercing." Journal of Adolescent Health, 35(1), 58.
Armstrong, M.L. (2005). "Tattooing, body piercing, and permanent cosmetics: A historical and current view of state regulations, with continuing concerns." Journal of Environmental Health, 67(8), 38. Retrieved December 30, 2008, from http://proquest.umi.com/pqdweb?did=820309061&sid=4&Fmt=3&clientId=5728&RQT=309&VName=PQD
Army Regulation 670-1, Wear and appearance of the military uniform. (2005). Washington: Department of the Army.
Baker, L.A. (2007, February). "Regulating matters of appearance: Tattoos and other body art." FBI Law Enforcement Bulletin, 76(2), 25-32. Retrieved February 3, 2009, from Academic OneFile via Gale: http://0-find.galegroup.com.skyline.cudenver.edu:80/itx/start.do?prodId=AONE
Bell, S. (1999). "Tattooed: A participant observer's exploration of meaning." Journal of Popular Culture, 22, 53-58.
Brooks, D. (2006). "Nonconformity is skin deep." The New York Times, August 27, 2006.
Burger Chain to pay $150,000 to resolve EEOC religious discrimination suit. (2005). Retrieved from http://www.eeoc.gov/press/9-16-05.
Carrol, S.T., Riffenburgh, R.H., Roberts, T.A., & Myhre, E.B. (2002). "Tattoos and body piercings as indicators of adolescent risk-taking behaviors." Pediatrics, 109(6), 1021-1027.
CBS. (2009). "Some parents not too happy with 'tattoo Barbie'." Retrieved March 5, 2009, from http://cbs5.com/consumer/barbie.tattoo.mattel.2.950549.html
Colbert, R. (2008). "Teacher candidate fashion, tattoos, and piercings: Finding balance and common sense." Childhood Education, 84(3), 158C. Retrieved December 30, 2008, from http://proquest.umi.com/pqdweb?did=1440054161&sid=4&Fmt=3&clientId=5728&RQT=309&VName=PQD
Deseret Morning News. (2006). "As body art gets popular, workplace dress codes get a second look." Retrieved February 3, 2009, from http://0-global.factiva.skyline.cudenver.edu/aa/default.aspx?pp=Print&hc=Publication
EEOC (The Office of the Equal Employment Opportunity Commission). "Questions and answers: Religious discrimination in the workplace." Retrieved December 30, 2008, from http://www.eeoc.gov/policy/docs/qanda_religion.html
Forbes, G.B. (2001). "College students with tattoos and piercings: Motives, family experiences, personality factors, and perception by others." Psychological Reports, 89(3), 774-788.
Hoffman, D.L., Krahnke, K. & Bell, J. (n.d.). "Appearance discrimination and small business." Unpublished manuscript.
Jespersen v. Harrah's Operating Company, 444 F.3d 1104 (9th Cir. 2006).
Lautman, V. (1994). The New Tattoo. New York: Abbeville Press.
LaFee, S. (2006). "Skin deep: The history and meaning of body art is hardly superficial." Retrieved February 3, 2009, from http://0-global.factiva.com.skyline.cudenver.edu/aa/default.aspx?pp=Print&hc=Publication
Matthews, V. (2007). "Spotlight on…body art." Personnel Today, 2(3), 35. Retrieved February 3, 2009, from http://proquest.umi.com/pqdweb?did=1372502411&sid=1&Fmt=3&clientId=5728&RQT=309&VName=PQD
Mayers, L.B. & Chiffriller, S.H. (2008). "Body art (body piercing and tattooing) among undergraduate university students: 'Then and now'." Journal of Adolescent Health, 42, 201-203.
Mayers, L.B., Judelson, D.A., Moriarty, B.W., & Rundell, K.W. (2002). "Prevalence of body art (body piercing and tattooing) in university undergraduates and incidence of medical complications." Mayo Clinic Proceedings, 77, 29-34.
Osburn, L. (2007). "No ink at INC. – Tattoos have left more of a mark on mainstream, but body art still isn't an acceptable accessory in workplace dress code." The Star-Ledger. Retrieved February 3, 2009, from http://0-global.factiva.com.skyline.cudenver.edu/aa/default.aspx?pp=Print&hc=Publication
Sanders, C.R. (1989). Customizing the Body: The Art and Culture of Tattooing. Philadelphia, PA: Temple University Press.
Schein, E.H. (1999). The Corporate Culture Survival Guide. San Francisco: Jossey-Bass.
Schulz, J., Karshin, C. & Woodiel, D.K. (2006). "Body art: The decision making process among college students." American Journal of Health Studies, 21(1/2), 123. Retrieved December 30, 2008, from http://proquest.umi.com/pqdweb?did=1286495411&sid=4&Fmt=3&clientId=5728&RQT=309&VName=PQD
Thier, K. (2007). "Workplace rules vary on display of body art." The News & Observer Publishing Company. Retrieved February 3, 2009, from http://0-global.factiva.com.skyline.cudenver.edu/aa/default.aspx?pp=Print&hc=Publication
Wohlrab, S., Stahl, J. & Kappeler, P. (2007). "Modifying the body: Motivations for getting tattooed and pierced." Body Image, 4, 87-95. Retrieved February 3, 2009, from www.sciencedirect.com
Wojcik, D. (1995). Punk and Neo-Tribal Body Art. Jackson: University Press of Mississippi.
Zachary, M.K. (2005, March). "Body piercings and religious discrimination." SuperVision, 66(3), 23. Retrieved December 30, 2008, from http://proquest.umi.com/pqdweb&did=805603401&sid=11&Fmt=3&clinetId=5728&RQT=309&VName=PQD

101

United States versus Japan: Are there Myths Associated with Cross-Cultural Sales Negotiations?

UNITED STATES VERSUS JAPAN: ARE THERE MYTHS ASSOCIATED WITH CROSS-CULTURAL SALES NEGOTIATIONS?

J.D. Williams Kutztown University, USA

ABSTRACT This paper addresses the cultural aspects of Japanese salespersons conducting business negotiations and compares that experience to that of American salespersons. The study of the unique differences between the two cultures reveals that, in today's complex and highly competitive business environment, more effective forms of negotiation will play a vital role in 21st-century selling processes. Such forms of negotiation will likely be even more critical when negotiating in foreign environments.

Keywords: U.S. & Japan: Selling via Cross-Cultural Sales Negotiation

PURPOSE This research will show that American sales teams selling in Japan require extended cross-cultural knowledge of key elements unique to the Japanese culture. These learned factors are necessary in order to produce successful sales negotiations. The research collected and disseminated in this paper seeks to explain the negotiation styles of, and the deficiencies between, American and Japanese selling negotiation techniques. This research takes a novel approach through its synthesis of comprehensive reports and manuscripts covering cross-cultural, negotiation, and sales material to derive a selling concept for U.S. salespersons.

BUSINESS NEGOTIATIONS AND CULTURE Based upon my consulting experience in Japan and the U.S., I would categorize business negotiations as requiring five distinct steps:

1. Impression Formation Accuracy - the initial contact between negotiators
2. Interpersonal Attraction - the immediate face-to-face impressions influenced by the feelings of attraction or liking between the buyer and seller
3. Exchange of Information - defining the participants' needs and expectations
4. Communication of Persuasion - the ability to verbally move one's position forward
5. Concessions and Agreement - the final stage, involving compromise and building toward agreement

Within a typical domestic setting, each of these five negotiation stages necessitates a thorough understanding of both parties' roles, perceptive or intuitive reasoning, institutional and/or individual goals, vested interests, and hidden agendas. When the setting takes on a global perspective, the stages become less clear, muddied by cultural dynamics that set their own agendas, negotiation conditions, priorities, and processes. If this were not troubling enough, consider two nations that share a quest for industrial might but are about as culturally different as two nations could be. Herein lies the heightened negotiation condition between Japan and the United States.

It is safe to assume that, as in most nation-states, cultural values and beliefs are important to many, if not all, aspects of life. In the business and management sub-culture in particular, the overriding philosophy is created to mimic the personal value and belief system acquired from the larger elements of one's national origin. Research has shown that this is indeed the case in Japanese management. Values are deeply ingrained, stable in nature, and a relatively permanent part of a person's inner self. Business managers, either individually or collectively, make culturally driven decisions that are influenced by their country's values (Giacomino, 1999).

Both overt and hidden elements of one's culture can produce complex actions or responses to foreign or unknown stimuli, human or non-human. The more comfortable one is with his/her situation and human interactions, the more likely one's best cultural attributes will come forth; the opposite condition would likely foster opposing, if not uncomfortable, reactions. Misunderstandings during cross-cultural business discussions can occur for any one of four reasons: differences in 1) language and language behaviors; 2) nonverbal behaviors; 3) values; and 4) decision processes (Graham, 1986).

National culture tends to serve as an umbrella over its sub-cultures, including its systems of health, education, welfare, entertainment, recreation, and socialization, as well as its business conduct. As such, an assessment of Japan's country culture is likely to reflect the attitudes, behaviors, aesthetics, religious beliefs, moralities, and ethical conduct of its business culture. Both the components of culture and the facets of negotiation are important foundational elements for the research work found in this text.

JAPAN VERSUS AMERICAN BUSINESS CULTURAL DIFFERENCES The term 'Haji' denotes the intense shame that the Japanese feel when objectives are not achieved or norms are not observed. 'Tatemae' (things as they are made to appear) refers to the face-saving, harmony-creating syndrome that permeates all aspects of Japanese life. The term 'Honne' refers to things the way they really are, particularly the communication of one's honest perceptions and beliefs. All of these life-determining aspects are conditioned by the Shinto notion of national divinity and counterchecked by a paucity of natural resources. These societal control systems shape Japan's approach to business. They identify the relationship between superior and inferior; they help explain wealth and economic expansion as a national imperative; and they help shape the role of women, the importance of the group, and the nature of self-sacrifice. Additionally, they provide much of the underpinning for the success of Japanese business (Maher, 1994).

Japanese people will often refer to each other by following the last name with "san". In addition, bowing is common as a mark of respect when dealing with Japanese business people (Japan External Trade Organization (JETRO), 1991). Although seemingly simple acts of respect, these actions frame a deep-seated behavioral pattern that impacts Japanese business practices as well as Japanese sales techniques, particularly negotiations. The aforementioned cultural mores may seem somewhat manageable for most foreign business persons (sometimes referred to as gaijin) to adjust to, but it may not be so easy to adjust to these additional behavioral and institutional facets:

• Individual vs. group-oriented meetings and decision processes;
• Social and subtle ethnic class distinctions;
• An aversion towards women in sales positions;
• Selected differences in buyer-seller styles of communication, reporting and stature;
• Differences in motivational environment and motivating factors;
• Differences in sales force compensation;
• Differences in job satisfaction and time management;
• A distinctly long-term, generational cultural philosophy favoring multi-year planning over short-term results;
• Cultural discrimination, heightened egotism and nationalized separatism;
• An ideological, excessive focus on manufacturing excellence; and
• An uncanny sense of humbled religious essence towards their own existence and the existence of other Japanese (Genestre et al., 1995).

Many, if not most, of these elements were and are factored into Japanese salespersons' daily business lives, which understandably influences their thoughts and communications with American business persons. Alan Goldman wrote in his well-recognized book, Doing Business with the Japanese: "To the Japanese, many American business leaders are too obvious and prominent about their success by flaunting their many material possessions. The Japanese tend to be more subtle and refined about the issue. The greatest issue found in doing business with Japan is communicating with the Japanese host in an effective manner. Many managers have no choice but to consider a 'radical cultural transformation' if they wish to be successful. They must forget everything they know about 'Americanized approaches' to business and relearn their expertise within the context of Japan's protocol (Goldman, 1994, pg. 22)." These perceptions draw distinct lines between the cultural mind-sets of Japanese and American business persons.

Overcoming cultural and perception differences may not be so simple for American business persons. American salespersons tend to have a tough time when selling to Japanese companies, and it has been shown that the barriers of acceptance may not be related to the product quality issues that have plagued American firms previously. Rob Speigel wrote an article entitled, Selling to Japan by Joint Venture, in which he described "…the Japanese business culture as being difficult for outsiders to master. Japanese executives have operated with elaborate business formality that has left outsiders baffled. Add to this the country's prolonged reluctance to purchase goods and services from non-Japanese businesses and the market has and may still seem impenetrable (Speigel, 2002)." This suggests that ethnically sensitive and well-developed selling skills are needed to overcome these business cultural obstructions. As Figure-1 [Appendix-A] points out, and not uncommon to most nations, there are fundamental cross-cultural negotiation challenges between the two nations. The elements presented in Figure-1 can become formidable factors in negotiating between any two firms, but they are clearly exacerbated by cultural, ideological, philosophical, global business perception, and industrial differences.

Japanese and American firms have had difficulty bridging cultural differences for more than 50 years. These challenges were recently echoed by Tadayuki in his journal article, A Marketing Model of Japanese Buyer-Supplier Relationship, in which he described the cross-cultural stumbling blocks in the following way: "…In the increasingly resource-demanding and competitive environment, manufacturers have adopted a Japanese-style manufacturer relationship or cooperative long-term manufacturer-supplier relationship… This new marketing environment urges business marketers to develop a new set of marketing competence and knowledge as to how to build customer trust (Tadayuki, 2004, pg. 314)." Obviously, new approaches for American salespersons would seem to be necessary, which represents the impetus for the research housed in this manuscript.

Having lived in Japan for a number of years and conducted business there as a professor and as a corporate consultant and trainer for Japanese business managers, I am prepared to offer my practical experiences in dealing with these robust sales negotiation issues. One of those experiences was observing the launch of Microsoft in Japan. Although an array of rather exciting comments was reported, Microsoft's sales efforts reflected neither a deep-seated cultural knowledge of the Japanese marketplace nor a depth of recognition of Japanese business practices; trouble was in the making.

Microsoft seemed to have begun as if it were back in the U.S., with a strong focus on volume selling and on building efficiencies of sale and distribution as quickly as possible. Microsoft forgot about the concept of relationship marketing and its significant meaning in Japan, a nation that may very well be the best practical business example of this concept. As reported by Hoshino, Microsoft went through a rather significant transformation in Japan: "The change in marketing reflects the broader transformation of Microsoft. It started as a charismatically driven, smart new firm, but it eventually became more of a conventional Japanese company, mindful of the importance of its relationships with partners and customers. …The commitment [to the Japanese market] included more than 1.4 million small and midsize businesses in Japan - a market segment that was a hard nut for Japan's Microsoft to crack…smaller Japanese companies do not use IT as much compared to smaller firms in the West (Hoshino, 2007, pg. 16)." From the various meetings that I attended over the years, I saw a marked difference in Microsoft's professional demeanor. It was abundantly clear that Microsoft was becoming a 'Japanese' firm.

On a broader scale, though, the import of information technology from the United States has had very strong successes in Japan. JETRO reported that the U.S. has been the dominant source of Japan's software imports, including a 96.5% share of OS/server imports and a 91.2% share of custom software imports (www.jetro.go.jp/en/invest/success_stories). In general, my experience reflected that Japanese customers have shown a strong preference for working with American software providers. The possible reasons for such abundant success in this area may be attributed to Japan's slowness in its own software and information technology development, even though the application base has been growing exponentially. The other factor that supports Japan's purchase of U.S. software technology is its continued reliance on U.S.-made mainframe and mini computers, as well as the extensive array of U.S.-designed distributed systems hardware and software configurations. As such, far less rigorous software negotiations have been called for between these two entities.

On the other side of the export/import ledger, there has been Japan's global success in offering robotics to the American and European markets. The robotics industry has been stimulated by the 3Ks (discussed under Myth-4 below) as well as a strong worldwide demand for more efficient production of otherwise labor-intensive tasks. Japan's eagerness to embrace new technology has led to huge opportunities in the robotics industry. Consistently strong demand from highly competitive manufacturing companies means Japan's robotics market, valued at $7 billion, is among the largest in the world. This strength has translated into leadership in the burgeoning social welfare robotics field, which is expected to fuel explosive growth in the robotics industry as a whole. Much of the projected growth will be driven by the emergence of robots as commercially viable consumer products, and the next generation of robots will play a number of roles in everyday life (JETRO). Western Europe and the U.S. are moving forward in robotic production, but not at the rate of the Japanese. Essentially, the demand for this technology and Japan's manufacturing advancements in robotics have exceeded most needs for extensive sales negotiations. An age-old saying, 'who's got the best with the most, wins,' seems quite apropos.

Japanese sales negotiation prowess, coupled with a globally competitive manufacturing sector, has proven to represent a major obstacle for U.S. exports and international sales efforts. Part of the reason that Japan has been so successful was its ability to hire persons like myself to work with them on their cross-cultural marketing and sales negotiations. My professional sales managerial experience with IBM and Chase Manhattan Information Services proved to be highly beneficial as a skills transfer to Japanese managers. An example of this was my extensive time spent with Dentsu Corporation, the largest advertising house in Japan. Literally all of my training exercises, by their request, were focused upon cross-cultural negotiations with American firms. Both the costs and the time spent working with Dentsu were extensive, and there were many more Japanese firms desiring such training. I would wonder how many American firms have contracted for cross-cultural negotiation training with the Japanese.

LITERATURE REVIEW (Presented in Appendix-B)

SCOPE OF RESEARCH This paper has hypothesized, through the concept of MYTHS, that in today's marketing environment, represented by American firms conducting business in Japan, sales negotiations have been and will continue to be adversely affected by factors such as non-task-related information exchange, task-related information, persuasion, and concession/agreement. Research has indicated that understanding and proper use of cross-cultural selling tools (including Japanese cultural awareness, high-quality/high-value product characteristics, use of a bilingual team member, and the training and attitude of firms) should lead toward higher success rates in sales negotiations for American sales teams.

U.S. firms have been dissatisfied with the quality of past negotiation results with Japanese business persons. The majority of these troubles have centered on the following areas:

• Different negotiating styles;
• Language barriers;
• Delays in decision making;
• Cultural differences;
• Lack of control over the pace/content of the negotiation;
• Lack of authority on the part of the Japanese negotiation team to make major decisions;
• Inquiries not fully or promptly answered; and
• Insincerity on the part of the Japanese negotiators (Tung, 1984, pg. 66).

Americans in general, as well as American business persons, seem to have formulated a series of myths about doing business with Japan, and it is these myths that are at the heart of many sales negotiation problems. The myths and their respective findings are addressed next:

• Myth-1: American sales teams are fundamentally structured and trained along similar lines as the Japanese. If true, this would lead one to expect much of the sales techniques, processes, presentations, and results to be similar. Previous studies of sales team structure and training have shown consistent threads of performance within the U.S. domestic environment, which have contributed greatly to the ease of job mobility from one company to another with little 'ramp-up time.' Assumption 1: in essence, a salesperson with a successful track record should be able to transfer that skill set to another company while maintaining similar levels of success. Assumption 2: it would follow that two firms with analogous sales training should have somewhat parallel sales results, assuming industry factors to be relatively comparable.

• Myth-2: There exist substantial cultural similarities between U.S. and Japanese business persons when negotiating in a typical sales environment. It has been readily accepted that proper training is essential to generating success during business negotiations, whether in domestic or foreign environments. The assumption: if U.S. sales teams and their management receive sufficient pre-departure training, the premature return rate of American sales people doing business with the Japanese should be very low.

• Myth-3: Positive attitudes, including the sincerity, good faith, and honesty of American sales teams, will equal those of Japanese sales teams, which will return relatively equal sales success in their respective nations as well as in cross-national sales. Previous studies have shown that culturally driven attitudes play an overwhelming role in the success and failure of sales negotiations. Much of the culture-based research applied to business settings has revealed a preponderance of cultural misunderstandings due to national, political, ideological, aesthetic, belief/religious, and value system differences.

• Myth-4: U.S. companies will enjoy a high success rate conducting business negotiations in Japan due to their offering high-quality products with uniqueness and value for the Japanese marketplace. Research has shown that Japanese society is known to emphasize high-quality products at moderate to low costs. The Japanese take great pride in producing and enjoying superior products, both created domestically and imported.

• Myth-5: A very large number of American firms doing business in Japan will excel through the negotiation stages due to an overall level of competence in, and understanding of, the Japanese language and business practices. In the process of researching cultural misunderstandings, it became quite evident that many sales negotiation problems were directly linked to language barriers. Even the use of interpreters has been shown to produce less than desirable results, due to interpreters reflecting home country biases.

RESEARCH METHODS An extensive amount of secondary data was reviewed to exhaust the multitude of sales and sales negotiation situations that have occurred over the last 15 years. This was accomplished using online databases, scholarly journals, doctoral dissertations, theses, white papers, textbooks, and my own cross-cultural experience conducting negotiation seminars and training in Japan with a select group of Japanese corporations.

FINDINGS Addressing Myth-1: Are there fundamental differences between Japanese and American selling structures and practices? And, if they differ, would that not suggest differences in the results or outcomes based upon their respective efforts? The American sales method is considered to be a seven-step process: 1) generate sales leads; 2) qualify leads; 3) prepare for the sales call; 4) conduct sales meetings; 5) handle buyer resistance; 6) close the sale; and 7) maintain the account. This differs from the traditional Japanese sales model.

The Japanese sales progression is much more in-depth and structured than the American sales methodology. Consider the following, very typical, Japanese selling process.

The Japanese selling process is human-intensive rather than product-intensive (Otsubo, 1986). When the Japanese seller comes to America to market his products, he naturally assumes the lower-status position and acts accordingly (with great respect for the American buyer, etc.), and a sale is initiated. If the salesmen represent a much smaller company than the prospect company, they may take their senior managers (and often their president, if the opportunity is substantial) to meet with the buyer and one of the buyer's senior managers. If the salesmen represent a large Japanese company and the prospect company is also large, they may take a team of five or more senior managers to meet with the buyer and a team of the buyer's senior managers.

At this point (maybe two to three months after the initial visit) the buyer may ask for a quotation for a small initial 'gesture' order. Initially, the Japanese seller is taken advantage of; after all, he expected the American buyer to respect his needs (consistent with amae). But in any case, a relationship is established between the two firms. The door is open, and the Japanese seller has the opportunity to learn the 'American way,' to adjust his behavior, and to establish a more viable long-term relationship. Eventually, although the process is much more time-consuming, the sale (in many cases a larger sale) would probably be consummated, with the likelihood of long-term repeat sales. Assuming the initial order is satisfactorily completed, the salesmen start a regular pattern of personal visits, giving the traditionally expected summer and winter gifts, sending New Year's cards, and gradually building the depth and value of the relationship over a period of several years.

The Japanese believe that their job is to identify with and support their customers and to act as the customer's liaison officer and advocate within their own company. They feel that, to keep their business, they must spoil their customers and satisfy their every business desire. Their job is to provide devoted service for the long run (Rudlin, 2007). Thus they build a weight of obligation and dependence that has to be repaid, typically through sales orders, recommendations, and the like. In Japan, word-of-mouth recommendations and who you know are far more important than advertising (Otsubo, 1986).

It is very important, when going into negotiations with the Japanese, that a U.S. firm understand the values of the Japanese and what they regard as admirable. One major subject is education. It would be in the favor of a U.S. firm entering into negotiations with the Japanese to have, and be able to provide, sufficient knowledge of the product it is attempting to sell and all of its characteristics. In this way, a U.S. firm would be able to educate the prospective clients on the product, answer any question no matter the difficulty, and seem more presentable all around, which may intrigue Japanese prospective clients into looking further into the negotiations (Lawrence, 1991).

In the USA, it is said that "the customer is always right"; in Japan it is said okyakusama wa kamisama desu -- "the customer is God" (Rudlin, 2007). This orientation of serving others is drilled into the Japanese from childhood as an ingrained attitude derived from the Confucian philosophy of respecting others. By the time Japanese enter a company, knowing how to treat a customer is second nature. This attitude of respect permeates the entire organization. When the company loses face, each member of the organization accepts blame as if it were not 'the company' but 'my company' (Genestre, 1995).

The sales team (Japanese salespeople usually work in teams of two or more) identifies a prospect company in its region. The team tries to arrange a personal introduction to the prospect company through an existing customer but, failing that, will personally visit the prospect company and, after inquiring after the name of the manager responsible for purchasing, leave business cards with the receptionist or with a junior clerk from the purchasing department. After a further one or two such visits over a few weeks, the salesmen will ask to see the buyer in person and, if successful, exchange business cards with the buyer, make a few polite inquiries, bow, and leave. After another one or two such visits over a few more weeks to reinforce their dedication and commitment, the salesmen will begin to pitch their product portfolio to the buyer more seriously.

The sales cycle is definitely longer in Japan, primarily due to relationship building and the overall pace of the business process. Japanese firms tend to be very cautious about new product concepts that would alter their way of doing business, necessitating compelling value propositions presented in multiple ways and without sales pressure. Japan's caution in business will likely generate longer communications between selling and client managers, as well as smaller product orders serving as proof statements or justification trials conducted without disruption of the operation (Sullivan, 1992).

This author contends that the differences between sales cycles, together with the culturally sensitive components of relationship building and patience, make for a very different sales process in Japan than in the U.S. My own training responsibilities for Japanese business persons desiring to conduct business in the U.S. focused upon the benefits of maintaining patience and relationship building while initiating a trial/small-quantity selling concept (a selling facet used as a last resort in the U.S.). The Japanese company I worked for conducted a comprehensive three-year evaluation of our sales training program. The results were outstanding: the internationally deployed Japanese business persons proved most successful, particularly in the United States. As such, MYTH-1 would be considered invalid.

Addressing Myth-2: Are there substantial cultural similarities, or significant cultural disparities, between Japan and the U.S.? An abundance of secondary research revealed that there are significant differences between these two countries, their peoples, and their business mind-sets. The variations between the way the U.S. and Japan conduct their business are often quite remarkable and sometimes seemingly insurmountable (Sherman, 1990). Let us now consider some of these cultural obstacles.

Dr. Bill Kelley wrote in Culture Clash: West Meets East (Sales and Marketing Management), "In Japan, individuality and independence– in or out of business– aren't as highly valued as they are here. The notion of someone jumping up at a meeting and taking credit for a business plan– or trying to blame someone for its failure– is entirely foreign to them. That's just not done" (Kelley, pg. 28). Similar thoughts were echoed by Erin Anderson and Leonard Lodish, who wrote, "Japan: a collectivistic and high-context culture, in which collective effort, personal relationships and status are highly valued; USA: a highly individualistic and low-context culture, in which individualism and economics are highly valued" (Anderson, pg. 8).

Compared to the Japanese, Americans spend little time establishing a relationship. The typical Japanese negotiation may involve a series of non-task interactions and even ceremonial gift giving. Witness the recent attention given to the very large kosai-hi (literally, entertainment expenses) typical of business dealings in Japan (Steinhoff, 1987). While the Japanese defense budget is 0.9 percent of the country's GNP, corporate wining and dining accounts for 1.2 percent of the total national output (Tung, 1984). To the American critic this may seem an immense waste. However, the Japanese put great effort into establishing a harmonious relationship at the beginning, which I think has, in part, helped them avoid the expensive litigation (when things go wrong) that seems more and more common in the United States (Gehrt, 2005).
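To put these two percentages side by side (a rough comparison that, as an assumption for this purpose only, treats GNP and total national output as the same base):

\[ \frac{1.2\%}{0.9\%} \approx 1.33 \]

That is, Japanese corporate wining and dining in that period ran roughly one-third larger than the nation's entire defense budget.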

Another area of difference between Japan and the U.S. is that of their respective social classes: complexity for one, yet simplicity for the other. Anderson and Lodish researched this subject and wrote, "Social class distinctions affect business relationships in Asian countries. The USA has a relatively simple social class system that is based on economic criteria (income, wealth, material possessions, etc.). Movement up and down the social order is by accumulation of income and material possessions. But several other countries with longer social histories than the USA base their social class distinctions on seniority criteria, hereditary criteria, or ethnic criteria. In such societies, social distinctions are taken seriously and affect individual and group behavior in business" (Anderson, pg. 9). Managers have to take such social practices into account in the recruitment and selection of salespeople in these markets. Clearly, there are far more complex aspects in Japan as compared to the United States.

In terms of job satisfaction, the differences between Japanese and American selling are also quite substantive. Anderson and Lodish's findings revealed that, "In the USA, financial incentives are very important in enhancing salespeople's performance and job satisfaction. In Japan, money alone is insufficient in motivating salespeople. Value congruence is also important in boosting salespeople's job satisfaction, but not performance" (Anderson, pg. 8). Kelley also supported Anderson's views by stating, "Similarly, the Japanese tend to look at their careers differently than do Americans. In Japan, it is usually a job for life; in America, it's often a job until something better comes along. As a result, many Japanese have a sense of loyalty to their employers. The Japanese are usually in the same company all their working lives" (Kelley, pg. 28).

Management styles, particularly in terms of discussions, negotiations, and decision making, are quite different between the two nations (Gross, 1990). In general, Japanese management moves at a much slower pace than the Americans do when negotiating. Kelley wrote, "The Japanese group approach hinders sales because things move too slowly and no one takes responsibility for the marketing plan. … It has been discovered that the Japanese are not always the most efficient workers– they do not manage time well" (Kelley). Yet it has been shown time and time again that the 'ringi' method of decision making may take more time, but the overall 'buy-in' it yields from employees pays very high dividends in work efficiency and quality of results.

Long-term planning has been far more robust in Japan than in the U.S. Clearly, lifetime employment as a philosophy has made many Japanese firms think more long-term and, as a result, offer incentives and rewards based upon one's long-term commitment to the organization. As Kelley wrote, "This is certainly not the case with the US, where promotions, raises, and other incentives are usually tied to short term results– especially at a company that is highly leveraged. … The Japanese tend to look at different ways to work things out, so it may take a while longer for you to learn where they stand. They are not as quick to shoot off their mouths as we are– which isn't exactly a bad quality. They just want to be sure" (Kelley, pg. 28).

The final topic, discrimination, poses some interesting twists, as both nations discriminate in a multitude of ways across their populations and socio-economic strata. Yet this author contends that the state of discrimination in Japan is far more acute. For example, many Americans have felt that their advancement is limited within Japanese firms. Regardless of their success, they say they eventually hit a 'glass ceiling', a level within a company they cannot go beyond. This would be akin to working for a family business in the U.S., in which, no matter the effort, 'blood lines' come first. In a certain number of cases, Japanese companies have been sued by former employees who claim they were discriminated against or held back because they were not Japanese. And while a few have even gone so far as to cite racism as the basis for such policies, most believe it is really more a question of whom the Japanese are comfortable working and socializing with. Having lived and worked for two Japanese firms myself over a period of three years, I was aware of utterances of prejudice that were clearly apparent; yet there remained a shared gratification that I was, nonetheless, part of the team, deserving of respect and appreciation (Wilson, 1980).

The aforementioned differences reveal MYTH-2 as being both highly naïve and false.

Addressing Myth-3: Have there been distinct differences in the attitudes (including sincerity, good faith, and honesty) of American sales teams compared to those of Japanese sales teams? If so, do these differences suggest distinctly different levels of sales effectiveness?

There is no doubt that Japan and the United States are major business partners and that successful negotiations between Japanese and U.S. companies have implications for the economies of both countries. Yet descriptions of Japanese and U.S. negotiating styles suggest substantial differences in approach (Graham, 1993; Kato & Kato, 1992; March, 1990) that may affect intercultural negotiations. For example, a vice president in the Japan merchant banking operation at Bankers Trust noted in the New York Times Magazine that information is viewed as an important source of power in negotiations in both the United States and Japan (Yoshimura, 1997). U.S. negotiators, he suggested, exercise the power of information by disclosing it, and in return they get information from other people. In contrast, the Japanese exercise the power of information by hiding it, he noted, going on to point out another fundamental difference between Japanese and U.S. negotiating styles: "For Japanese, negotiation is usually a process of reaching a point that is acceptable to both parties. For Americans, it's a competition dividing winners and losers." Americans, he continued, often open negotiations at a level that is totally unacceptable to the Japanese, seeing the opening offer as a starting point, but the Japanese cannot see trust in such behavior (Yoshimura, 1997).

For the Japanese, this exchange of information is the main part of the negotiation. A complete understanding is imperative; the Japanese are reported to ask "endless" questions while offering little information and ambiguous responses. In both the field and the laboratory, Japanese negotiators have been observed to spend much more time trying to understand the situation and the associated details of one another's bargaining position.

One of the major issues contributing to the negotiation failure rate is the attitude of Americans towards the Japanese. In the U.S., business negotiations are set out to be quick and straight to the point, leaving no time for 'lollygagging'; in Japan, negotiations are carried out in a completely different way. Negotiations take time and patience: the Japanese must know and trust the American firms they may be doing business with, and this initial process may take weeks, even months, to complete. As a result of the time 'wasted' and all of the 'lollygagging', many U.S. firms that are not accustomed to the Japanese negotiation style develop a less than positive attitude about how the Japanese conduct business and are in no rush to do business with them.

Past research has determined that Japanese and American cultures view trust on very different levels. The simple descriptive statistics of automobile supplier-automaker relationships shown in Table-1 [Appendix-C] indicate that supplier trust is significantly higher in Japan than in the United States. The average length of the supplier-automaker relationship was highest in Japan (41.4 years), followed by the U.S. (32.6 years) and Korea (12.4 years) (Dyer, pg. 270).

A vignette of the Japanese character would reveal that they are extremely competitive in business, both at home and abroad, with a keen sense of perfection and attention to detail. They are thought to have a strong sense of "being Japanese," with the subordination of individual interests. They exhibit a humble, almost obsequious approach toward their superiors. No one would question their high degree of self-sacrifice for family, job, and nation, or their strong sense of shame when behavior departs from identified group norms. A persistent desire for harmony in human relationships is clearly evident, resulting in group, rather than individual, decision making. They have a strong sense of personal honesty and an insatiable desire to improve their standard of living. And they are willing to take high risks, especially in business (Maher, pg. 40).

Although the transfer value of one's religion into one's daily life remains a highly debatable topic, it would be unjust not to at least state the relative importance of religion in Japanese society. According to statistics published by a Japanese government agency, the combined number of Buddhists and Shintoists in Japan exceeds 217 million (Agency for Cultural Affairs, 1991). Since there are only 124 million people in Japan, this implies double counting, as most Japanese subscribe to both of these religions and mention both in enumeration (109 million for Shintoism and 96 million for Buddhism). Being a Buddhist myself, I can attest to the orderly commitment of both mind and body towards all aspects of one's daily activities. In addition, having experienced a multitude of Shinto rituals performed in Japan for new machinery installations, new buildings, weddings, deaths, and a multitude of other personal/family needs, I can attest to the important role this religion plays in Japanese society as well as in Japanese business.
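A quick arithmetic check, using only the two parenthetical figures, shows why double counting is the only way to reconcile these statistics:

\[ 109 + 96 = 205 \text{ million adherents} > 124 \text{ million residents}, \qquad 205 - 124 = 81 \text{ million} \]

At least 81 million people must therefore be enumerated under both religions, before adherents of any other faiths are even counted.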

Both religions, Buddhism and Shintoism, speak to the peaceful undertaking of all aspects of one's life, which, by definition, would permeate Japanese business structure and its 'family-oriented' conduct. As such, Japanese salespersons, being backed by the conduct of their company and their religion(s), would tend to conduct themselves with heightened levels of sincerity, good faith, and honesty (Mohammed, 2006).

American businesses and Christianity, the largest faith in the U.S., tend to maintain an 'arm's-length' relationship, which would suggest differing levels of religious influence on the moral fiber of American business persons. Also, firms in the U.S., for the most part, do not maintain a 'family-oriented' structure, as reflected in the job instability in America. Lastly, the cultural concept of 'I', rather than 'we' as in the case of the Japanese, would foster more sales independence and internal competition within the U.S., resulting in varying levels of conduct and ethical behavior.

On the basis of the aforementioned findings and personal experiences, I would conclude that MYTH-3 is false.

Addressing Myth-4: Is it possible to sell inferior products, or similar products with no market distinction, to the Japanese, and what are the consequences of attempting to do so? At an early stage of an American sales negotiation with the Japanese, U.S.-made products and/or services may already have been thwarted by the mere fact that the goods or services do not meet Japanese standards (Seawright, 1994). The Japanese take a more holistic view of products (Genestre, 1995). They see a total product as consisting of both tangible and intangible components. From a management perspective, the purpose is to supply customers with items that deliver maximum value, which is determined by cost and quality. Whereas U.S. management tends to view cost reduction and quality improvement as contradictory objectives, Japanese management pursues them in tandem. In fact, the Japanese word 'keihakutansho' means "lighter, slimmer, shorter, and smaller" and implies less expensive items that are more useful products, economical in purchase, use, and maintenance (Seawright, 1994).

Dr. Rosalie Tung researched negotiating with Japanese businesses and found that eighty-three percent of failures during business negotiations were due to the Japanese not needing the products/services that the U.S. companies supplied, the products not being unique in their industry. Tung wrote, "A second factor that was responsible for the failure of business negotiations was labeled 'product characteristics' and included the two items of 'Japanese did not need products/services offered by the US firm' and 'too many competitors all offering the same product/service that the US company supplies' (Tung, 1984, pg. 43). The first item, 'Japanese did not need products/services offered by the U.S. firm,' was perceived by 83% of the respondent firms as being responsible for failure of business negotiations. This was, by far, the most frequently mentioned item. This finding again points to the extreme competitiveness of the Japanese markets and the need for US firms to offer truly unique products and/or services in order to make significant inroads into the Japanese economy (Tung, 1984, pg. 47)." A clear example of this has been the dismal performance of American vehicle sales in Japan.

The Japanese have been quite reluctant to purchase American vehicles. This is undoubtedly due to three primary factors. The first and most significant is that Japanese cars are on par with, if not perceived to be of much higher quality than, American automobiles. This would be considered true for both cars and trucks. In addition, Japanese automobile manufacturers are considered world leaders in technological advancement. Consider three words in Japan, commonly called the 3Ks, that have helped steer automobile research towards major technological advancements: kitsui, meaning hard, severe, or heavy; kitanai, meaning dirty; and kiken, meaning dangerous. These words have spurred a major thrust towards the production of hybrid or diesel engines built around a philosophy of ecology and fuel reduction, and requiring high technology to produce (Yagi, 2006). In addition, such tested designs as hydrogen vehicles as well as full electric cell vehicles are actively being designed and manufactured.

The second factor that has curtailed Japan's interest in American automobiles has been Japanese consumers' love of high-tech vehicles, which has made BMW and Mercedes-Benz their most desired foreign automobiles. The third factor has been Japanese consumers' insatiable desire for miniaturization, which has greatly impacted their automobile manufacturing (Jai-Beom, 1999). I experienced a multitude of cute little one- to four-passenger vehicles on the major roads and narrow small-town streets of Japan. As one readily knows, small cars have not been Detroit's forte. Therefore, sales negotiations within such a non-competitive environment become futile, and so very few U.S. automobiles are sold in Japan.

A final observation came about during my stay in Japan, while conducting a training exercise with Nissan Motors. Nissan had designed a novel concept that was to revolutionize the automobile industry. The term they coined for this concept was the '5 Anys': 1) any product accessory, 2) any customization, 3) anywhere, 4) any time, and 5) for any vehicle. Essentially, Nissan wanted to customize most of its vehicles for each and every customer, worldwide. The concept, although failing in its initial market thrust, was slightly ahead of its time. Toyota Motor Corporation has adopted a very similar four of the five Anys for its Scion vehicles. It is believed that Scion will represent a massive test of the level of automobile customization desired by consumers today. Clearly, advance thinking may yield tremendous down-line benefits. The U.S. Big-3 automobile industry has made little to no movement in this direction of customization.

The Japanese offer their domestic consumers and customers a wide variety of high-quality as well as fashionable products and services produced by Japanese, German, and other foreign firms. Japanese technological advances within many of these products and services in many cases far exceed those of American products and services, generating formidable challenges for American sales in Japan. Product characteristics are one of the factors measured to determine the success or failure of a sales negotiation; this factor includes the importance of the uniqueness of the product or service offered by the U.S. firm in the Japanese market. Tung wrote, "Given the difficulties encountered by foreign investors in establishing operations in Japan, and the general competitiveness of Japanese producers, it is important that the product or service offered by a U.S. firm be truly unique. Otherwise, the chances of a U.S. firm's gaining successful entry and penetration into the Japanese market may be severely hampered (Tung, 1984, pg. 70)."

The net conclusion would be that shortfalls in product distinction as well as product quality have hampered many American products from gaining inroads in Japan. Although high technology has made some inroads in sales to Japan, U.S. consumer electronics and automobile imports have failed to gain even a modicum of market share in the Japanese marketplace. As such, MYTH-4 has little to no merit.

Addressing Myth-5: Will the large majority of American firms conducting sales negotiations with the Japanese fare well in terms of their negotiating styles and levels of cross-cultural savvy? This question is, for all intents and purposes, the bottom-line challenge for American firms. Considering that, due to a lack of proper pre-departure training by American firms, the premature return rate of American sales people doing business with Japanese sales teams is in excess of 50%, there are definitely issues and hurdles yet to be addressed (Graham, 1986).

The selected differences in buyer-seller styles [Table-2, Appendix-C] show differences between American and Japanese presentation, selling, and relationship styles, as well as sales compensation. This representation of buyer differences was confirmed by Kelley's research, reflected in his statement, "Recognizing, rewarding, and praising employees is almost unheard of in most Japanese companies, even those based in the US. In America, of course, these are key factors that drive many salespeople to perform better. … It's not just a business thing. It's a way of life" (Kelley, pg. 28).

Although it has already been mentioned that bilingual skills can clearly present distinct advantages for a sales team, there are still more fundamental factors in the negotiation process that significantly affect negotiating with the Japanese: 1) attitude and preparation of the U.S. firm; 2) cultural awareness; 3) attitudes and preparation of the Japanese firm; 4) product characteristics; 5) personal relationships; and 6) the technology transfer environment.

Attitudes and Preparation of US Firms The U.S. Department of Commerce presented an alarming statistic: 24 out of every 25 business proposals made by American firms to Japanese counterparts died (failed) during the negotiation stage, and in many cases the reason had nothing to do with price or quality (Gehrt, 2005). For the most part, the failures were due to cultural misunderstandings. Corporate and national image were at stake, yet, over the past 20 years, cross-cultural bunglings have been prolific amongst American firms endeavoring to carry out business dealings in Japan.
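Expressed as a rate, the Commerce Department figure works out to:

\[ \frac{24}{25} = 0.96 \]

a 96% failure rate at the negotiation stage, leaving a success rate of only 4% for the proposals in that sample.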

American firms performing business activities in Japan have overwhelmingly agreed that they flailed around in unnecessary and costly fashion due to not being properly prepared to negotiate and structure business dealings with Japanese businesses. Whether it was American industrial arrogance, underestimating the sophistication of the Japanese market, its complex keiretsu business structures, or its bureaucratic economic and trade ministries, American firms soon came to realize that successfully performing business in Japan was going to take a lot more groundwork and research than previously planned (Akhter, 2003).

Cultural Awareness A comparison of American firms' preparation and the resulting Japanese buyer responses, as well as Japanese firms' preparation and the resulting business responses in the U.S., is represented in Figure-2 [Appendix-A]. The research supporting this graph has uncovered that the amount of information exchanged prior to the first offer was more positively related to joint gains for U.S. negotiators (by 12.5%) than for Japanese negotiators. The graph shows that a late first offer (after 20 days) led to lower joint gains for Japanese negotiators but higher joint gains for U.S. negotiators. Controlling for the overall proportion of information exchanged generated two distinctly different primary effects: the more information-exchange statements U.S. negotiators made prior to their first offer, the higher their joint gains; the opposite was true for Japanese negotiators, for whom the more explicit information exchanged prior to a first offer, the lower their joint gains (Adair).

Over 80% of non-Japanese businesses state that having language skills at a "social level" is essential not only to conduct business in Japan but also in non-working, social settings. Indeed, for the unprepared Westerner, the simple mechanics of living in Japan may be extremely complex in unexpected ways (McDaniel, 2000). Simply put, there are a lot of hassles for the semi-literate in a modern industrial society. Language is the first step in solving the puzzle of culture and social practices in Japan. Sherman wrote, "No matter how good your translators are, you will always be missing out on tremendous amounts of information about your company, your industry, and your marketplace if you do not speak and read Japanese (Sherman, 1990, pg. 306)."

Considering that 56% of American firms include a bilingual member on their negotiation teams due to language barriers, most of these firms come to rely heavily on that member. Given that 77% of these American firms indicated the bilingual member's role was primarily that of an interpreter, it would also suggest that the interpretation would be unlikely to be biased against the American side (Tung, 1984, pg. 66).
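Combining Tung's two percentages (the 77% figure applies to the 56% of firms that fielded a bilingual member) gives a rough sense of how common the interpreter-only arrangement was:

\[ 0.56 \times 0.77 \approx 0.43 \]

That is, roughly 43% of all the American firms surveyed used a bilingual member purely as an interpreter rather than as a negotiator.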

A good deal of research has examined the negative aspects of using a Japanese-provided interpreter and how this affects the quality of the negotiation process for U.S. firms. As stated by Koldau, "Their lack of foreign language abilities puts American negotiators at a disadvantage… The use of interpreters gives foreign negotiators better opportunity to observe the American nonverbal responses and provides more time to respond (Koldau, 1996, pg. 12)." Additionally, the Japanese may also use a tactic known as "selective understanding", a technique Japanese negotiators use to buy time to think of an appropriate response, or to ignore a question, concern, or objection. In the absence of a bilingual member, another area in which Americans are often flawed, leading to diminished quality, is the making of false assumptions. Koldau wrote, "Americans often assume that the person in the foreign negotiating team with the best English speaking ability represents the most intelligent and influential [person] in the group. This is often not the case and leads to paying most attention to the wrong person (Koldau)." This situation could lead to offending the main decision maker and ultimately hinder the ability to reach an agreement. The lack of a bilingual member could also place American negotiators at a disadvantage. Since many Japanese executives understand English, they can use it to their distinct advantage: "Americans face strong conversational disadvantages when Japanese executives use an interpreter even though they understand English. Having the double response time and being able to focus on observing nonverbal signals while the interpreter translates one's own statement are significant advantages for the Japanese side (Koldau)."

In some cases, the lack of effective translation was based upon the interpreter having a poor command of the English language, or an unwillingness to present the American firm's points of view. It may also have been that the interpreter(s) did not make exact translations, guided by the feeling that only direct communications were required between the parties, or did not understand the various innuendos within the conversations, the slang expressions, or the business acronyms unique to the product, industry, or geographic region of the U.S. (Tung).

Attitudes and Preparation of Japanese Firms Past research verified that 83% of American businesses attested that the attitude of the Japanese firm was important or very important to the success of a sales negotiation (Tung). An assessment of the factors responsible for the success of business negotiations in Japan, ranging from very important to moderately important, is presented in Figure-3 [Appendix-A].

There seems to be little doubt that sincerity, good faith, and honesty play very important roles in Japanese business negotiations.

Product Characteristics Ken Flynn, a sales manager in the U.S. subsidiary of a large Japanese electronics company who has worked in Japan, says, "As a rule, (the Japanese) make very good products, and that's what they count on in order to succeed (Kelley, pg. 1)." In the same article the author states, "…the fact that most Americans feel products the Japanese make are superior has given them a leg up on their competitors (Kelley, pg. 1)." In fact, it could be said that their reputation for building quality products is the one thing that has allowed them to excel in the world markets in a way that makes others green with envy.

Personal Relationships Culture, more than any other factor, will influence the attitude of a Japanese firm that engages in sales negotiations. Human relations in Japan, as characterized by the close bonds that exist between superior and subordinate and among peers themselves, play a significant role in the smooth operation of any human endeavor. American firms should pay due regard to this factor (Tung).

Additional studies have demonstrated that the Japanese are very relationship oriented. The Japanese feel that business negotiations run much more smoothly, and they feel much more secure in their decisions, if close trust is built up between the negotiating teams. Japanese culture would be classified as high context, collectivist, high power distance, linguistically indirect, background focused, and achieving efficiency through the reduction of transaction costs. Both cultures (U.S. and Japan) do achieve efficiency, but through different emphases (Graham, 2006). A review of Figure 4 [Appendix A] amplifies some of these differing characteristics shown through social focus and goal orientation. As Graham and Cateora pointed out in their book, International Marketing, focusing on long-term relationship building has been especially important in most international markets where culture dictates stronger ties between people and companies (Graham).

Technological Transfer Environment Technical expertise provided by U.S. firms to their Japanese partners in the past is viewed as a contributing factor to the success of business negotiations. Technical expertise might be defined as the extent to which a buyer understands the production processes and affiliated technologies related to a purchased good. The skill involves technical know­how, is specific to the good, and is developed over time. Firms gain this expertise directly through production of the component or indirectly through producing related products and conducting relevant research activities (Wallace, 1972).

A buyer’s technical expertise will, in general, assist in developing accurate and detailed specifications that can then be used within superior evaluation tools, resulting in higher supplier performance. This is the mind-set of Japanese manufacturing. The negotiation process would likely move more readily, or at least in the ‘right’ direction, if the U.S. sales team realized that a Japanese company, as a potential buyer, expects to benefit from deeper information sharing with the American firm, since the two sides can then exchange more complex technical details.

There seems to be an abundance of research negating MYTH 5, showing it to have, at best, only a modicum of truth.

LIMITATIONS AND RECOMMENDATIONS Unfortunately, this study lacked a primary research component that would have provided a perspective on specific sales teams’ successful and/or failed negotiations, as well as details of the post assessment for each outcome. Since the project covered only two countries, the United States and Japan, one can extend the results of this study to other countries only with caution. There may also be differences among regional sales territories throughout the United States that have differing levels of cross-cultural experience due to pre-existing trade flows with a given nation, which this study did not address. For example, more Asian trade is conducted on the west (California) and east (New York) coasts of the U.S. than in any other areas. In addition, differences in foreign experience, which may ease cross-cultural acceptance due to previous levels of contact and business experience within certain sectors (geographic or industrial) of the U.S., were also not addressed in this study.

IMPLICATIONS OF THE RESEARCH Further Research: It may prove more beneficial to incorporate primary research (achieved by administering field surveys or conducting sales and sales management interviews) to validate secondary research findings as well as bring forth the all-important element of currency to the research conclusions. Additionally, it is recommended that further investigation, using primary data, be conducted to address current sales negotiation differences and commonalities between the two nations. It may be of value to test cross-cultural sales negotiations within other nations that are deemed ‘difficult to do business with,’ which would include China, Nigeria, Argentina, and Russia.

Application Use: The abundance of evidence offered through this study, showing both mistakes and solutions in foreign negotiations, would be of benefit and use to all companies that are planning on or conducting overseas sales activities, particularly in Asia.

CLOSING COMMENTS Based upon the secondary research incorporated into this study and my own experiences both living and working in Japan and the U.S., it appears that significant differences exist between the negotiation styles of these countries’ respective sales teams. These sales negotiation differences, which have been shown to produce distinct variations in sales negotiations, were/are: cultural awareness, product characteristics, use of a bilingual member, and training and attitude of firms. Gavin Kennedy wrote, “My central message is that you can negotiate abroad providing that you remember that culture does influence your partner’s behavior and that if you want to do better in your negotiations you had better become aware of the influence that your partner’s culture is exerting on him or her—and, as important, the extent to which you are influenced by your own culture” (Gavin Kennedy, Negotiate Anywhere, Preface). Successful cross-cultural training and adaptation of the Japanese sales negotiation model would prove highly beneficial for American salespersons.

Japanese sales teams are better trained in cross-cultural sensitivity and international marketing as a sales focus. The research has additionally shown that the Japanese view their skills in these negotiations as being central to their success. Research has revealed that before a U.S. manager is asked to negotiate with the Japanese, extensive preparation is essential. The manager should be given a frame of reference into which the ‘strange’ behavior of the foreigner can be fitted. He/she should be armed with a set of intellectual tools to analyze what is happening during the communication. The most experienced international businesspeople are able to accept cultural differences without making value judgments. They are able to work creatively with these differences and not feel personally threatened by them.

So what can an American company do to attempt to solve these dilemmas? As a way of combating this behavior, companies need to offer negotiations training and international diversity training programs. Within the negotiations training, certain techniques should be implemented to give a more real-world experience. In terms of international diversity training, which also focuses on negotiations, the central imperative would be that managers and other employees from different cultures understand much better how and why culture(s) affect their expectations, reactions, and views of themselves and each other, including possible negative perceptions (Kent, 2004). U.S. companies must learn about Japanese lifestyles and values within cultural awareness training. They must learn what is viewed as acceptable and unacceptable behavior. They must learn how to be respectful and how to apologize for unintended disrespect. Based upon my experience, I would say that the Japanese way of life is something so unique that one would find it extremely difficult to function every day without some prior knowledge as to what is to be expected. U.S. firms have a dire need for intercultural communication training to meet the challenge of operating as a gaijin (stranger) within the Japanese social and corporate culture (Goldman 16).


WORKS CITED
- Adair, Wendi L., Tetsushi Okumura, and Jeanne M. Brett (Sept. 1, 2004). Negotiating Across Cultures.
- Akhter, Syed H. and Toshikazu Hamada (2003). Japanese Attitudes toward American Business Involvement in Japan: An Empirical Investigation Revisited. The Journal of Consumer Marketing. Iss. 20.6. Pg. 526-535.


- Anderson, Erin and Leonard Lodish (2006). Leading the Effective Sales Force: The Asian Sales Force Management Environment. Alliance Center for Global Research and Development. 40/MKT. Pg. 7-8.
- Clarke, Clifford C. and G. Douglas Lipp (1 Feb. 1998). Conflict Resolution for Contrasting Cultures. Training & Development. Pg. 20-33.
- Clark, Phillip B. (Aug. 28, 2000). B2B: U.S. Firms See Asia’s Promise. B to B. Chicago. Vol. 85. Iss. 13. Pg. 1-2.
- Cohen, Raymond (2001). Resolving Conflict across Languages. Negotiation Journal.
- Frankenstein, John and Hosseini, Hassan (July 1988). Advice from the Field: Training for Japanese Duty. Management Review. Pg. 41-42.
- Gehrt, Kenneth C., Sherry Lotz, Soyeon Shim, Tomoaki Sakano and Naoto Onzo (2005). Overcoming Informal Trade Barriers among Japanese Intermediaries: An Attitudinal Assessment. Agribusiness. Iss. 21.1. Pg. 53.
- Genestre, Alain, Herbig, Paul and Shao, Alan T. (1995). What Does Marketing Really Mean to the Japanese? Marketing Intelligence & Planning. Bradford. Vol. 13. Iss. 9. Pg. 16. 12 pgs.
- Giacomino, Don E., Michael D. Akers, and Atsushi Fujita (1999). Personal Values of Japanese Business Managers. Business Forum. Iss. 24.1-2. Pg. 9-14.
- Goldman, Alan (1994). Doing Business with the Japanese: Preparing U.S. Managers. State University of New York, Albany. Pg. 22.
- Graham, John L. (Autumn 1986). Across the Negotiating Table from the Japanese. International Marketing Review. Pg. 58-71.
- Graham, John L. and Philip R. Cateora (2006). International Marketing. Irwin Professional Pub. Pg. 338-410.
- Gross, Neil (October 15, 1990). Zen and the Art of Middle Management. Business Week. Industrial/technology edition. No. 3182. Pg. 20. Article presented in www.japan-guide.com.
- Hollensen, Svend (2004). Cross Cultural Sales Negotiation. Global Marketing. Prentice Hall. 4th Ed. Pg. 617-620.
- Hoshino, Tomohiko and Sakakibara, Ken (11 September 2007). Client Relations Become Top Priority. The Nikkei Weekly. Pg. 16.
- Ilon, Alon (2003). Academy of International Business. From http://aib.msu.edu/publications/insights/insights_v003n01.pdf
- Jai-Beom, Kim (1999). Relationship Marketing in Japan: The Buyer-Supplier Relationships of Four Automakers. The Journal of Business & Industrial Marketing. Vol. 14. Pg. 118-125.
- Japan External Trade Organization (JETRO). From http://www.jetro.go.jp/en/invest/success_stories/
- Kelley, Bill (Jul. 1991). Culture Clash: West Meets East. Sales and Marketing Management. New York. Vol. 143. Iss. 8. Pg. 28. 5 pgs.
- Koldau, Claudius (1996). Meaning of Cross Cultural Differences. ISBM Report 14.
- Kent, John (Sept. 2004). Training for International Success. Purification Magazine. Vol. 46. No. 9. Pg. 1.
- Kumagai, Fumie (1995). Families in Japan: Beliefs and Realities. Journal of Comparative Family Studies. Iss. 26.1. Pg. 135.
- Lawrence, Robert Z. and Saxonhouse, Gary R. (1991). Efficient or Exclusionist? The Import Behavior of Japanese Corporate Groups; Comments. Brookings Papers on Economic Activity. Vol. 1. Pg. 311.
- Maher, Thomas E. and Wong, Yim Yu (Winter 1994). The Impact of Cultural Differences on the Growing Tensions between Japan and the United States. S.A.M. Advanced Management Journal. Cincinnati. Vol. 59. Iss. 1. Pg. 40. 7 pgs.
- McDaniel, Edwin Ralph (2000). Japanese Negotiating Practices: Low-Context Communication in a High-Context Culture. Diss. Arizona State University.
- Mohammed Y. A. Rawwas, Ziad Swaidan, and Jamal Al-Khatib (2006). Does Religion Matter? A Comparison Study of the Ethical Beliefs of Marketing Students of Religious and Secular Universities in Japan. Journal of Business Ethics. Vol. 65.1. Pg. 69-86.
- Morimoto, Ikuyo, Miki Saijo, Kayoko Nohara, Kotaro Takagi, Hiroko Otsuka, Kana Suzuki, and Manabu Okumura (2006). How Do Ordinary Japanese Reach Consensus in Group Decision Making? Identifying and Analyzing “Naïve Negotiation.” Group Decision and Negotiation. Iss. 15.2. Pg. 157. ABI/INFORM Global. ProQuest. Rohrbach Library, Kutztown University. Kutztown. 3 Dec. 2007.
- Otsubo, Mayumi (Spring 1986). A Guide to Japanese Business Practices. California Management Review. Vol. 28. Iss. 3. Pg. 28. 15 pgs. (AN 4762574). Part VI: Statistics on Religion and Sales.
- Rudlin, Pernille (18 June 2007). In Japanese Business, Apologizing for Others Can Be Sincere. The Nikkei Weekly.
- Seawright, Kristie Kay (1994). Woodland Product Quality in the Automobile Industry: The United States and Japan. Diss. The University of Utah.
- Sherman, Linda (July 1990). Breaking the Intimacy Barrier. Japan Quarterly. Pg. 306.
- Speigel, Rob (2004). Selling to Japan by Joint Venture. Electronic News.
- Steinnhoff, Patricia and Kazuko Tanaka (Fall/Winter 1986/87). Women Managers in Japan. International Studies of Management and Organization. Vol. 16. Iss. 3/4. Pg. 108-132. 25 pgs.


- Sullivan, Jeremiah J. (1992). Japanese Management Philosophies: From the Vacuous to the Brilliant. California Management Review. Iss. 34.2. Pg. 66.
- Tadayuki, Miyamoto (04 Mar. 2002). A Marketing Model of Japanese Buyer-Supplier Relationship. Journal of Business Research. Vol. 57. Pg. 312-319.
- Tung, Rosalie L. (Summer 1984). How to Negotiate with the Japanese. California Management Review. Vol. 26. Iss. 4. ABI/INFORM Global. Pg. 65-73.
- Wallace, William McDonald (1972). The Secret Weapon of Japanese Business. Columbia Journal of World Business. Iss. 7.6. Pg. 43-52.
- Wilson, Glenn D. and Saburo Iwawaki (1980). Social Attitudes in Japan. Journal of Social Psychology. Iss. 112. Pg. 175-180.
- Yagi, Takashi (2006). Industrial Robot. Bedford. Vol. 33. Iss. 5. Pg. 359.


APPENDIX A Figure 1: Cross-Cultural Negotiations

[Figure 1 diagram: the stages of a cross-cultural negotiation, shaped by the seller’s cultural background, the buyer’s cultural background, and the cultural ‘distance’ between seller and buyer.]

Non-task related interaction: 1. Status distinction; 2. Impression formation accuracy; 3. Interpersonal attraction.

Task related interaction: 4. Exchange of information; 5. Persuasion and bargaining strategy; 6. Concession making and agreement; 7. Negotiation outcome.

Source: Hollensen, Svend. Global Marketing. Prentice Hall. 4th Ed. Pg. 618.

Figure 2: Challenges in Doing Business: Japan & the United States (Information Exchange Prior To First Offering)


Figure 3: Factors Responsible for the Success of Business Negotiations

Figure 4: Communicative Conflict in American-Japanese Negotiations

[Figure 4 diagram: Americans’ task-oriented goals and the Japanese social-relationship orientation produce goal conflict; goal conflict produces negative emotions; negative emotions foster behavioral incompatibility and constrict information processing; these in turn lower relationship quality and integrative outcomes.]

“The model implies that ‘communicative conflict’ affects the negotiation process and outcomes through the mediating role of emotions. It is postulated that goal conflict creates negative emotions among the negotiators. Negative emotions affect subsequent negotiation behaviors and outcomes by fostering behavioral incompatibility and/or constricting information processing. Behavioral incompatibility and/or constricted information processing may lower the integrativeness of negotiation outcomes. In a worst case scenario, the parties may not even reach an agreement. Furthermore, it is postulated that behavioral incompatibility will also have an adverse impact on the nature of the relationship among the parties.”

Source: Communicative Conflict in Intercultural Negotiations: The Case of American and Japanese Business Negotiations, Rajesh Kumar, International Negotiation. 4: 71, 1999.


APPENDIX B

LITERATURE REVIEW

General Knowledge on Japanese Culture:
- Clark, Phillip B. (Aug. 28, 2000). B2B: U.S. Firms See Asia’s Promise. B to B. Chicago. Vol. 85. Iss. 13. Pg. 1-2.
- Jai-Beom, Kim (1999). Relationship Marketing in Japan: The Buyer-Supplier Relationships of Four Automakers. The Journal of Business & Industrial Marketing. Vol. 14. Pg. 118-125.

Japanese vs. American Culture – Differences:
- Goldman, Alan (1994). Doing Business with the Japanese: Preparing U.S. Managers. State University of New York, Albany. Pg. 22.
- Hoshino, Tomohiko and Sakakibara, Ken (11 September 2007). Client Relations Become Top Priority. The Nikkei Weekly.
- Japan External Trade Organization (JETRO).
- Rudlin, Pernille (18 June 2007). In Japanese Business, Apologizing for Others Can Be Sincere. The Nikkei Weekly.
- Speigel, Rob (2004). Selling to Japan by Joint Venture. Electronic News.
- Tadayuki, Miyamoto (04 Mar. 2002). A Marketing Model of Japanese Buyer-Supplier Relationship. Journal of Business Research. Vol. 57. Pg. 312-319.
- Yagi, Takashi (2006). Industrial Robot. Bedford. Vol. 33. Iss. 5. Pg. 359.

Problems between Japanese vs. American Sales Teams:
- Anderson, Erin and Leonard Lodish (2006). Leading the Effective Sales Force: The Asian Sales Force Management Environment. Alliance Center for Global Research and Development. 40/MKT. Pg. 7-8.
- Japan External Trade Organization (JETRO).
- Kelley, Bill (Jul. 1991). Culture Clash: West Meets East. Sales and Marketing Management. New York. Vol. 143. Iss. 8. Pg. 28. 5 pgs.
- Steinnhoff, Patricia and Kazuko Tanaka (Fall/Winter 1986/87). Women Managers in Japan. International Studies of Management and Organization. Vol. 16. Iss. 3/4. Pg. 108-132. 25 pgs.

Religion and Selling:
- Heine, Steven (Jan. 2005). Japanese Buddhism: A Cultural History. Philosophy East and West. Honolulu. Vol. 55. Iss. 1. Pg. 125. 2 pgs.
- Maher, Thomas E. and Wong, Yim Yu (Winter 1994). The Impact of Cultural Differences on the Growing Tensions between Japan and the United States. S.A.M. Advanced Management Journal. Cincinnati. Vol. 59. Iss. 1. Pg. 40. 7 pgs.

American & Japanese Sales Processes:
- Genestre, Alain, Herbig, Paul and Shao, Alan T. (1995). What Does Marketing Really Mean to the Japanese? Marketing Intelligence & Planning. Bradford. Vol. 13. Iss. 9. Pg. 16. 12 pgs.
- http://venturejapan.com/fast-track-sales-japan-1.htm
- http://www.asianinfo.org/asianinfo/japan/religion.htm
- Maher, Thomas E. and Wong, Yim Yu (Winter 1994). The Impact of Cultural Differences on the Growing Tensions between Japan and the United States. S.A.M. Advanced Management Journal. Cincinnati. Vol. 59. Iss. 1. Pg. 40. 7 pgs.
- Mohammed Y. A. Rawwas, Ziad Swaidan, and Jamal Al-Khatib (2006). Does Religion Matter? A Comparison Study of the Ethical Beliefs of Marketing Students of Religious and Secular Universities in Japan. Journal of Business Ethics. Vol. 65.1. Pg. 69-86.
- Otsubo, Mayumi (Spring 1986). A Guide to Japanese Business Practices. California Management Review. Vol. 28. Iss. 3. Pg. 28. 15 pgs. (AN 4762574). Part VI: Statistics on Religion and Sales.
- Frankenstein, John and Hosseini, Hassan (July 1988). Advice from the Field: Training for Japanese Duty. Management Review. Pg. 41-42.
- Goldman, Alan (1994). Doing Business with the Japanese: Preparing U.S. Managers. State University of New York, Albany. Pg. 22.
- Sherman, Linda (July 1990). Breaking the Intimacy Barrier. Japan Quarterly. Pg. 306.


Cross-Cultural Negotiation Process:
- Adair, Wendi L., Tetsushi Okumura, and Jeanne M. Brett (Sept. 1, 2004). Negotiating Across Cultures.
- Mohammed Y. A. Rawwas, Ziad Swaidan, and Jamal Al-Khatib (2006). Does Religion Matter? A Comparison Study of the Ethical Beliefs of Marketing Students of Religious and Secular Universities in Japan. Journal of Business Ethics. Iss. 65.1. Pg. 69-86.
- Tung, Rosalie L. (Summer 1984). How to Negotiate with the Japanese. California Management Review. Vol. 26. Iss. 4. ABI/INFORM Global. Pg. 65-73.
- Graham, John L. (Autumn 1986). Across the Negotiating Table from the Japanese. International Marketing Review. Pg. 58-71.

Myth-1:
- Clarke, Clifford C. and G. Douglas Lipp (Feb. 1, 1998). Conflict Resolution for Contrasting Cultures. Training & Development. Pg. 20-33. Research Library. ProQuest. Rohrbach Library, Kutztown University. Kutztown. 3 Dec. 2007.
- Morimoto, Ikuyo, Miki Saijo, Kayoko Nohara, Kotaro Takagi, Hiroko Otsuka, Kana Suzuki, and Manabu Okumura (2006). How Do Ordinary Japanese Reach Consensus in Group Decision Making? Identifying and Analyzing “Naïve Negotiation.” Group Decision and Negotiation. Iss. 15.2. Pg. 157. ABI/INFORM Global. ProQuest. Rohrbach Library, Kutztown University. Kutztown. 3 Dec. 2007.

Myth-2:
- Wilson, Glenn D. and Saburo Iwawaki (1980). Social Attitudes in Japan. Journal of Social Psychology. Iss. 112. Pg. 175-180.
- Wallace, William McDonald (1972). The Secret Weapon of Japanese Business. Columbia Journal of World Business. Iss. 7.6. Pg. 43-52.
- Giacomino, Don E., Michael D. Akers, and Atsushi Fujita (1999). Personal Values of Japanese Business Managers. Business Forum. Iss. 24.1-2. Pg. 9-14.
- Gehrt, Kenneth C., Sherry Lotz, Soyeon Shim, Tomoaki Sakano, and Naoto Onzo (2005). Overcoming Informal Trade Barriers among Japanese Intermediaries: An Attitudinal Assessment. Agribusiness. Iss. 21.1. Pg. 53.
- Akhter, Syed H. and Toshikazu Hamada (2003). Japanese Attitudes toward American Business Involvement in Japan: An Empirical Investigation Revisited. The Journal of Consumer Marketing. Iss. 20.6. Pg. 526-535.
- Mohammed Y. A. Rawwas, Ziad Swaidan, and Jamal Al-Khatib (2006). Does Religion Matter? A Comparison Study of the Ethical Beliefs of Marketing Students of Religious and Secular Universities in Japan. Journal of Business Ethics. Iss. 65.1. Pg. 69-86.
- Lawrence, Robert Z. and Saxonhouse, Gary R. (1991). Efficient or Exclusionist? The Import Behavior of Japanese Corporate Groups; Comments. Brookings Papers on Economic Activity. Vol. 1. Pg. 311.
- Sullivan, Jeremiah J. (1992). Japanese Management Philosophies: From the Vacuous to the Brilliant. California Management Review. Iss. 34.2. Pg. 66.
- Maris G. Martinsons and Robert M. Davison (2007). Strategic Decision Making and Support Systems: Comparing American, Japanese and Chinese Management. Decision Support Systems. Iss. 43.1. Pg. 284.
- James W. Westerman, Rafik I. Beekun, Yvonne Stedham, and Jeanne Yamamura (2007). Peers versus National Culture: An Analysis of Antecedents to Ethical Decision-making. Journal of Business Ethics. Iss. 75.3. Pg. 239-252.
- Chiaki Nakano (1997). A Survey Study on Japanese Managers’ Views of Business Ethics. Journal of Business Ethics. Iss. 16.16. Pg. 1737-1751.
- K. Imazai and K. Ohbuchi. Conflict Resolution and Procedural Fairness in Japanese Work Organizations. Japanese Psychological Research. Iss. 44.2. Pg. 107-112.

Myth-3:
- Tung, Rosalie L. (Summer 1984). How to Negotiate with the Japanese. California Management Review. Vol. 26. Iss. 4. ABI/INFORM Global. Pg. 65-73.
- Kelley, Bill (Jul. 1991). Culture Clash: West Meets East. Sales and Marketing Management. New York. Vol. 143. Iss. 8. Pg. 28. 5 pgs.
- Seawright, Kristie Kay (1994). Woodland Product Quality in the Automobile Industry: The United States and Japan. Diss. The University of Utah.


Myth-4:
- Graham, John L. (Autumn 1986). Across the Negotiating Table from the Japanese. International Marketing Review. Pg. 58-71.
- Kent, John (Sept. 2004). Training for International Success. Purification Magazine. Vol. 46. No. 9. Pg. 1.
- McDaniel, Edwin Ralph (2000). Japanese Negotiating Practices: Low-Context Communication in a High-Context Culture. Diss. Arizona State University.
- Tung, Rosalie L. (Summer 1984). How to Negotiate with the Japanese. California Management Review. Vol. 26. Iss. 4. ABI/INFORM Global. Pg. 65-73.

Myth-5:
- Koldau, Claudius (1996). Meaning of Cross Cultural Differences. ISBM Report 14.
- Tung, Rosalie L. (Summer 1984). How to Negotiate with the Japanese. California Management Review. Vol. 26. Iss. 4. ABI/INFORM Global. Pg. 65-73.
- Dyer, Jeffrey H. and Wujin Chu (2nd Quarter 2000). The Determinants of Trust in Supplier-Automaker Relationships in the U.S., Japan and Korea. Journal of International Business Studies. Vol. 31. Iss. 2. Pg. 259-285. 27 pgs., 4 charts.
- Gulbro, Robert and Paul Herbig (1995). Differences in Cross-cultural Negotiation Behavior between Industrial Product and Consumer Firms. Journal of Business & Industrial Marketing. 08858624. Vol. 10. Iss. 3.
- Lituchy, Terri Robin (1992). International and Intra-national Negotiations in the United States and Japan: The Impact of Cultural Collectivism on Cognitions, Behaviors and Outcomes. Diss. The University of Arizona.
- Sherman, Linda (1990). Breaking the Intimacy Barrier. Japan Quarterly. Iss. 37.3. Pg. 304.
- Cohen, Raymond (2001). Resolving Conflict across Languages. Negotiation Journal.


APPENDIX C

Table 1: DESCRIPTIVE STATISTICS: POOLED SAMPLE AND BY COUNTRY

Variables     Pooled (n = 453)   US (n = 135)   Japan (n = 101)   Korea (n = 217)   Sig. Diff.
Trust         14.11              13.63          16.37             13.35             ***
Length        21.61              32.56          41.40             12.44             ***
Face          2042.56            1245.01        4989.54           1413.41           ***
Continuity    0.78               0.71           0.91              0.77              ***
Assistance    9.83               7.39           10.15             10.51             ***
Stock         0.04               0.00           0.11              0.03              ***

Note: 1. The last column indicates whether the country means are significantly different from each other (F-test). *** Country samples are significantly different at α = 0.01.

Table 2: SELECTED BUYER DIFFERENCES IN BUYER-SELLER STYLES

Asian (Japan):
- Climate: Polite, group oriented, with many idiosyncratic nuances.
- Importance: Of great importance, with long-term relationships mattering most.
- Pace: Very slow, with a lot of initial time spent on relationship building.
- Process: First, all general items are agreed upon; then details are discussed.
- Decision Making: A total group process, with all levels involved in the final decision.

Non-Asian (United States):
- Climate: Sometimes viewed as aggressive or confrontational.
- Importance: Of less importance; the focus is upon achieving final results.
- Pace: Time is money; expediting is always critical.
- Process: Ordered process, first to last.
- Decision Making: Can be either an individual or a group process.

Source: Graham, John L. and Philip R. Cateora (2006). International Marketing. Irwin. Pg. 338.


COMBINATORICS IN THE THEORY OF PRODUCTION

José Villacís González University San Pablo-CEU, Spain

ABSTRACT Applying the factors of production to production, i.e. destroying them, depends on their nature, on the quantity used and on the technology employed. This statement, while true, is not by any means the whole story. Production depends in the main – be it in terms of quantity or content – on the order in which the factors of production are employed. In this work we bring combinatorics into direct contact with the theory of production, while fully admitting the limitations inherent in technology and the nature of the production process.

Keywords: Combination, Efficient Compensation, Symmetrical Efficiency, Permutation, Processes, Repetition, Variation. (JEL D00 Microeconomic Theory)

INTRODUCTION Inputs, along with capital equipment, take part in the production process in order to produce end products. The term process is precise and at the same time broad and vague. It intuitively and scientifically conveys the idea of combination of inputs or production factors.

Our idea of combination is a very specific one and it comes from mathematics. It refers to the different ways of arranging or ordering the elements involved in a combination.

I. PRODUCTION AND FUNDAMENTAL COMBINATORIAL ANALYSIS Depending on the order in the application of productive inputs, we can find various levels of production. Following the general concepts of combinatorial theory in mathematics, this order in the application is called a combination in the application. Consequently, there will be different production volumes depending on the number of possible combinations in the application of inputs.

In this study, we mainly try to measure the number of combinations in the application and thus the number of possible production volumes. By no means are we measuring the production volumes, but only the number of the different possible volumes.

The word combination is a generic name and it includes particular cases of mathematical theory: ordinary variations (without repetition), variations with repetition, ordinary permutations (without repetition), permutations with repetition and combinations themselves.

When referring to the number of combinations in the application of inputs, we are looking at ordinary permutations. But these represent a particular case within the ambit of this study. The number of combinations in the application of inputs can be calculated using the formula m!, m being the number of inputs. If we consider the manufacture of pants, then the inputs will be: the fabric, the buttons, the thread and the zipper. Four inputs in total. The possible combinations are 4! = 4·3·2·1 = 24. But production follows its own process, and each process has a specific combination, or order in the application. In the example given, the zipper cannot be sewn if the fabric is not cut, and the fabric cannot be cut if the roll has not been previously bought, etc. Similarly, you cannot fry an egg if you did not previously put oil in the pan. In this sense, the combinatorial possibilities are limited by the required nature of each productive process. Each process means a linked sequence or partial combination. However, there are combination possibilities in production: agriculture, the chemical industry and, of course, gastronomy provide us with many examples. An efficient producer will look for the combination which provides him or her with the largest production volume. That will be the optimum combination; the rest will be under-optimum combinations. In this study we seek to count or enumerate the number of possible combinations.
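
As a minimal sketch of this counting argument (in Python, with precedence constraints invented purely for illustration and not taken from the text), one can enumerate all m! orderings and keep only those compatible with the nature of the process:

from itertools import permutations

inputs = ["fabric", "buttons", "thread", "zipper"]

# Each pair (a, b) is an assumed constraint: input a must precede input b.
precedence = [("fabric", "zipper"), ("fabric", "buttons"), ("thread", "buttons")]

def feasible(order):
    return all(order.index(a) < order.index(b) for a, b in precedence)

all_orders = list(permutations(inputs))            # 4! = 24 combinatorial orders
valid_orders = [o for o in all_orders if feasible(o)]
print(len(all_orders), "possible orders;", len(valid_orders), "feasible processes")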


II. ORDINARY OR NON-REPETITIVE VARIATIONS

We call an ordered process that arrangement of factors where the order in which the factors vi are placed is important, and not just the proportion in which the factors are combined. Each process is defined by a specific arrangement of the factors (oil, garlic, parsley ≠ parsley, oil, garlic, etc.). Let us allow for the possibility that one or more factors do not apply; the existence of the different orders that define each process will be made possible by the reality implied by the nature of each production process and by the technology (there are techniques that allow buildings to be erected from the roof down). We would say that order, a specific order, is fundamental to each process and that changing the order can make production impossible at source. Example: pan, oil and egg; it could never be egg first, then the pan and then the oil.

Let us consider two groups of factors: some are necessary and others are dispensable (we do not mean replaceable). When building a road the chippings, machinery, asphalt, etc. are necessary and fundamental, and the road signs, lighting and traffic might be considered dispensable.

Principles II.1. Let us define variant arrangements of possible production processes defined by m factors (v) arranged in groups of h elements. The number of possible processes will be Vm,h = m(m-1)(m-2) ... (m-h+1) = m!/(m-h)!, with h ≤ m.
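
The count in II.1 can be made concrete with a small helper (an illustrative sketch; the function name is ours, not the paper's):

from math import factorial

def ordinary_variations(m: int, h: int) -> int:
    # V(m, h) = m * (m-1) * ... * (m-h+1) = m! / (m-h)!
    if not 0 <= h <= m:
        raise ValueError("require 0 <= h <= m")
    return factorial(m) // factorial(m - h)

print(ordinary_variations(4, 2))  # 4 factors in ordered groups of 2 -> 12 processes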

II.2. Since each process is defined by an arrangement, each arrangement determines an output q, i.e., a level of output. This means that the different groupings of h elements will determine different groups of different processes.

II.3. The sum of the groups of possible processes in each type and the sum of the processes of all the possible h groups define the total possible output. This is an extension of points II.1. and II.2.

II.4. For each arrangement of h elements there will be an efficient process and this will be the one that determines maximum output q. The rest of the processes (for each h grouping) will be sub­optimum.

II.5. There will be a chain of efficient processes, in the form of a divergent series as the h grouping orders increase: h < m, then h+1, h+2, h+3, ..., up to h+n = m. This statement is a consequence of points II.3. and II.4. Thus we can state that, as we introduce more non-repetitive factors v, we achieve higher output. Since each arrangement involves a number of processes, we shall take the efficient ones in each grouping. We call this principle the principle of increasing output arrangement.

II.6. Having defined an output function q = f(v1, v2, ..., vi, ..., vn), we can say that increasing variant arrangements are possible: Vm,h < Vm,h+1 < Vm,h+2 < ...

II.7. The difference between the outputs for two successive groupings, Vm,h+1 and Vm,h, is a positive value of q: Δq.

II.8. Considering two successive differences of the type in point II.7, for example one relating to the processes defined by the arrangements Vm,h+2 and Vm,h+1, and another by Vm,h+1 and Vm,h, we find, as we have said, positive output values. These increases can be of two types, referred to as:
II.8.1. Δq1 < Δq2, in which there would be economies of scale.
II.8.2. Δq1 > Δq2, in which there would be diseconomies of scale.

II.9. If we establish a difference between the outputs relating to the processes defined by variations of the same order, in particular, between the efficient one and another, which is sub­optimum or inefficient, we can say that there will have been a variant economy of scale. We call this principle variant economy of scale.

II.10. If, for any variant arrangement of any order h < m, the corresponding process is invalid (Vm,h → q = 0), this will necessarily mean that a fundamental or necessary factor v is missing in one of the arrangements.


II.11. If, given two successive arrangements of groups h and h+1, both defining an equally efficient process, or two efficient processes that each determined the same level of output qi, then a useless or dispensable or indifferent factor exists in the final arrangement. If it is a repeated variation it can be regarded as a dispensable factor that is repeated and generates diseconomies of scale. But if it is an ordinary variation such as h < m, the final factor of h+1 is irrelevant and does not exist in the output function. We call this principle the principle of absolute inutility.

II.12. Given any two arrangements of the same order, they will determine processes with different levels of efficiency and thus output, but whatever the latter, these will always result in higher output than those relating to the variant arrangements of orders lower than h-1, provided that there are no irrelevant factors (in consequence of points II.1., II.2. and II.3.). We call this principle the absolute utility principle.

II.13. Given two arrangements of the same order h, each determining a process and an output associated with each process, one level of production will be higher and the other lower. But if in either of the two a factor vi is replaced by another factor vi', the value of h remaining constant, i.e., the same order, and higher output is achieved, we would say that the replacement factors are more efficient than the ones replaced. We call this the principle of factor efficiency.

II.14. If, given two arrangements of the same group or order h, one efficient and the other defining a less efficient process, a factor vi in the less efficient process is substituted by another factor vi', and this factor is efficient, it will achieve output equal to that of the efficient process. There will thus be, within each category of factors so far established (necessary and dispensable factors), some that will be efficient and others that will be less efficient. We call this principle the principle of efficient compensation.

II.15. Given two arrangements of the same order h, both with at least one efficient factor vi, if we consider the less efficient one we shall see that it defines a process that generates a level of production qi. If there is a better arrangement of the same order h, or what amounts to the same thing, that defines a more efficient process, but one without that efficient factor vi so that, even without this factor, the level of output is at least the same, we would say that this is a compensatory arrangement of efficiency. We call this principle the principle of compensatory efficient arrangement.

II.16. If a production process generated by an arrangement vm,h is efficient, it can engender an output qi, which would be the same as another engendered by a particular process by an immediately higher arrangement vm,h+1, which was not efficient. If this is the case, we will say that there exists a compensation of efficiency over and above the number of factors. We call this principle the principle of efficient compensation over and above the number.

II.17. Given different levels of output, each level or amount can be the product of different processes generated by arrangements of different values of h. Thus efficient processes with lower orders of h will coexist with inefficient processes with higher orders of h. This is a consequence of point II.16.

II.18. An arrangement Vm,h in which a necessary factor does not appear can never define a valid process; however high the order, even the case h = m-1 cannot compensate for any lower arrangement, including h = 1, if the necessary factor is present in the latter (production of cigarettes: if there are no tobacco leaves, any number of other factors can never compensate for an arrangement or process in which there are only tobacco leaves). We call this principle the principle of impossibility of compensation.

II.19. If, in an arrangement that defines a process, the order (a fixed order) is important for a set n1 of factors, while for another set n2 the order can be altered, so that n1 + n2 = h, the number of possible processes will be less than in the set of processes in which the order of all the factors can be varied. The number of processes possible will be a variation of order n2 = h - n1, i.e., Vm,h-n1, logically producing the following inequality: Vm,h-n1 < Vm,h. We shall call the set of n1 factors the fixed arrangement, and the process thus fixed the fixed order process.

II.20. If we accept, in a function of production, the existence of necessary resources, we can consider two types of fixed arrangements within a large set of fixed arrangements of order n1: one will be that of the factors necessary to each other, which we shall call n1.1, and another, n1.2, of necessary factors together with a subgroup of certain unnecessary or dispensable resources, which are susceptible to forming a fixed arrangement with the necessary ones, n1; we shall call them relatively dispensable factors.

II.20.1. We thus have n1.1 ∈ n1, n1.2 ∈ n1, n1.1 + n1.2 = n1 and n1 + n2 = h, h being the arrangement concerned (h ≤ m).

II.20.2. We shall call the fixed arrangement n1.1 the solid fixed arrangement, since without it production would be completely impossible. And we shall call the fixed arrangement n1.2 the complementary fixed arrangement.

II.21. Given a level of income, the economic unit will expend all its income on the h factors that determine the efficient arrangement, without the order affecting the expenditure in practice (p1 being the price of factor v1 and p2 the price of factor v2; thus expenditure p1v1 + p2v2 = p2v2 + p1v1). The commutative property is applicable to expenditure.

II.22. Since various processes exist for each arrangement h, one efficient and the others (the rest) sub-optimum, sub-optimum processes of various higher orders will coexist with optimum processes of lower orders (only one optimum per order), in accordance with point II.16., the “principle of efficient compensation over and above the number.” The combination of processes that, originated by variations of factors of various orders, determine the same level of output will be called the homogeneous field.

II.23. Efficient production processes are found within the homogeneous output field. And if with the same level of expenditure of an income a number of factors can be acquired and the arrangement of these factors determines an efficient process, the inefficient combination will never be chosen. We call this principle the principle of efficient expenditure.

II.24. The homogeneous output field that does not correspond with efficient, and thus cost-effective, arrangements is called the inefficient order universe, since it covers the sum of all the variations, for all possible orders, that do not define efficient processes.

II.25. If we call cost the consumption or destruction, valued in monetary terms, of the factors that are present in production, there will be a cost for each specific arrangement of factors. And since there are various processes for each arrangement h, each determining a different output, there will be as many functions of unit cost as there are arrangements. There will be a number Vm,h of functions of unit cost.

II.26. It follows from points II.5., II.23., II.24. and II.25. that there will be various cost curves associated with the inefficient order universe and, since they correspond with lower output levels, unit costs will be higher. For this reason, we can state that the lowest unit cost curve relating to a group of processes associated with each variant arrangement h, and with all the increments of h (h-1 ... h+n), is the one relating only to the efficient processes. As we said in point II.4., for each order group h of factors there will be one process that is the efficient one, the one that achieves the highest level of output (with the same number h of factors v).

II.27. If two arrangements h formed by different factors/elements separately determined two possible processes, then the possibility that they can be used together exists. We call this principle the principle of ordered activity.

II.28. If two arrangements of different orders such as h and l (h ≠ l) can each define separate processes, then used together they will define a different process. We call this principle the principle of strict additivity.

II.29. If two processes are defined by two arrangements Vm,h+Δx and Vm,h, when Δx → 0 and the difference between their two consecutive levels of output is q(h+Δx) - q(h) = Δqi, so that Δqi → 0 also, we shall then say that the arrangement is continuous. We call this principle the principle of order continuity.


III. REPETITIVE VARIATIONS Concept We shall consider those processes defined by a specific arrangement of factors, accepting the possibility that one or more factors are repeated and that one or more are not involved in the arrangement. It is a type of variant arrangement in which the order is important in the sense we have been expounding: a particular order defines a process, and thus there will be as many productive processes (and thus outputs q) as there are arrangements.

Principles III.1. We define arrangements as repeated variant arrangements when the order of the factors can be altered, one or more factors can be dispensed with, and one or more factors can be repeated. Each arrangement will determine a process and each process will engender an output.

III.2. The number of possible arrangements is Vm^h = m^h, and consequently this will be the number of processes for each arrangement of order h, m being, as we have seen, the number of productive factors involved in each process. As the order varies to h+1, h+2, ..., the number of possible processes will increase.
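
The count in III.2 follows because each of the h ordered positions can take any of the m factors independently (an illustrative sketch, not part of the original exposition):

def repetitive_variations(m: int, h: int) -> int:
    # V_m^h = m ** h ordered arrangements when repetition is allowed.
    return m ** h

print(repetitive_variations(4, 3))  # 64, versus 4*3*2 = 24 without repetition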

III.3. For a group of possible processes defined by an arrangement of order h (Vm^h) there will be an efficient solution.

III.4. Considering all the possible arrangements 0 < h ≤ m, we shall also consider all the possible processes. If each arrangement determines an efficient process, all the others being sub-optimum, there will be a set of efficient processes for the set of all the possible arrangements.

III.5. It follows from point II.17. of ordinary variations that if, given an arrangement h, inefficient processes exist (except the efficient one) that engender an output qi, the efficient solution of the orders below order h will determine some processes that also engender an output of qi. Thus there will be levels of production that are determined by more than one process of different orders. We call this principle, as in point II.16. of ordinary variations, principle of efficient compensation over and above the number.

III.6. If, given an arrangement of m factors of order h, production increments decrease as further repeated orders beyond h are taken, we say that h is the order saturation point for repetition. The essence of this argument is the law of diminishing returns: the same factor is repeatedly applied until saturation point is reached. We call this principle the principle of saturation through repetition.

III.7. Given a repetitive variant arrangement Vm^h in which a fundamental factor appears, there could never be another process equally efficient, however large its arrangement (even h + n = m - 1), if the necessary factor does not appear in the arrangement that defines that process. This is because if the necessary factor is not present the process is invalid and its output zero. Example: fertilizers, water, tractors, perpetual artificial light, working the land, etc., but no seeds. We call this principle the principle of impossibility of compensation.

III.8. Given two repetitive arrangements Vm^h and Vm^(h+1) in which the necessary factors are common to both, if saturation has not been reached, the processes of Vm^(h+1) will generate higher output than those of Vm^h. This means that both the factors and any repetition of the factors are productive. We call this obvious principle the principle of factor productivity.

III.9. Two variant repetitive arrangements within the set of order h = k = k1 + k2, Vm^(h=k), will result in different processes and also different outputs if the factor that is repeated k1 times is vi and the one that is repeated k2 times is another factor vi' (both vi and vi' are found in the arrangement Vm^(h=k)). This consequence is due to the fact that the productivity of different factors is also different. We call this principle the difference of factors principle.

III.10. The acceptance of processes defined by ordinary or non­repetitive variant arrangements does not necessarily imply the acceptance of the corresponding variant repetitive arrangement. The explanation for this is found in the possibility that the repetition of any factor may reduce output once saturation point has been reached.


III.11. Given two variant arrangements, one ordinary or non-repetitive and the other repetitive, both of the same order h, the number of possible processes is greater in the repetitive variation, Vm^h = m^h, than in the non-repetitive variation, Vm,h = m(m-1)(m-2) ... (m-h+1); that is, m^h > m(m-1)(m-2) ... (m-h+1).

III.12. In general, and due to saturation caused by repetition, a process derived from a specific arrangement (one in particular) is more efficient (we do not say totally efficient) if it is an ordinary arrangement than if it is a repetitive arrangement. This generalisation is not valid in all cases; in particular, it is not valid for repetitive economies of scale.

III.13. It follows from points III.11. and III.12. that if the number of possible processes is greater in repetitive variations, and diseconomies of scale or saturation due to repetition occur more frequently in repetitive variations, then the probability of finding inefficient processes is greater in repetitive variations than in ordinary variations. We call this the probability of inefficiency in repetition principle.

III.14. It follows from point III.13. and from point III.7. that the above principle of probability of inefficiency in repetition is partially invalid in the case where the fundamental or necessary factor is repeated. Example: two seeds are more than one seed, and three more than two. Since the saturation of necessary resources is also possible, we can only say that saturation, or the proportion of inefficient processes associated with repetition, is lower for necessary resources than for unnecessary or dispensable resources.

III.15. Since the existence of certain economies of scale (Δq1 < Δq2 < ... < Δqn) is also possible in the case of some unnecessary or dispensable factors, if these coexist with diseconomies of scale (Δq1 > Δq2 > ... > Δqn) of the necessary factors, the probability of the increased output resulting from the former being offset or absorbed by the latter is indeterminate.

III.16. Given any arrangement, if we move on to a higher arrangement, we can say that output increases proportionally if the factors are complementary in a proportion k and, in addition, this proportion continues to apply after the repetition. We call this principle complementary repetition. It can be expressed as Vm^(h=h1+h2), with h1/h2 = k. If λ > 0 and we multiply h by λ, then λ(h1 + h2) = λh1 + λh2, so λh1/λh2 = k. If a process associated with Vm^h determines a level of output qi, then after h1 and h2 (complementary) have been repeated λ times, output λqi will occur, which confirms the proportional increase.

III.17. Given a repetitive arrangement in which a factor vi is repeated a certain number of times, if vi ceases to be repeated those times and another factor vi' is repeated the same number of times more, so that the output associated with the process remains constant, we say that vi' is a perfect repetitive substitute.

III.18. The existence of a set of processes deriving from repetitive variant arrangements requires the existence of non-saturated productive factors and that the arrangement, whatever it is, be technically possible. We shall call this logical and simple principle the principle of productive rationality.

III.19. The existence of repetitive variant arrangements of order h in which at least the necessary factors are present, but which determine an inefficient process in that arrangement, implies that all the processes of higher order (h + n, up to m - 1) are also inefficient.

III.20. The economic unit will spend its income on acquiring factors of a process generated by an efficient arrangement. In terms of expenditure, the arrangement is immaterial and only the number of factors matters. We call this principle the principle of efficient expenditure.

III.21. As the income of the business increases, the acquisition of factors of successively greater orders, with each order h originating an efficient process, will also increase. The union of the infinite points of expenditure on efficient processes will determine the repetitive variant isocost line.

III.22. The variant isocost line will be different if economies of scale exist in the acquisition of repetitive factors (such as discounts for bulk purchases) than if there are no discounts.


IV. GENERAL DISTRIBUTION OF OUTPUT IN VARIATIONS IV.1. Given an efficient process in a specific arrangement within the set of repetitive and non­repetitive arrangements of order h, the return obtained by the businessman is the difference between the least efficient process and the most efficient one (which is the one we are considering).

IV.2. The return obtained by the factors is the output obtained in the least efficient process within a specific arrangement within a set of repetitive or non­repetitive arrangement of order h (argument implicit in point II.24.).

V. PERMUTATIONS Concept Let us consider the various arrangements possible when all the elements or factors of production are present. There are no dispensable factors in permutations and therefore they are all necessary. Permutations are closer than variations to the function of classic production of microeconomics in which all the factors participate.

V.1. The number of processes associated with each arrangement will be equal to the number of all possible permutations. This means that it is equal to m!

V.2. The number of processes associated with a permutative arrangement is less than the number associated with repetitive variations and, since repetitive variations determine a greater number of processes than non-repetitive variations, we can conclude that the greatest number of processes associated with an arrangement is that of the repetitive variation. On the other hand, the number of processes associated with a permutation is greater than that associated with an ordinary variation (logically, with the same number of factors in either case). Then, in the set of possible h arrangements and their classes, the number of possible processes will follow this decreasing order: Vm^h (repetitive variations) = m^h > Pm (ordinary permutations) = m! > Vm,h (ordinary variations) = m(m-1) ... (m-h+1).
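
The ordering claimed here can be checked numerically (an illustrative sketch; note that for h = m the ordinary variation count coincides with m!, so the strict final inequality assumes h < m):

from math import factorial

def counts(m: int, h: int):
    rep = m ** h                              # repetitive variations
    perm = factorial(m)                       # ordinary permutations (h = m)
    var = factorial(m) // factorial(m - h)    # ordinary variations
    return rep, perm, var

print(counts(4, 4))  # (256, 24, 24): m^h > m!, and V(m, m) = m!
print(counts(4, 2))  # (16, 24, 12):  V(m, h) < m! when h < m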

V.3. There will only be one order h = m (out of all of them) in the permutative arrangement and within this order there will be an efficient process, all the others being inefficient or sub­optimum. We call this principle the sole efficiency principle.

V.4. Of the cases of ordinary variant arrangements, arrangements with repetition, and permutations, the one with relatively more sub-optimum processes is the repetitive variation. This statement is a consequence of point V.3. and of the fact that in variations an efficient process exists for each order, so for various orders there will be a series of efficient processes. This is not the case with permutations, which only have one order, h = m.

V.5. Since for a given income, the economic unit obtains the efficient arrangement and, in addition, all the factors are necessarily obtained, we can say that the isocost line is one point. We call this statement sole efficient cost principle.

V.6. Since m! arrangements exist, and thus m! processes, the number of inefficient or sub­optimum processes is m! – 1.

VI. REPETITIVE PERMUTATIONS Concept In the arrangements of the repetitive permutation type all the elements are present, and one or more of them can be repeated. This means that all the productive factors are present and can be repeated.

VI.1. If, as we have been considering, a process exists which is associated with each arrangement, the number of processes to be considered will be Pm^(a,b,c,...) = m!/(a! b! c! ...), a, b and c each being the number of times that each factor can be repeated, so that a + b + c + ... = m.
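
This multinomial count can be computed directly (an illustrative sketch; the helper name is ours):

from math import factorial

def repetitive_permutations(repeats):
    # P_m^(a,b,c,...) = m! / (a! * b! * c! * ...), with m = sum of the repeats.
    m = sum(repeats)
    denom = 1
    for r in repeats:
        denom *= factorial(r)
    return factorial(m) // denom

print(repetitive_permutations([2, 1, 1]))  # m = 4, one factor repeated twice -> 12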

VI.2. The greater the number of times a factor is allowed to repeat, the smaller the number of processes there will be (m!/a! > m!/b! if b > a).


VI.3. Given the number of processes associated with a repetitive permutation, one will be the efficient one, the rest being sub-optimum. The number of sub-optimum processes will be (m!/(a! b! ...)) - 1.

VI.4. Given two permutations m!/a! and m!/b!, each will have its own number of processes and there will be two efficient processes, one for each.

VI.5. Given two repetitive permutations m!/a! and m!/b! in which a = b but the factor repeated is vi in one and vj in the other, if the efficient process associated (out of all of them) with Pm(a) produces higher output (is more efficient) than that of Pm(b), even though both have the same number of arrangements, we say that factor vi is more efficient than factor vj.

VI.6. Given two repetitive permutative arrangements of the same order, Vm(a,b) and Vm(c), in which a + b = c, we can say that the efficient process associated (out of all of them) with the first is greater than that of the second. This is interpreted in the sense that a process in which more than one factor is repeated is more productive than one in which a single factor is repeated intensively (in both, as can be seen, the total number of repetitions is the same, c = a + b). This is a consequence of diminishing returns.
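The claim can be illustrated with a toy production function chosen by the editor (a Cobb-Douglas-style q with exponent 0.5, purely an assumption and not the author's model): under diminishing returns to each factor, spreading c = a + b repetitions over two factors yields at least as much output as concentrating all c repetitions on one factor.

    def output(counts, alpha=0.5):
        # toy production function with diminishing returns: q = product of n_i^alpha
        q = 1.0
        for n in counts:
            q *= n ** alpha
        return q

    a, b = 2, 3
    c = a + b
    spread = output([a, b])        # repetitions shared: sqrt(2*3) ~ 2.45
    concentrated = output([c, 1])  # one factor repeated c times: sqrt(5) ~ 2.24
    assert spread >= concentrated  # holds for a, b >= 2 under this toy model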

VI.7. If alternatives exist in the repetition of different factors (for example, repeating vi alone, or vj alone, or distributing the repetitions among vi, vj and the remaining factors), then, provided that all the permutations have the same order a + b + c = m, there will be an efficient process better than the rest for each arrangement.

VI.8. It is possible that in a repetitive permutation two arrangements might occur that determine more than one equally efficient process (though not of maximum efficiency) at the same level of output.

VII. RELATIONS BETWEEN VARIATIONS AND PERMUTATIONS

VII.1. Given a process originated by an ordinary variation of degree h, the acceptance of all the remaining factors converts it into a sufficient process, which is an ordinary variation of degree h = m, that is, an ordinary permutation.

VII.2. It follows that, given an ordinary variation that determines an efficient process, in order to convert it into a sufficient and also efficient process (that is, a permutation), the arrangement of the remaining factors would also have to be efficient. We shall call this principle the principle of efficient symmetrical complementariness.

VII.3. Given an ordinary variation of degree h that determines an efficient process, if each one of the factors is multiplied by λ > 0 (λ ∈ R+), it is transformed into a variation with repetition. If the product qi also increases by λ, so that it becomes λqi, we say that the process deriving from the variation with repetition is a homogeneous transformation of the ordinary variation.

VII.4. If a variation with repetition of the type described in point VII.3. is a homogeneous transformation of the ordinary variation, and the missing factors are accepted until it becomes a permutation, such that those missing factors are also multiplied by λ, they must also be efficiently ordered. This is a consequence of point VII.2., the principle of efficient symmetrical complementariness.

VIII. COMBINATIONS

This type of arrangement takes m elements in groups of order h (groups of h factors), in which the h factors are distinct and the order of grouping is irrelevant. Example: the arrangement {a, b} excludes that of {b, a}.

VIII.1. The number of possible processes derived from a combination of h elements will be the number of these combinations: Cm,h = m!/(h!(m - h)!).
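A short Python check (editorial, with an illustrative m) enumerates C(m, h) for each order and confirms the complete-combination case C(m, m) = 1 used in points VIII.5 and VIII.6 below:

    from math import comb

    m = 6  # illustrative number of factors
    for h in range(1, m + 1):
        print(h, comb(m, h))  # C(m, h) = m!/(h!(m-h)!)

    assert comb(m, m) == 1  # all factors necessary: the single complete combination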

VIII.2. In a certain sense this is how microeconomics treats a process, since it is irrelevant whether we speak of a process comprising 10% water and 90% fertilizer or of one comprising 90% fertilizer and 10% water: they are the same, as we have said. The distinction we make is that certain factors can be omitted and arrangements of various orders can be made.


VIII.3. Among the processes defined by a combination Cm,h there will be one that is efficient. Thus, for each order there will be an efficient process; hence, as the orders vary, the number of efficient processes will vary.

VIII.4. Any arrangement of a combination will give a positive, or rational, process (∆q > 0), provided that the necessary factors are never missing (paper, at least, in the production of books). Conversely, if a necessary factor is missing, whatever the combinatorial order, the process will be invalid (∆q = 0) or irrational.

VIII.5. If all the factors or elements are necessary and the number of dispensable factors is nil, then all of them will form part of the combinatorial set, the order is h = m, and the number of possible combinations will equal unity: Cm,m = m!/(m!(m - m)!) = 1. This is the case when all the factors strictly comply with the general production function q = f(V1, V2, ..., Vn). We shall call this principle the principle of unitary combination or complete combination.

VIII.6. When all the factors are necessary and the complete combination is therefore equal to unity, it determines a single process; this process, which is the only one possible, is then an unmistakably efficient process. This is a consequence of point VIII.5. We call this principle the principle of the single solution (Cm,h = 1).

VIII.7. Given a level of income, there will be a cost in factors of an arrangement that defines an efficient process. The combination of the cost ratios with factors of efficient processes relating to various combinatorial orders will be the combinatorial isocost line.

VIII.8. In each group of combinations of order h, one efficient process exists, the rest being inefficient or sub-optimum. It is then possible that combinations of various orders might have equally inefficient processes in terms of output. The coexistence of inefficient processes of a higher order with an efficient process of a lower order, in terms of output, will also be possible.

VIII.9. If efficient processes of lower orders exist that achieve the same level of output, they will be chosen and, therefore, the factors of inefficient combinations of higher orders will not be bought.

VIII.10. If two equally efficient processes exist within the same combinatorial order (not the most efficient, since there will be only one of those), it cannot be concluded that for higher orders there is the same number of processes of equal efficiency.

VIII.11. Given two combinations of different orders that determine efficient processes, the combination of the intermediate processes of indeterminate orders does not guarantee that all these points are efficient.

VIII.12. It follows from point II.29. of ordinary variations that if there are two combinations of orders h and h + ∆x respectively, then, as ∆x → 0, there are also two processes that determine two outputs, qh+∆x and qh, whose difference ∆q → 0, and we can say that the combination is continuous. We shall call this principle the principle of continuous combination.

IX. REPETITIVE ORDINARY VARIATIONS, REPETITIVE ORDINARY PERMUTATIONS AND COMBINATIONS

From point II.29. of ordinary variations and point VIII.12. of combinations, it follows that, given a set of arrangements of any type (variations, permutations, combinations, etc.) defined between two orders h and h + n, both included (a closed interval), a process will exist that achieves a maximum output value and one that achieves a minimum value at some point between the two arrangements h and h + n.

X. CONCLUSION

We use combinatorial theory from mathematics to analyse different areas of microeconomics. We believe that in the theory of production, just as in the theory of utility, the idea of combination has been given only a generic consideration, as if production involved a mere set of goods; yet each set participates in production in a particular way. That particular way is our focus of consideration and is explained through combinatorial theory.


At the same time, in mathematics, combinatorial theory is a field that covers other more specific areas, including variations, repetitive and non-repetitive permutations, and combinations themselves. Each of these areas takes part in the theory of production as an analytical tool. However, the very nature of production places strict limitations on the combinatorial dimension. Each production process must inevitably follow a certain combination: it is useless to arrange the mineral first if there is no oven, and the oven is no good if it is not heated first. The examples are manifold. By contrast, there are more flexible production processes, which offer broad combinatorial possibilities, something that cooks know well.

We believe that this is a new scientific approach in microeconomics.



THE U.S. ECONOMIC CRISIS: IDEOLOGY VERSUS REALITIES

Reza Varjavand Saint Xavier University, USA

ABSTRACT

There is no shortage of bad news when it comes to the U.S. economy. Just when you thought the economic turmoil had been tamed, the government discloses another piece of discouraging economic news. The latest unemployment rate, for instance, released yesterday, shows that the U.S. economy has been shedding more jobs than ever before, almost 600,000 jobs per month. The current unemployment rate stands at 8.9%, which translates roughly to 13.8 million people officially unemployed. Add to this number those jobless people who are not officially counted as unemployed, and the extent of joblessness is even far greater than the official data show. Given that the unemployment rate is a lagging economic indicator, there is no improvement in the employment situation in sight. The rate is expected to climb to more than 10.5% by the end of this year. Be advised, however, that even if the unemployment rate ascends to that level, it is nothing compared to the nearly 27% that occurred during the Great Depression. So, call this crisis whatever you want, but please don't call it a Great Depression. Likewise, the statistics about the growth rate of this economy are no better than the employment picture. The latest official statistics show that the U.S. economy contracted during the first quarter of 2009 at a rate of 6.1%. My intent in writing this article is not, of course, to reiterate the bad news; it is rather to focus on a few issues that have not been given the attention they deserve. First, what does this crisis say about capitalism? Second, why is this crisis so different from the ones that happened before? And finally, what can we learn from China's triumphant experience?
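As a quick consistency check of the abstract's figures (the arithmetic is the editor's, not the author's): unemployed = rate × labor force, so 13.8 million officially unemployed at 8.9% implies a labor force of roughly 155 million, in line with the U.S. labor force in 2009.

    unemployed = 13.8e6   # officially unemployed, from the abstract
    rate = 0.089          # unemployment rate, from the abstract
    implied_labor_force = unemployed / rate
    print(f"{implied_labor_force / 1e6:.0f} million")  # ~155 million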

WHAT DOES THIS CRISIS SAY ABOUT CAPITALISM?

It says a lot. It simply shows that the system is vulnerable to misuse and manipulation by irresponsible profiteers. Capitalism is, of course, based on sound principles such as monetary incentives, competition, limited government involvement, and economic freedom. However, sometimes the outcome of the system may not be desirable, equitable, or acceptable to everyone. Without a doubt, when people are motivated by monetary incentives, the crux of capitalism, they do good things for themselves and for society. In particular, when they are challenged by competition, they try to behave responsibly and as efficiently as possible. Nonetheless, they may occasionally succumb to temptation and be overcome by greed, thus resorting to deceptive practices and fraudulent tactics. This is not, of course, the first time something as disruptive as this crisis has taken us by surprise. It is, however, deeper and more widespread this time. In the past, we have experienced instances of companies in distress because of improper decisions or deceitful practices; Arthur Andersen, among others, represents a good example. However, if there are only one or two companies in trouble, it is easier and less consequential for the economy to let them go bankrupt. But if there are too many, bankruptcy will be extremely costly for the whole economy. That is why it is almost impossible to find a painless solution at this time. Politicians have tried to find an expedient way out, but there is none.

The French president, Nicolas Sarkozy, once rightfully said that "the financial crisis is not the crisis of capitalism; it is the crisis of a system that has distanced itself from the fundamental values of capitalism, the system that betrayed the spirit of capitalism." To operate honorably, capitalism requires, first, complete transparency: well-informed consumers who are protected by law and able to judge correctly the value of what they are getting, and what is at stake, when they engage in business transactions. Second, it requires monetary integrity: people in capitalist countries have bestowed upon the government the monopoly right to issue money. Money is a contract between a government and its people that should be fulfilled fairly and honestly. Creating money out of thin air under a fiat money system to disguise the government's fiscal capriciousness makes the monetary system subject to abuse by government and jeopardizes price stability. Those who blame the Federal Reserve, the U.S. central banking system, for reinforcing this crisis may have a valid point in the sense that fiat (paper) money can potentially be misused by government. The key purpose of money is to facilitate transactions and to allow people to store value for future use. If money is misused by government, public confidence deteriorates, fiat money loses its value, and high inflation ensues. To deal with the current crisis, the U.S. government has financed 40% of its stimulus package by creating money out of thin air. We cannot easily get away from the effects of such a policy; they will materialize sooner or later.


UNIQUE ASPECTS OF THE CURRENT CRISIS

I believe this crisis, unlike the ones that happened before, is mostly due to structural changes, especially in the global economy, that have been developing for a number of years. Although this crisis was triggered by disaster in the financial sector, it happened after the market economy had triumphed for a number of years. The disintegration of the Soviet Union and China's rise to power, along with a few other countries, gave capitalism a big boost. Furthermore, there was the revolution in information technology, the consequent expanding interconnectedness among nations, and the ensuing international equalization that leveled the playing field. Technology and information moved swiftly throughout the world. In an attempt to improve their bottom line, U.S. companies moved their production facilities to cheap-resource countries. Formerly less developed nations that used to be net importers of manufactured products from the U.S. and Western countries, and exporters of cheap but strategic raw materials, became more developed and exporters of manufactured products themselves.

Take the case of Iran, for instance. Human rights issues aside, this country has improved economically to a considerable degree in recent years. The U.S. and its allies lost this lucrative market through the economic sanctions imposed on the country. It is now an economic as well as political power in the Middle East that the U.S. and the West have to reckon with. Iran is a key source of exports to neighboring countries, especially war-torn Iraq, from manufactured products to foods and construction materials, even building bricks. The country now enjoys a high degree of self-sufficiency, manufacturing everything from fighter jets to automobiles, and even launching satellites into orbit.

In a fascinating book entitled The Tyranny of Dead Ideas, Matt Miller argues that this economic crisis is largely due to irreversible forces of globalization that have not served the U.S. favorably. He explains that while the world economic paradigm has changed, we have been oblivious to it and have held on to many old-fashioned ideas that no longer serve us well. The current economic turmoil, Miller argues, has led to the erosion of our confidence in the free market system and its workability in the 21st century. Because of globalization and technological change, the U.S. economy has changed structurally, and so should its economic system. However, our business and political leaders and conservative economists have not done enough to adapt. These people are resistant to change because they are enslaved to what the author calls "dead ideas." For instance, despite the fact that globalization has led to equalization of relative wage rates, American workers are still reluctant to accept lower wages and prefer to remain unemployed. American companies can no longer make enough money to pay for the expensive benefits they are required to provide for their employees in the face of cut-throat global competition from foreign counterparts that are subsidized by their home governments. Employer-based healthcare makes jobs vulnerable to economic cycles, he says. When companies downsize or go out of business during a downturn, workers' healthcare and pension security are in jeopardy. The escalating cost of healthcare, for business firms as well as for individuals, is eating away real income and weakening the competitiveness of our industries in the global market. It is time to remove these additional burdens from private companies and expect government to provide healthcare and pension benefits for American workers.

We should challenge the notion that unconditional free trade is good for America, he argues, on the grounds that there is a conflict of interest between the companies benefiting from free trade and the national interest. His solution is to provide incentives for companies that are hurt by free trade, possibly through changes in the corporate tax system, such as basing it on value added. No doubt, people who shop at Wal-Mart or Target are reaping the benefits of free trade by being able to buy inexpensive made-in-China products. However, those losing their jobs to cheap imports do not like free trade.

As societies advance, the demand for public services provided by government increases, and consequently so does the cost of government operations. That leaves government no choice but to increase taxes, because taxes are the only major source of public revenue, especially in developed countries. Often, the gap between revenue and outlays leaves a huge deficit in the government budget; among the ill effects of this gap are the accumulation of national debt and financial dependency on foreign countries. The best way to avoid deficits is for government to collect more taxes, according to Miller. Taxes will go up regardless of what kind of administration is in power and what economic philosophy dictates its decisions, he argues. Although politicians may not tell us publicly, taxes do go up; it is a mathematical law. So, let's increase taxes and save the economy.

The current economic crisis has also revealed the unfairness of corporate compensation plans, and how the link between CEOs' compensation and their performance is nothing but a myth. Many of these CEOs were rewarded handsomely by their boards of directors despite mediocre, or often dismal, performance. The average annual compensation of a CEO in the United States is at least 36 times the average earnings of an ordinary employee of the same company. The author believes that growing public awareness of such an unfair income distribution scheme in this country "is potentially explosive," implying that public revolt against it is probable.

More importantly, the development of the dangerous idea that we can always live beyond our means and somehow get away with it has created a sense of apathy in American consumers as well as government officials. In other words, we think that we can constantly gamble our future for the sake of current gratification, a mentality that has survived for many decades and has forced us to this impasse. The inability to find a sensible solution to our debt problem has driven us to crafty strategies, some of which have immersed many of us further in financial over-commitments, debt-related problems, and the ensuing massive defaults. This situation has also contributed to widening economic inequality, since poor people do not enjoy equal access to loans and credit.

In fact, such viewpoints seem logical and relevant, indicating that we, as a nation, are faced with a much deeper and wider crisis of a different nature. It is mostly the result of causes that are external to our economy; hence, they are not entirely controllable. Traditional fiscal and monetary policy, therefore, may no longer work effectively to get us out of this crisis.

CAN WE LEARN ANYTHING FROM THE CHINESE EXPERIENCE?

The data supplied by the government show that the Chinese economy grew between 8 and 12% per year from 1991 to 2008. Even though all other major economies are in turmoil, this economy is still growing at a rate of 6.1% per year, projected by the IMF to increase to 7.5% next year. Chinese GDP has been growing non-stop for the past 17 years, defying the law of business cycles, and the country has been very successful in exploiting market forces to its advantage. Since 1978 its GDP has increased more than twelvefold, making it one of the highest-GDP countries in the world; it is currently ranked number 3.
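A back-of-the-envelope check (editorial arithmetic, assuming the twelvefold rise spans the roughly 30 years from 1978 to 2008): the implied average compound growth rate is about 8.6% per year, consistent with the 8-12% range reported above.

    growth_factor = 12.0  # "more than twelvefold" since 1978
    years = 30            # 1978 to 2008, an assumption of this check
    g = growth_factor ** (1 / years) - 1
    print(f"{g:.1%}")     # ~8.6% average annual growth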

This is thanks to a practical implementation of the market system. Deng Xiaoping once described the economic system as "capitalism with Chinese characteristics." He also said, "It doesn't matter whether the cat is black or white, so long as it catches mice," indicating the Chinese leadership's emphasis on pragmatism in economic matters as well as in foreign policy, which is primarily based on building welcoming relationships with other countries, especially its neighbors: the so-called "smile diplomacy." The successful Chinese experience points to the fact that sensible diplomacy, based on mutual respect, plays a decisive role in economic development in the age of globalization. Positive image building has been at the top of the Chinese global agenda, as manifested by its successful hosting of the 2008 Olympics. Today, many of its old enemies are strong trade partners. In other words, China has tried to defuse the legacy of fear and domination and to engage in confidence building, securing the support of its neighbors and major trade partners. To that end, trade barriers have been dismantled and export-promoting policies have been put into operation, especially through public enterprises (P.E.'s). As a result, China's heavy industries have grown, advanced, and are operating profitably. They save a big share of their profits to plow back into the state-owned companies as investment. Consequently, China need not rely on individuals for saving; most investible funds come from public enterprises. To seek the support of other countries, China has extended financial aid to many countries with no strings attached, especially to African nations. Accordingly, there has been a tremendous flow of valuable resources to China from Africa and the Middle East.

One of the costly drawbacks of the Bush administration's foreign policy, which was mainly based on a unilateral approach, was that it helped China escape being the focus of world criticism for its poor human rights record, because that honor had gone to the U.S. The decline in the role of the U.S. since the invasion of Iraq has served China quite well in building its stature as a moral and economic authority, and has turned it into a dependable world stakeholder. Chinese politics has been shaped by the desire to progress economically. The government has been very decisive, and this decisiveness has helped attract foreign investment by giving investors a sense of assurance and a safety net backed by a strong and resolute government.

What makes this country distinctively successful in its quest for economic prosperity is: 1) a high rate of saving, making the country self-sufficient and not at the mercy of other countries for funds. The rate of capital formation has been high, more than 25% per year in recent years, and most of this investment goes into the country's infrastructure, facilitating long-term economic growth. Even though foreign multinational enterprises have invested in China, theirs has been a minor role; most public investment in China is financed by the savings of government enterprises; 2) the cost of capital has been tremendously low, possibly near zero, giving Chinese business firms a competitive advantage over foreign counterparts. In addition, export promotion policies have been very successful in placing the country at the top of the high-export countries, with a huge surplus and huge foreign exchange reserves of nearly $1.5 trillion at this time; 3) the Chinese government does not allow excessive speculation in the stock market, especially by individuals. Avoiding such destabilizing speculation creates an additional safety net for investors. Even though the country has experienced a real estate boom, just as the United States did, if the price bubble bursts in China, consumers are not going to suffer as much as they did in the U.S., because most real estate properties are owned by institutional investors and by the public enterprises. Therefore, if there are losses, most of them will be absorbed by the Chinese public sector; 4) China has a reputation for being a low-price producer in the world, thanks to inexpensive labor and to state-owned subsidized business entities. That is why consumers in the U.S. can enjoy low prices at Wal-Mart and at almost all other stores; and finally, 5) another unique positive factor is that, since most of the government's revenue is generated from public enterprises, the tax burden on the private sector is fairly light. The corporate sector pays only 7% of total taxes. Such a system has made the government budget more stable because its revenue is almost recession-proof.

No doubt we will see the end of the crisis in the U.S. and the beginning of economic recovery; however, after the dust has settled, we may end up reconciling ourselves to less than what we used to have: a lower standard of living. Compared to any other country, the U.S. will be the big loser from the current economic and financial turmoil.


NAFTA’S MAIN OBJECTIVES INCLUDED THE ACHIEVEMENT OF ECONOMIC GROWTH & DEVELOPMENT IN THE FIRST FIFTEEN YEARS: WERE THESE GOALS REALIZED?

Michael M. Campbell Florida A&M University, USA

ABSTRACT

The North American Free Trade Agreement (NAFTA) between Canada, Mexico, and the United States of America eliminated tariffs on trade between these countries. Since the Agreement's implementation in 1994 there have been many reports from proponents and opponents alike. The purpose of this study is to determine whether the agreement accomplished its objectives of increased trade, economic growth, and development in the region during its first fifteen years of implementation. Changes in trade (exports and imports), Gross Domestic Product (GDP), wages earned, and employment activity within the region for the period 1994 through 2008 were calculated and analyzed. The economic data examined confirmed that NAFTA had a positive impact on the North American region, with Mexico being the greatest beneficiary economically. Changes in trade were subjected to regression analysis, which confirmed they were significant at the 95% confidence level. Continued growth in employment and wages was evident during the period, with both Canada and Mexico experiencing substantially more growth than the United States. The data supported the premise that Mexico experienced sustained positive changes in its imports/exports, GDP, wages, and employment from 1994 through 2008. Although growth in employment and wages earned could not always be directly credited to NAFTA, it was evident that the dissolution of tariffs and trade barriers had a positive economic impact on the region.
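The abstract does not specify the regression, but a minimal sketch of the kind of test it describes (an editorial assumption using an ordinary least-squares trend via scipy's linregress; the series below is a placeholder, not the study's data) would flag a trade trend as significant at the 95% level when the slope's p-value falls below 0.05:

    from scipy.stats import linregress

    years = list(range(1994, 2009))
    # placeholder annual trade values; replace with the study's 1994-2008 series
    trade = [100 + 5 * i + (i % 3) for i in range(15)]

    result = linregress(years, trade)
    significant = result.pvalue < 0.05  # 95% confidence level
    print(result.slope, result.pvalue, significant)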

NAFTA AGREEMENT

The North American Free Trade Agreement (NAFTA) was approved by the U.S. Congress, the U.S. Senate, and the Mexican Senate in November 1993. The Canadian government's approval followed shortly after, in December 1993. The agreement's main goal called for a complete removal of trade barriers within 15 years. However, within the first seven years of its implementation, many trade barriers had already been dissolved.

The objectives of this Agreement, as elaborated more specifically through its principles and rules, including national treatment, most-favored-nation status, and transparency, were to:

(a) eliminate barriers to trade, and facilitate the cross-border movement of goods and services between the territories of the Parties;
(b) promote conditions of fair competition in the free trade area;
(c) increase substantially investment opportunities in their territories;
(d) provide adequate and effective protection and enforcement of intellectual property rights in each Party's territory;
(e) create effective procedures for the implementation and application of this Agreement, and for its joint administration and the resolution of disputes; and
(f) establish a framework for further trilateral, regional and multilateral cooperation to expand and enhance the benefits of this Agreement.

PURPOSE OF STUDY

The main purpose of this study was to determine the impact of the agreement on member nations by examining changes in economic activity within the region for the period 1994 to 2008. The study attempted to determine whether the main goal of the agreement, which called for a complete removal of trade barriers within 15 years, and its first two objectives, namely i) to eliminate tariffs and facilitate the cross-border movement of goods and services between the territories of the Parties, and ii) to promote conditions of fair competition in the free trade area, were achieved. The second purpose of the study was to examine the agreement's third objective, which is to increase substantially investment opportunities in member territories through increased economic growth.


DO EXCESSIVE IPO’S AND ‘IRRATIONAL EXUBERANCE’ DRIVE OR HINDER NEW INNOVATION?

Laura Blake Mitchell College and Pace University, USA

ABSTRACT

The existence of stock market bubbles has been discussed extensively in the literature, particularly of late given our recent real estate market bubble and the accompanying collapse of the mortgage and banking sector. Stock market bubbles have given rise to behavioral finance theory, particularly with the study of cognitive biases (Shiller, 2000) that may drive the mindset of investors to groupthink and even herd behavior. Other theoretical explanations of stock market bubbles have included rational theory (DeLong, Shleifer, Summers & Waldmann, 1990), intrinsic theory (Froot and Obstfeld, 1991) and mimetic contagion (Topol, 1991). This research proposal seeks to further explore the issue of the irrationally exuberant mindset, or "a heightened state of speculative fervor" (Shiller, 2005), with regard to intangible capital growth and innovation investment. The 1920's saw significant advances in technological innovation before the stock market Crash of 1929. Following the crash, the 1930's experienced economic growth based on the foundation of intangible capital developed the decade before, producing such technologies as radio, automobiles and aviation. (Nicholas, 2005) With an emphasis on technology stocks, I examine whether the multitude of IPOs and the irrationally exuberant dynamic of the 1990s actually drove new innovation. In other words, within a free and open IPO market, does ease of funding actually lead to greater odds that real innovation progress will occur? Or does irrational exuberance hinder innovation progress as unrealistic ideas get the lion's share of attention and funding?

INTRODUCTION

The existence of stock market bubbles has been discussed extensively in the literature, particularly of late given our recent real estate market bubble and the accompanying collapse of the mortgage and banking sector. Stock market bubbles have given rise to behavioral finance theory, particularly with the study of cognitive biases (Shiller, 2000) that may drive the mindset of investors to groupthink and even herd behavior. Other theoretical explanations of stock market bubbles have included rational theory (DeLong, Shleifer, Summers & Waldmann, 1990), intrinsic theory (Froot and Obstfeld, 1991) and mimetic contagion (Topol, 1991).

This research proposal seeks to further explore the issue of the irrationally exuberant mindset, or "a heightened state of speculative fervor" (Shiller, 2005), with regard to intangible capital growth and innovation investment. The 1920's saw significant advances in technological innovations before the stock market Crash of 1929. Following the crash, the 1930's experienced economic growth based on the foundation of intangible capital developed the decade before, producing such technologies as radio, automobiles and aviation. (Nicholas, 2005)

With an emphasis on technology stocks, I examine whether the multitude of IPOs and the irrationally exuberant dynamic of the 1990s actually drove new innovation. In other words, within a free and open IPO market, does ease of funding actually lead to greater odds that real innovation progress will occur? Or does irrational exuberance hinder innovation progress as unrealistic ideas get the lion's share of attention and funding?

PAPER STRUCTURE

The paper is structured as follows: first, in the introduction and literature review section, I outline some background and history of the topic and briefly describe the streams of research concerning stock market bubbles. Next, in the hypothesis section, I address the specific aspect of intangible capital (patents) to generate specific hypotheses. In the methodology section, I describe the methods used to explore the hypotheses, briefly describing the variables and controls included to examine the relationship between investor exuberance and patent performance. In the final discussion, Results/Conclusions, I present the outcomes of my data analysis along with suggestions for further inquiry as to whether the accumulation of intangible capital will influence business survival rates after the bubble has burst.

LITERATURE REVIEW

Nasdaq, IPO Mania and the Tech Stock Bubble

"The technology stock bubble has been characterized as a period of investor exuberance in technology stocks from the years 1995 through 2000. On January 3, 1995 the NASDAQ stock index stood at 743.6. The index rose dramatically and stood at 2406 on March 10, 1999. In the next twelve months the index more than doubled as it peaked in March of 2000 to over 5100." Following this peak, the index continued to decline; by October of 2002 it closed at a low of 1163, losing five trillion dollars of market value. (Bear, McSherry & Preys, 2009)

Of that period, Shiller argued that "the present stock market displays the classic features of a speculative bubble; a situation in which temporary high prices are sustained largely by investors' enthusiasm rather than by consistent estimation of real value." (Shiller, 2000) Alan Greenspan, then Federal Reserve Chairman, referred to this phenomenon as "irrational exuberance" in a speech given in 1996, just days after Shiller had met with him.

Some have argued that the Nasdaq bubble was caused by the rise of the Internet and its associated financial market activity (van de Ven, 2003), since coinciding with this dramatic growth was the rising number and volume of Internet IPOs at the time. Internet entrepreneurs introduced e-commerce concepts ranging from successful dot-coms to the not-so-fortunate Pets.com, which early on was exiled to the dot-com graveyard.

Compare to the 1920's Technology Bubble

Erratic market swings are not new. Within our history, one of the more significant bubbles experienced by the U.S., the American stock market bubble of the 1920's, resulted in the famous Crash of 1929. The 1920's saw the genesis of tremendous new technological innovations, including radio, automobiles, aviation and the deployment of electrical power grids, driving the economic growth of the time. (Nicholas, 2005) However, studies by DeLong and Shleifer (1991) and Rappoport and White (1994) "estimate that the 1929 stock market was overvalued by around 30 percent." (Nicholas, 2005) Others contend that the market was "undervalued due to the large proportion of intangible capital held by firms," in the range of about 60 percent. (Nicholas, 2005) Innovation came about from the basic research that only larger firms with ample resources were able to conduct. "The interaction between science, R&D and demand created unprecedented growth in productivity and living standards." (Nicholas, 2005: 4) Essentially, the 1920s became an important period characterized by "technological progress and intangible capital growth." (Nicholas, 2005: 12) Would the 1990's tech bubble prove to be equally significant?

Rational versus Irrational Phenomenon

The role of the media has been examined for its culpability in the formation of bubbles. "The bull run in the roaring 1920s occurred when radio ownership became widespread, whereas the rise in equity prices in the 1960s coincided with the arrival of the television in middle-class circles." (van de Ven, 2003: 9) Similarly, speculative activity surrounded the 1990's period as Internet and e-commerce technologies came to the fore. Some investors dismissed concerns about overpriced asset valuations amid claims of a "New Economy," referring to the notion that "traditional measures of value were no longer valid because technology was changing the world so quickly and dramatically." (PC Magazine)

Emotional and cognitive biases often drive the irrational bubble phenomenon. (Shiller, 2005) Rising share prices attract the attention of investors. Typically, in financial markets a self-adjusting or negative feedback system kicks in: as prices increase, sellers take a profit and some buyers are discouraged from purchasing at such high prices, so equilibrium and self-adjustment prevail. In a bubble, however, instead of a negative feedback loop we see positive feedback, whereby buyers continue to purchase in anticipation of an ever-rising stock price. Investors participate in this bubble phenomenon as they rationalize that the risks of not doing so outweigh the benefits. (Blodget, 2008)
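A toy simulation (an editorial construction, not drawn from the cited literature) makes the contrast concrete: with negative feedback the price is pulled back toward an assumed fundamental value of 100 and settles, while with positive feedback buyers chase recent gains and the price keeps running away from value.

    def simulate(feedback, steps=20, start=120.0):
        prices = [start, start]  # two seeds so a price change is defined
        for _ in range(steps):
            last, prev = prices[-1], prices[-2]
            if feedback == "negative":
                nxt = last + 0.5 * (100.0 - last)  # pulled back toward value 100
            else:
                nxt = last + 1.0 + 1.1 * max(last - prev, 0.0)  # buyers chase gains
            prices.append(nxt)
        return prices

    print(simulate("negative")[-1])  # settles near 100: self-adjustment prevails
    print(simulate("positive")[-1])  # far above the start: bubble-like runaway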

Productivity and Progress in the Tech Sector: 1920's and 1990's Bubbles

Stock market bubbles generate a bevy of Initial Public Offerings, often within a "hot sector" (technology in the 1990s), creating a speculative fervor. The question becomes: do these hot markets lead to a misdirection of funding based on a speculative trend, rather than to firms capable of generating long-standing economic value?


Nicholas (2005) provides evidence that most of the “productivity advances which made the 1930s the most technologically progressive decade of the twentieth century may have depended on the stock of inventions accumulated in the 1920s,” claiming firms that invested heavily in R&D contributed “disproportionately to productivity growth a decade later.” (Nicholas, 2005: 15) RCA, Dupont, Westinghouse, General Electric and Kodak all contributed to the new “frontier of technological knowledge,” filing thousands of patents between 1920 and 1929. (David, 1990)

Likewise, the 1990's dot-com bubble was characterized by significant growth in technology stocks. But were the intangible assets equally important? This was an unusual climate unlike any other in our history; dot-com failures such as Pets.com and Kozmo.com received capital funding in the millions (see Table 1 in the Appendix) with few if any concrete business plans.

HYPOTHESIS DEVELOPMENT

Is irrational exuberance good or bad for innovation? "Stock market bubbles frequently produce hot markets in Initial Public Offerings, since investment bankers and their clients see opportunities to float new stock issues at inflated prices. These hot IPO markets misallocate investment funds to areas dictated by speculative trends, rather than to enterprises generating long standing economic value." (Wiki)

Looking at patenting frequency, hypotheses are developed to examine whether a relationship exists between irrationally exuberant investing behavior and the intensity of intangible capital in the tech stock market during the period 1990 to 2002. With a free and open IPO market, does the ease of funding lead to greater odds that real progress will occur?

H0: There is no relationship between irrational exuberance and innovation progress.

H1: Irrational exuberance, with its bubble­psychology of abundant money flows, helps drive innovation progress.

H2: Irrational exuberance hinders innovation progress, as unrealistic ideas get attention and funding.

DATA AND METHOD

Descriptive statistics and bivariate correlation analysis are used to examine the key variable of intangible capital (operationalized as the number of utility patents granted per year) against the number of Initial Public Offerings announced during the period for which data were available, 1990-2002. Ideally I had anticipated collecting data for 1995-2005, allowing for the five years preceding and following the burst of the 2000 bubble; however, obtaining IPO data for that period proved arduous. Patent data were obtained from the USPTO (Number of Patents Granted by Year of Patent Grant) and IPO data from Professor Jay Ritter's database at the University of Florida.

FINDINGS, CONCLUSIONS & LIMITATIONS

The results reflect an unclear relationship between the two variables, rejecting H1. The r = -.40 indicates a moderate negative correlation, a reflection of the continued slight rise of patents against the drop in IPOs by the late 1990s. Further testing the strength of the relationship between IPOs and patents, I squared the correlation coefficient and multiplied by 100 to obtain the statistic known as variance explained (R²). R² = 16%: only 16% of the variance in Y is "explained" or predicted by the X variable, which is not significant. A negative correlation of r = -.40 does not significantly support the hypotheses. The scatter plot below depicts the weak correlation between the two variables.


Exhibit 1

[Scatter plot: Patents (y-axis, 0 to 200,000) plotted against IPOs (x-axis, 0 to 800); the points show no clear linear pattern.]

Year  Patents  IPOs
1990  90365    104
1991  96511    273
1992  97444    385
1993  98342    483
1994  101676   387
1995  101419   432
1996  109645   621
1997  111984   432
1998  147517   267
1999  153485   457
2000  157494   346
2001  166035   76
2002  167331   69

Correlation between Patents and IPOs: r = -0.4006.
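The reported correlation can be reproduced directly from the tabulated data (a minimal editorial sketch; numpy is assumed available):

    import numpy as np

    patents = [90365, 96511, 97444, 98342, 101676, 101419, 109645,
               111984, 147517, 153485, 157494, 166035, 167331]
    ipos = [104, 273, 385, 483, 387, 432, 621, 432, 267, 457, 346, 76, 69]

    r = np.corrcoef(patents, ipos)[0, 1]
    print(round(r, 4))          # ~ -0.4006, matching the table above
    print(round(100 * r ** 2))  # R^2 ~ 16% of variance "explained"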

Rejecting H2, irrational exuberance and heavy IPO activity appear not to have hindered the progress of intangible capital. However, further analysis hints that this exuberant behavior may have influenced the number of patents filed. When examining year-to-year percentage changes in the two variables (Exhibit 2), we see that the year-to-year percentage change in IPOs dropped as the bubble effect took hold; patents, however, did not follow the same trend.

Exhibit 2. Annual percentage changes, 1990-2002

[Line chart: annual percentage changes in Patents and IPOs, 1990-2002; y-axis from -100 to 200.]

Patents continued to hold steady, with slight or moderate year-to-year fluctuations, but did not decline along with IPOs. Of particular note is a spike in patents in 1998 followed by a spike in IPOs in 1999, perhaps reflecting a delayed response effect of the irrational exuberance mindset (similar to marketing, where current marketing expenditure influences future sales revenue), whereby innovators and inventors may have believed that an opportunity existed even though the economic decline of the market was beginning. As such, further study in this area of behavioral finance could prove fruitful by examining a fourth and fifth hypothesis: Is there a delayed effect of irrational exuberance on innovation after the bubble has burst? And, following Nicholas' (2005) point regarding the growth of the 1930's and the sustainability of companies that held high patent rates, does the accumulation of intangible capital (patents) influence business survival rates after a bubble bursts? This could be ascertained by future research collecting additional data (if available), adding the important variable of venture capital funding amounts measured against business successes and failures over the period and into the following decade, beyond the scope of this paper.

REFERENCES

Bear, McSherry & Preys (2009). Investing in the Newest of New. Discussion Paper, March 28, 2009.
Blodget, Henry (2008). Why Wall Street Always Blows It. The Atlantic, December 2008. http://www.theatlantic.com/doc/200812/blodget-wall-street
Carr, David S. (2001). The Technology Stock Bubble: Review and Outlook. Working Paper, April 25, 2001.
David, Paul (1990). The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox. American Economic Review, Papers and Proceedings 80(2): 355-361.
DeLong, B. & Shleifer, A. (1991). The Stock Market Bubble of 1929: Evidence from Closed-End Funds. Journal of Economic History 51(3): 675-700.
Froot, K. and Obstfeld, M. (1991). Intrinsic Bubbles: The Case of Stock Prices. American Economic Review 81(5): 1189-1214.
Nicholas, Tom (2005). Do Intangibles Cause Stock Market Bubbles? Evidence from the Great Crash. London School of Economics, March 4, 2005.
PC Magazine Encyclopedia. New Economy. http://www.pcmag.com/encyclopedia_term/0,2542,t=New+Economy&i=47933,00.asp
Rappoport, P. and White, E. (1994). Was the Crash of 1929 Expected? American Economic Review 84(1): 271-281.
Ritter, Jay. IPO Data. University of Florida. http://bear.cba.ufl.edu/ritter/ipodata.htm
Shiller, Robert (2002). The Irrationality of Markets. The Journal of Psychology and Financial Markets 3(2): 87-93.
Shiller, Robert J. (2000). Irrational Exuberance. Princeton, NJ: Princeton University Press.
Topol, Richard (1991). Bubbles and Volatility of Stock Prices: Effect of Mimetic Contagion. The Economic Journal 101(407).
Van de Ven, Johannes (2003). Wall Street: Driven by Optimal Rationality or Irrational Exuberance? SCG Occasional Paper #11, Louvain, March 2003, pp. 1-36.


APPENDIX

Annual percentage changes

Year  Patents (%)  IPOs (%)
1991  6.8          162
1992  0            41
1993  1            25
1994  3.3          -19
1995  0            11
1996  8            43
1997  2            -30
1998  32           -38
1999  4            71
2000  3            -24
2001  5            -78
2002  0            -9

Table 1. Notable dot-com failures

Name          Years of Operation  Type of Business                                        Money Lost (U.S. dollars)
Webvan        1999-2001           Home grocery delivery                                   1.2 billion
Pets.com      1998-2000           Pet supplies                                            82.5 million
Kozmo.com     1998-2001           Fast home delivery for items such as snacks and DVDs   280 million
Flooz.com     1998-2001           Online currency (alternative to credit cards)           35 million
Boo.com       1998-2000           Online fashion store                                    160 million
MPV.com       1999-2000           Sporting goods endorsed by sporting celebrities         85 million
Go.com        1998-2001           A portal to Disney company sites                        790 million
Kibu.com      1999-2000           Social networking site for teenage girls                22 million
GovWorks.com  1999-2000           Portal for working with municipal governments           15 million

Source: Harrington, 2008


THE PERFORMANCE OF PIPELINE SYSTEM IN THE SUPPLY CHAIN OF WATER FOR INDUSTRY IN THAILAND

J. Wachirathamrojn and S. Adsavakulchai University of the Thai Chamber of Commerce, Thailand

ABSTRACT

Water resources development and management in Thailand strives to pioneer diversified water-related business through complete integration, encompassing water management technologies, pipeline systems, and distribution operations. Water for industry in Thailand is principally engaged in the development and management of the major water distribution pipeline systems in the Eastern Seaboard area, and in the procurement of raw water from government agency sources for commercial distribution to end users. This vision is realized through an emphasis on infrastructure development and the use of modern technologies to ensure customer satisfaction. The goal is to serve areas with strong potential across the service areas. Water from natural resources is the upstream stage of the supply chain of water for industry; the service involves the pumping and distribution of raw water from the main storage reservoirs through the major water pipeline network. The main objective of this study is to analyze the performance of the water distribution pipelines and their annual distribution capacity. Accordingly, this research proposes a performance model for pipeline management of the raw water distribution system serving industrial clients on the Eastern Seaboard, framed as a stochastic frontier model of a business with a high level of operational consciousness.

Keywords: Pipeline System, Water for Industry, Thailand.


RISK MANAGEMENT FRAMEWORK FOR AGRO­FOOD SUPPLY CHAIN: A CASE STUDY OF AGRO­FOOD SUPPLY CHAIN IN THAILAND

T. Wiwatthanathorn and M. Baramichai University of the Thai Chamber of Commerce, Thailand

ABSTRACT

The agro-food sector is one of the industries with the greatest risk exposure. Agro-food risk factors comprise both controllable and uncontrollable factors, such as market demand, regulation, weather conditions, pests, disease, global climate, and volatile market prices. However, due to the effect of globalization on business, there has been a major evolution in the risk management of the agro-food industry. The traditional perception of risk, restricted to problems caused by climatic and natural phenomena, has become insufficient. Nowadays, the growing importance of risk factors affecting the agro-food industry is accentuated directly and indirectly by local, regional, and global economics, marketing efforts, agricultural policies, and technological advancement. It is critical for risk management practice to consider every circumstance along the entire supply chain as a possible risk factor, since one circumstance can create a chain reaction that impacts the others. An integrated perception of risks within the agro-food supply chain will stimulate all relevant parties in the chain, including farmers, mediators, distributors, processors, transportation providers, and exporters, to put collaborative effort into risk management activities.

The purpose of this paper is to illustrate the development of a risk identification framework for the Thailand agro-food supply chain. Thailand is one of the countries in which agriculture is considered a strategically important economic sector. To develop the framework, we reviewed the literature on generic risk management frameworks and then applied it to agricultural practices in Thailand. In addition, to ensure coverage of all global issues, we also gathered information directly from the major agro-food world exporters in Thailand in order to develop more understanding of their actual risks. In our risk identification framework, all relevant risks are classified according to their basic sources into six groups: production risks, marketing risks, financial risks, legal risks, human resource management risks, and natural risks. Each of these risks is then related back to the relevant parties in the agro-food supply chain, based on whether they are risk objects or risk factors. In addition, together with the risk identification framework, we developed a simulation-based model framework to be used as a basis for evaluating the magnitude of risk impacts.

Our risk identification framework attempts to provide an integrated approach to managing risks in the agro-food supply chain. Other industries with similar characteristics could adapt our framework and apply it to their own risk management systems.

Keywords: Agro-Food Supply Chain, Risk Management, Agriculture



COMPARING WAREHOUSE MANAGEMENT SYSTEM BETWEEN RETAIL AND WHOLESALE BUSINESS IN THAILAND

S. Adsavakulchai1 and K. Juthamanee2 University of the Thai Chamber of Commerce1 and Boontavorn Co. Ltd.2, Thailand

ABSTRACT A Warehouse Management System (WMS) is a system to control the movement and storage of materials within a warehouse; the role of the WMS is expanding to include light manufacturing, transportation management, order management, and complete accounting systems. Retail businesses use a WMS to facilitate the coordinated movement of merchandise and information throughout the distribution process. In addition, some retail businesses in Thailand have created a set of centralized processes to automate, manage, and integrate replenishment and distribution and to simplify the change management process. Wholesale businesses using a WMS generally ship in larger quantities, which minimizes the lifting of single units of product. Currently, one of the wholesale businesses in Thailand is developing a cash-and-carry, self-service wholesale store network upcountry, moving from mass marketing to direct marketing. On-going research aims to standardize supply chain practices across the business, covering store systems, optimization, and information technology solutions, in order to achieve a step change in distribution center productivity and capability.

Keywords: Warehouse Management System, WMS, Retail, Wholesale, Thailand



WEB APPLICATION OF PREVENTIVE MAINTENANCE FOR PRIVATE BUS IN BANGKOK

S. Kitti and S. Adsavakulchai University of the Thai Chamber of Commerce, Thailand

ABSTRACT Fleet maintenance management is geared toward fleet owners who operate a sizable fleet of vehicles and for whom the maintenance and management of these assets is of critical importance. Preventive maintenance (PM) leads to more efficient operations and therefore to substantial cost savings and reduced pollution. A well designed PM program can extend the life of vehicles and equipment and reduce costly repairs. The program presented here provides the information necessary to track and schedule preventive maintenance, allowing fleet owners to improve the productivity of vehicles, equipment, facilities, and personnel. Moreover, the capacity of fleet owners is enhanced by developing the system with the understanding that "prevention is better than cure".

Keywords: Web Application, Preventive Maintenance, Private Bus



AGRO-FOOD SUPPLY CHAIN MANAGEMENT IN DEVELOPING COUNTRIES

A. Sutcharitrungsee and M. Baramichai University of the Thai Chamber of Commerce, Thailand

ABSTRACT An efficient supply chain has been identified by many as a driving force for the future growth of agro-food industries in developing countries. The difficulty of developing such an integrated value chain lies in transforming the quality demanded at the wholesale or retail stage into good production processes at the producer stage, and in developing supply-chain management with a reliable tracing and tracking system. This paper uses the SCOR model as a tool to identify, measure, and evaluate agro-food supply-chain management. The framework focuses on five decision areas of the supply chain, PLAN, SOURCE, MAKE, DELIVER, and RETURN, used to analyze, improve, and communicate within the supply chain membership. The results show the structure of agro-food supply chain management and the relationships and collaboration of the members across the five decision areas, describing the processes and activities of supply chain performance, including strategies for balancing demand and supply, supplier network management, production, quality control, and the transportation network. Each area is a link in the supply chain that is critical to moving a product successfully along the chain. Supply chain strategy is most important for the competitive position of all supply chain members in developing countries.

Keywords: Agro-Food Supply Chain, SCOR Model, Developing Countries


SECTION 2 SCIENCE & TECHNOLOGY


ESTABLISHING THE EXISTENCE OF LOCALIZED STRUCTURE USING VARIATIONAL DYNAMICS

Thomas K. Vogel Stetson University, USA

ABSTRACT In the late 17th century Isaac Newton helped formalize the idea of mathematically describing the evolution of an observed process by summing the underlying forces involved in the process. About a century and a half later, in the 1830s, a physicist by the name of William Hamilton offered a paradigmatic reformulation of Newtonian mechanics. Hamilton successfully demonstrated that the whole of physics could be described in such a way as to never use the concept of force. Hamilton was able to establish a relationship between certain quantities representing the energy of a physical process and the governing equations of motion established by "summing the forces". Hamilton's theory required a more sophisticated version of calculus than was developed by Newton and his contemporaries. This more sophisticated calculus came to be known as the calculus of variations. The mathematical rigor behind variational calculus was developed by Leonhard Euler and Joseph-Louis Lagrange. By exploiting the duality between these two different (but equivalent) views of modeling physical phenomena, it becomes possible to reverse engineer physical systems. This paper examines a technique by which certain types of localized phenomena (known as solitons) can be established in nonlinear dynamical systems by taking advantage of this dual formulation of a physical system. The approach to constructing solutions is detailed using two physical systems of current research interest. The first model describes ion transport across a cell membrane, which is of great significance to research in biology, microbiology, and biochemistry. The second model, which is reverse engineered using variational principles, is an evolutionary description of one-dimensional wave propagation through what are known as microstructured solids. Microstructured solids represent a hotbed of research activity in areas such as reconstructive surgery and robotics, and are being considered as a possible replacement for hydraulic systems in high performance aircraft.

Keywords: Solitons, Variational, Optimization, Nonlinear, Dynamics

1. INTRODUCTION
The modern perspective on formulating the mathematical model for a physical system is based on two separate paradigmatic views. These differing paradigms, though, arrive at a completely equivalent result. The first perspective is commonly attributed to work done by Isaac Newton in the late 17th century. This Newtonian approach argues that the evolution of the state of a process can be described entirely in terms of the forces involved with the process [3]. This approach can be thought of as a manifestation of an Aristotelian "cause and effect" view of the universe. The philosophical and mathematical framework for the second perspective on model construction is rooted in two publications [1,2] in the 1830s by a physicist named William Hamilton. Within these papers, Hamilton proposes a theory of dynamics which describes the whole of classical (Newtonian) physics without the use of forces. Instead, the physical process under consideration evolves in such a way as to extremize the integral of the difference between the kinetic and potential energies. Philosophically this is in stark contrast to the development offered by the Newtonian approach. Instead of viewing the evolution of a process as the result of external influences, the Hamiltonian approach argues that physical processes have a calculable intent to evolve in a particular manner.

Examples of variational problems in mathematics can be traced back several thousand years through history. One of the earliest examples of variational problems can be found in Virgil's Aeneid [4]. According to Virgil, Dido, daughter of the king of the Phoenician city-state of Tyre, fled to the North African coast after her brother Pygmalion killed her husband Sychaeus. There she pleaded with the local ruler, King Iarbas, for land. The King granted the woman as much land as could be enclosed with the hide of a bull. Legend says that she cut the hide into very thin and long strips and laid it out in a semi-circle with the sea forming the remaining side. The land later became the city of Carthage (c. 814 BC) and the woman its ruler, Queen Dido. While legend and fact may not necessarily agree, this demonstrates that the idea of extremum principles has existed for millennia. About eight centuries later, Hero of Alexandria (c. 10-70 AD) proved the first recorded scientific minimum principle. He was able to show that the trajectory of a reflected light ray is a minimum if the angles of incidence and reflection are equal. This idea was later formulated as a least time principle by Fermat in the early part of the seventeenth century.



The mathematical machinery necessary to investigate variational problems was found to require more than the elementary calculus developed by Newton [3] and Leibniz in the mid-seventeenth century. In 1696, Johann Bernoulli proposed the brachistochrone problem, which can be summarized as: "If two points are connected by a wire whose shape is given by an unknown function y(x) in a vertical plane, what shape function minimizes the time of descent of a bead sliding without friction from the higher to the lower point?" [6]. This problem was addressed by several of Bernoulli's contemporaries, and what arose from these investigations was a new type of calculus. This calculus is known today as the calculus of variations and was developed into a full mathematical theory by Euler around 1744 [7]. The mathematics developed by Euler was extended by Joseph-Louis Lagrange (1736-1813). Lagrange discovered that Euler's equation for minimizing a functional integral (later to be named the Euler-Lagrange equation) could be expressed in a compact way by simply using integration by parts. It was Lagrange who introduced the integrand of the functional appropriate to mechanics, i.e., the difference between potential and kinetic energies. Euler had essentially only considered the kinetic energy, which amounted to requiring additional conditions to get a correct picture of classical mechanics [8].

Since the solution(s) of a dynamical system is one which extremizes this quantity called the action, it follows from this principle that methods can be devised for finding approximate solutions. Consider taking an ansatz (trial function) which represents the actual (exact) solution. In general this ansatz will contain fewer degrees of freedom than the exact solution. Even with a reduction in the number of degrees of freedom, Hamilton's Principle will still lead to a solution which, in some sense, will be as close as possible to the correct solution. However, due to the reduction in the number of degrees of freedom, it cannot generally be expected that the function will achieve the actual extremum of the full problem. One can still expect to find an extremum, however, which should be "near" the exact solution. This method has long been used in applications from geometry through quantum mechanics. In particular, it was in this spirit that a procedure for finding eigenfunctions and eigenvalues of linear differential equations was developed independently by Rayleigh and Ritz; it is known in the modern mathematical literature as the Rayleigh-Ritz method [9]. This usage of variational methods continued into the first half of the 20th century, and even up into the late 1980s. During this time, quantum mechanics and all of modern physics were born, and with these there came a critical need to obtain numerical values for comparison with experiments. Up until the 1950s there were no electronic computers, and even in the latter half of the 20th century such devices were generally only available at large government labs or universities. There arose a need for methodologies which aided in obtaining approximate solutions without the high-end personal computing power enjoyed today. As an answer to this need, there were major efforts to utilize variational methods for developing approximate solutions, as well as perturbation expansions of such, for various physical systems. In this time period, nonlinear problems were generally not studied as such, except to the degree that one could expand about some solvable linear problem or some known analytical solution. Thus it is not surprising that the work of this period concentrated mainly on linear eigenvalue problems and their perturbations, with variational methods playing leading roles. The need for this development of variational methods began to decline after the 1970s, with the increasing availability of calculators and computers. This marked the end of an era and the beginning of a new one. Within a decade, it would no longer be important to carefully and analytically expand solutions of equations of motion in order to obtain three-to-four place accuracy in their numerical values, when with the touch of a few keys the desired value would appear almost instantly and with 8-to-14 place accuracy.

Solitons are a certain type of mathematical solution which occurs in a vast number of nonlinear evolution equations. The first recorded physical observation of a soliton was made by John Scott Russell in 1834. Russell was a civil engineer by trade, and among his many accomplishments was the development of a system of hull construction which revolutionized 19th century naval architecture. He was the first person to offer steam carriage service between Glasgow and Paisley, around 1834. Russell is also responsible for some of the first observations of the Doppler shift of the sound frequency of a passing train. One day in 1834, Russell was working on establishing a conversion factor between steam power and horse power. To this end, he had rigged an apparatus in which a pair of horses was tethered to a boat along the Union Canal outside of Edinburgh. As the horses moved along the canal with the boat in tow, the apparatus binding the horses to the boat snapped. As the boat came to an abrupt halt, Russell observed a great swell of water which formed around the bow of the boat. Suddenly this "mound" of water which had been gathering around the boat sprang forward and began propagating down the Union Canal. What struck Russell as odd about this occurrence was the absolute lack of dissipation or attenuation of the propagating water wave; he followed it for several kilometers before the wave finally exited the canal. The odd thing about this particular phenomenon is that solutions to the water wave equations (as they were formulated in the early part of the 19th century) did not allow for such behavior. Russell dubbed this water wave a "wave of permanent form". Much skepticism surrounded Russell's claim, and he spent many of his remaining days trying to recreate this wave in an experimental water table in his garden.



It wasn't until 1895 that two Dutch mathematicians, D.J. Korteweg and G. de Vries, successfully constructed a mathematical model which affirmed John Scott Russell's observation sixty years earlier. The key to establishing such solutions is the correct formulation of the governing model. Korteweg and de Vries derived what is known today as the KdV equation (1), which correctly describes the behavior of shallow water waves; the governing equation is a nonlinear evolution equation. The disbelief of Russell's contemporaries was due to the fact that their models of water wave dynamics were linear in nature. Due to the complexity of solving nonlinear evolution equations, research in the area of solitons stalled until the 1960s, when it was revived by the advent of the modern computer.

ut uux uxxx  0 (1)

In the mid 1960s a mathematician by the name of Martin David Kruskal ran the first numerical simulation of interacting solitons in the KdV equation. This early work contributed much to our current understanding of these rather exotic types of mathematical constructs. What Kruskal was able to demonstrate not only advanced applied mathematics, but forced mathematicians and physicists to reconsider the very notion of what is meant by interacting waves. Kruskal found that when two solitons interact, they exhibit destructive/constructive interference much like any wave phenomenon observed in nature, though they do not linearly superimpose on one another. The difference is that upon interacting, the solitons return to their original state. That is to say, once the interaction has taken place, each soliton re-establishes its original shape, velocity, and other governing physical characteristics. Understanding these types of mathematical constructs has led to some of the more profound advancements of the last few decades, most notably fiber optic and wireless communication over a global network. This paper will introduce a novel way to establish such localized structure (i.e., solitons) without the difficulties encountered by techniques which require working with the equation from the Newtonian perspective.

2. THEORY
Throughout the last several decades many techniques have been developed for establishing solutions to nonlinear differential equations. These techniques are characterized by their limited reach in solving large classes of problems. Many of them deal with methodologies targeted at the differential equation itself (i.e., the system as it is developed in the Newtonian sense, by considering the net forces acting on the system). Consider a nonlinear differential operator N for which there exists some solution u such that equation (2) is satisfied.

N[u] = 0    (2)

This operator represents the Newtonian formulation of the system under consideration. Depending on the structure of N, solutions to equation (2) most likely cannot be found directly. In fact, it is difficult to make any generalization about (2) without imposing further structure on N. Instead of restricting N to a certain class of nonlinear differential operators, consider a paradigmatic reformulation of (2). Suppose this nonlinear operator is the "derivative" of some associated "energy" functional L, as given in equation (3).

[u ]   (L [u ]) (3)

Using (3), equation (2) may now be written in terms of the energy functional, as indicated in (4). This establishes a duality in which solutions to equation (2) may be equivalently recognized as the critical points of the functional L. This is, in essence, the heart of Hamilton's Principle. This approach enabled William Hamilton to describe the whole of Newtonian mechanics without having to consider the evolution of a system in terms of external forces. In modern mathematics this "energy functional" has a name; it is referred to as the Lagrangian.

δ(L[u]) = 0    (4)

Suppose the Lagrangian, L, corresponding to the physical system of interest is known, and that this Lagrangian is a functional of the variable u(t). (The variable u(t) may be a scalar, vector or tensor quantity.) In the present work, we shall only consider a one­dimensional scalar case, where the integration variable is t. The action, S[u], is defined by equation (5) where D is the domain of support of the function u.

S[u] = ∫_D L[u] dt    (5)

Hamilton's Principle states that the evolution of a dynamical system between two specific states is an extremum of the action functional given by (5). More formally, Hamilton's Principle states that the solution to a given dynamical system, u(t), is


determined by (6) for any bounded variation δu(t), provided that this variation vanishes at all end points of the domain D. Note that this also defines the quantity δL/δu, which is called the (first) variational derivative of L.

lim_{ε→0} (S[u(t) + ε δu(t)] − S[u(t)])/ε = ∫_D (δL/δu)[u(t)] δu(t) dt = 0    (6)

In terms of the nonlinear differential operator N, this establishes a connection between the governing equation(s) of motion and the first variational derivative of the Lagrangian, as seen in equation (7).

(δL/δu)[u] = N[u]    (7)
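As a concrete illustration of the duality expressed in (7) (a standard textbook example, not taken from this paper), consider a Lagrangian of mechanical form; its first variational derivative reproduces the familiar Newtonian force balance.

% Standard textbook example; V(u) denotes an arbitrary smooth potential.
\[
L[u] = \tfrac{1}{2}(u')^{2} - V(u),
\qquad
\frac{\delta L}{\delta u}
= \frac{\partial L}{\partial u} - \frac{d}{dt}\,\frac{\partial L}{\partial u'}
= -V'(u) - u''.
\]

Setting this variational derivative to zero recovers u'' = −V'(u), so in the notation of (7) the associated Newtonian operator is N[u] = −u'' − V'(u).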

This paradigm shift offered by Hamilton allows for a rather novel approach to approximating solutions of evolution equations for which a Lagrangian can be established. Suppose the physical characteristics (geometric or otherwise) of a particular type of solution to the equation of motion given by (2) are known. For instance, an ordinary soliton could be described in terms of a traveling "lump" having some associated amplitude and width. Of course, depending on the governing system, the solution could have other identifying characteristics such as position, velocity, chirp, phase, etc. An ansatz can then be constructed in terms of parameters representing those physical characteristics. Let u₀(t; qᵢ) be the ansatz, where qᵢ is some finite collection of parameters representing the aforementioned physical characteristics, on which u₀ depends. Note that these parameters could also depend on any other independent variables, such as t. With the functional form of u₀ fixed, we now vary the parameters qᵢ. The variations of the qᵢ's give the set of equations determined by (8).

S L[u ]dt 0 (8)  0 qi qi D

Note that (8) presumes the qᵢ's are constant. If the parameters are assumed to depend on time, the partial derivative becomes a functional derivative. Once this is done, the qᵢ's are determined in the sense that we have the equations (algebraic or differential, depending on the structure of the qᵢ's) whose solution(s) determine a "best fit" for the parameter values as per Hamilton's Principle.
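As a minimal illustration of this recipe (a toy problem chosen for this sketch, not one of the models treated below), the following code applies (8) to u'' − u + u³ = 0, whose exact soliton is u = √2 sech(ζ); the Gaussian parameters A and σ play the role of the qᵢ, and sympy is an assumption of the sketch.

import sympy as sp

z, A, s = sp.symbols('zeta A sigma', positive=True)
u0 = A*sp.exp(-z**2/s**2)          # Gaussian ansatz u0(zeta; A, sigma)

# A Lagrangian whose Euler-Lagrange equation is u'' - u + u^3 = 0.
L = sp.Rational(1, 2)*sp.diff(u0, z)**2 + sp.Rational(1, 2)*u0**2 \
    - sp.Rational(1, 4)*u0**4

# Action S(A, sigma), then the constraints dS/dA = 0 and dS/dsigma = 0.
S = sp.integrate(L, (z, -sp.oo, sp.oo))
sol = sp.nsolve([sp.diff(S, A), sp.diff(S, s)], [A, s], [1.4, 1.7])
print(sol)  # A ~ 1.37, sigma ~ 1.73; the exact soliton peak is sqrt(2) ~ 1.414

The variational amplitude lands within a few percent of the exact peak even though the Gaussian has the wrong tail behavior, which is the sense in which the constrained extremum is "near" the true solution.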

3. EXAMPLE I: A MODEL FOR ION TRANSPORT ACROSS A CELL MEMBRANE

3.1 The Model
A model used in ion transport across a cell membrane is given by the generalized transport equation (9). As detailed in section 3 of [10], this equation is somewhat familiar to applied mathematicians: equation (9) is simply a generalization of the Burgers equation. The authors of the aforementioned paper reduce the partial differential equation to an ordinary differential equation using a standard Lorentzian transformation, then establish the existence of ordinary solitons by calculating the corresponding homoclinic orbits in phase space. This approach is not uncommon and, to the authors' credit, it was successful; it is just somewhat complicated. There is a much simpler way to establish soliton solutions of equation (9) using Hamilton's Principle.

vtt vvx vt vxx  f ()v (9)

The nonhomogeneity in the equation of motion has a polynomial structure, f(v) = P_n(v), as is common with transport models. The coefficients of the higher-order temporal and spatial derivatives are real-valued, τ, κ ∈ R⁺, as is the solution v(x, t) itself. For the choice of polynomial f(v) = −εv(v² − z₀²), where ε, z₀ ∈ R, we obtain a modified form of this generalized Burgers equation, given by equation (10).

22 vtt vvx vt  vxx  v(v z0 )  0 (10)



In order to reduce the PDE to an ODE, a procedure similar to that used in [10] is implemented. Traveling wave (TW) solutions of equation (10) are readily found through the use of the Lorentzian transformation ξ = x − λt. This reduces the model given by (10) to the nonlinear ordinary differential equation (11), where h = τλ² − κ and the prime notation indicates the derivative with respect to ξ.

hv'' + (v − λ)v' + εv(v² − z₀²) = 0    (11)

Under the scaling u = v/z₀ and a corresponding rescaling ζ of the traveling-wave variable, equation (11) may be written as (12), where the constants k and c are determined by ε, z₀, h, and λ.

u''kcu '  (k  1)u u3  0 (12)

Equation (12) represents the model from the Newtonian perspective. It has terms which can be interpreted as force, acceleration, damping, nonlinear driving, etc. Instead of approaching the search for soliton solutions by way of this formulation of the model, let us instead consider an equivalent Hamiltonian formulation.

3.2 The Variational Approximation
The search for localized structure begins with an analysis of the eigenvalues of the spectrum of the linearized equation. That is to say, we consider the possible values of the extrinsic parameter k and the wavespeed c for which localized solutions (ordinary solitons) may exist. Any localized solution will have a vanishing amplitude for large |ζ|. Hence it is necessary that the eigenmodes of the associated linear problem be exponential in nature. The linearization of (12) has the eigenvalues given in (13).

λ± = −(1/2)kc ± (1/2)√(k²c² − 4(k − 1))    (13)

This necessarily mandates that the wavespeed of the TW solution have a minimum propagation rate, given by (14), and imposes the condition k > 1 on the extrinsic parameter.

c² ≥ 4(k − 1)/k²    (14)

Hamilton's principle [1,2] states that the evolution of a dynamical system described by the generalized coordinates q = (q₁, q₂, ...) between two specific states q₁ = q(t₁) and q₂ = q(t₂) is an extremum of the action functional given by (15).

S[q(t)] = ∫_{t₁}^{t₂} L(q, q̇, t) dt    (15)

That is to say that the solution to a dynamical system with an associated action defined by (15) must satisfy (16).

δS/δq(t) = 0    (16)

The localized solutions to (12) will be approximated by the Gaussian trial function u₀(ζ) = A exp(−ζ²/σ²).

Hamilton's Principle will be employed to determine whether or not localized structure such as u₀(ζ) exists for a given value of the extrinsic parameter k and wavespeed c. The Lagrangian from which the governing equation of motion (12) arises is given by (17).

L = [ (1/2)(u')² − (1/2)(k − 1)u² + (1/4)u⁴ ] exp(kcζ)    (17)



The action is then calculated (over the entire real line) using (17) evaluated at the variational trial function u₀(ζ). The result of the integration leads to the action (18), where Γ = exp(k²c²σ²/16).

S = (√π A²/(16σ)) [ 2A²σ²Γ + √2 Γ²(4 + σ²(4 − 4k + c²k²)) ]    (18)

Hamilton's Principle (16) would be satisfied exactly if the action were evaluated at the exact solution. While u₀(ζ) is by no means the exact solution, it is representative of solutions which likely do exist in the equation of motion (12). Thus, as our lowest-order approximation to the VA, (16) will be imposed on (18). Requiring that ∂S/∂A = 0 and ∂S/∂σ = 0 leads to the algebraic constraint equations for A, σ, k, and c given by (19) and (20), respectively.

4A²σ²Γ + √2Γ² [ 16 + 8σ²(2 − 2k + c²k²) + c²k²σ⁴(4 − 4k + c²k²) ] = 0    (19)

4A²σ²Γ(8 − c²k²σ²) + √2Γ² [ 16 + 8σ²(2 − 2k + c²k²) + c²k²σ⁴(4 − 4k + c²k²) ] = 0    (20)
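Because printed forms of expressions like (18)-(20) are easy to mistranscribe, the integration can be reproduced symbolically. The sketch below assumes the Lagrangian (17) and the Gaussian trial function as reconstructed above; sympy is a tool of this sketch, not one named by the paper.

import sympy as sp

z, A, s, k, c = sp.symbols('zeta A sigma k c', positive=True)
u0 = A*sp.exp(-z**2/s**2)          # Gaussian trial function

# Lagrangian (17) evaluated at the trial function.
L = (sp.Rational(1, 2)*sp.diff(u0, z)**2
     - sp.Rational(1, 2)*(k - 1)*u0**2
     + sp.Rational(1, 4)*u0**4)*sp.exp(k*c*z)

# The action S(A, sigma; k, c); the Gaussian integrals close in terms of
# exp(k**2*c**2*s**2/16), i.e., the quantity Gamma in (18).
S = sp.simplify(sp.integrate(L, (z, -sp.oo, sp.oo)))
print(S)
# The constraints (19)-(20) then follow from sp.diff(S, A) and sp.diff(S, s).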

For a fixed value in the parameter space (k, c) satisfying (14), the corresponding solutions (A, σ) of (19) and (20) represent the (zeroth-order) variational solution. Most often in nonlinear systems, solitons occur in infinite families; that is, the geometric features (amplitude, width, etc.) of the localized solution are continuous functions of some feature of the model (such as the propagation rate). With this in mind, equations (19) and (20) were solved numerically by first choosing a value of the extrinsic parameter k. The width may then be determined implicitly, as a function of the propagation rate c, by equation (21).

2Γ [ 16 + 8σ²(2 − 2k + c²k²) + c²k²σ⁴(4 − 4k + c²k²) ] − √2σ²(8 − c²k²σ²) = 0    (21)

In turn, the amplitude of the variational solution may be established from equation (22), using the ordered triple (k, c, σᵢ), where σᵢ satisfies (21).

(4 (4  4k  c2k 2 ) 2 ) A2  (22) 22 2

Numerical plots of the variational solution space for fixed values of k are provided in figures 1 and 2. Note that from considerations of the linear spectrum, k must be larger than 1. It was difficult to obtain solutions to the transcendental algebraic system determined by (19) and (20) for values of k larger than about 8.
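A hedged sketch of how such solution families can be traced numerically is given below. Rather than transcribing (19)-(20), it differentiates the action as reconstructed in (18) by finite differences; the parameter ranges are illustrative only, and the root-finder may fail where no soliton family exists.

import numpy as np
from scipy.optimize import fsolve

def action(p, k, c):
    # Action (18), as reconstructed above; Gamma = exp(k^2 c^2 sigma^2 / 16).
    A, s = p
    G = np.exp(k**2 * c**2 * s**2 / 16.0)
    Q = 4.0 - 4.0*k + c**2 * k**2
    return (np.sqrt(np.pi) * A**2 / (16.0 * s)) * \
           (2.0*A**2*s**2*G + np.sqrt(2.0)*G**2*(4.0 + s**2*Q))

def grad_action(p, k, c, h=1e-6):
    # Central finite differences for dS/dA and dS/dsigma.
    A, s = p
    dA = (action([A + h, s], k, c) - action([A - h, s], k, c)) / (2*h)
    ds = (action([A, s + h], k, c) - action([A, s - h], k, c)) / (2*h)
    return [dA, ds]

k, guess = 2.0, [1.0, 1.0]                 # illustrative starting point
for c in np.arange(1.0, 2.0, 0.05):        # sweep the wavespeed
    guess = fsolve(grad_action, guess, args=(k, c))
    print(c, guess)

Reusing each converged point as the next initial guess is the continuation idea that makes tracing a one-parameter family of solitons practical.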

Figure 1: Variational Solutions; Plots of Solutions to Equations (19) and (20)

3.3 A Refinement of the Variational Approximation
The utility of the results in the previous section resides in providing data for the geometric characteristics (i.e., amplitude and core width) of the trial function which represents the soliton for a given value of the propagation rate and extrinsic parameter k. Since this data only approximates the information (A, σ; k, c), it can still prove difficult to find the exact numerical solution. To this end, a technique will be employed to refine the results of the VA. It is possible to improve


upon the accuracy of the results in section 3.2 by examining the first order correction to the variational calculation contained therein. One of the earliest methods to estimate the validity of a variational approximation was given in [15], where Desaix, Anderson, and Lisak investigated how well a variational solution preserves the next higher-order invariant beyond the Hamiltonian. In 2007, the author of this paper, along with David Kaup, devised a simple variational perturbation scheme in which the first order correction to the variational approximation can be obtained. This procedure (as well as the analysis behind the technique) is outlined in [11]. Consider taking a perturbation expansion for the exact soliton solution u_e, as indicated by (23). In this expansion, ε represents a small ordering parameter.

ue u01(;)()qi  u   (23)

The term u₀ in the above expansion is the variational solution obtained as per the results of section 3.2. If the perturbed soliton solution given by (23) is substituted into the action (15) and the result is expanded about u = u₀, the result is the expansion given by (24).

L 22  L S L[]u dt  (u  u )dt u ()(')'()t u t dtdt  O  3 (24) 0 u 1 2   u t u t 1 1 D D 0 2D D '   ( )  ( ') 0

By varying the first order term (order ε) in u₁, the result is simply the equation of motion evaluated at the variational trial function. While the trial function will not satisfy Hamilton's Principle (25) exactly, (25) can almost be satisfied if u₀ is close to an exact solution.

(δL/δu)|₀ = 0    (25)

The nonzero residual resulting from the "distance" between the exact solution and the variational trial function will then be balanced by the next higher-order term (of order ε²). This is accomplished by taking (δL/δu)|₀ to be of order ε, which results in equation (26).

(δL/δu)|₀ = εR[q](t)    (26)

If equation (26) is substituted into equation (24), the expansion no longer contains terms of order ε. Instead, the lowest correction is of order ε², and upon varying u₁ we obtain equation (27). This equation governs the first order correction to the variational approximation.

∫_D (δ²L/(δu(t) δu(t')))|₀ u₁(t') dt' = −R[q](t)    (27)

Thus, for the results in section 3.2, the first order correction is the corresponding solution to equation (28), where the nonhomogeneous term R(ζ) is given in (29).

u1''kcu 1 '  (k  1  3u 0 )u 1  R ( ) (28)

1322    R A 2Ack 2 2k A 3 ( )4 exp 2  4   2    2    1  exp  2  (29)      

While (28) cannot be solved analytically, it can be solved numerically. It is worth noting that equation (28) is linear. This is no coincidence; in fact, this procedure will always produce a linear differential equation governing the first order correction to the variational approximation. This particular equation (28) was solved numerically for the correction to the variational amplitude by utilizing a linear shooting method. The boundary conditions on (28) are u₁ → 0 as ζ → ±∞. Equation (28) will have two homogeneous solutions which converge to a linear combination of the eigenfunctions exp(λ±ζ) as



ζ → ±∞, plus a particular solution. Observe that the eigenvalues λ± are defined in (13). With the data obtained from the lowest order variational approximation in section 3.2 and the first order correction to the amplitude of the variational solution from section 3.3, it is now possible to find the solutions numerically.
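The following sketch indicates how (28) can be solved in practice. The paper uses a linear shooting method; for brevity this sketch substitutes scipy's collocation boundary-value solver, and it takes R(ζ) from the reconstruction of (29) with illustrative parameter values (the value of σ here is an assumption).

import numpy as np
from scipy.integrate import solve_bvp

k, c, A, s = 2.0, 1.0, 1.07973, 1.0     # illustrative values; sigma assumed

def R(z):
    # Residual (29), as reconstructed above.
    g = np.exp(-z**2 / s**2)
    return A*g*(4*z**2/s**4 - 2/s**2 - 2*k*c*z/s**2 + k - 1) \
           - A**3 * np.exp(-3*z**2 / s**2)

def rhs(z, y):
    # First-order system for (28): y = [u1, u1'].
    u0 = A*np.exp(-z**2 / s**2)
    u1, v1 = y
    return np.vstack([v1, R(z) - k*c*v1 - (k - 1 - 3*u0**2)*u1])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])     # u1 -> 0 at both ends

z = np.linspace(-10.0, 10.0, 200)
sol = solve_bvp(rhs, bc, z, np.zeros((2, z.size)))
print(sol.status)                        # 0 indicates convergence

Because (28) is linear, the solver's Newton iteration converges immediately, which is the same property that makes the shooting method attractive.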

3.4 Results
In order to find localized solutions of equation (12), it is necessary to know in advance the values of the extrinsic parameters as well as the initial data. This information can then be fed into any standard ODE integration routine, and it is precisely what has been obtained through the variational approximation. Using the data displayed in figure 1, values of k, c, A, and the core width for prospective localized solutions can be obtained. For a particular value in (k, c) space (as determined by the variational approximation), equation (12) is numerically integrated with initial conditions u(0) = A and u'(0) = 0. The initial condition for the derivative at zero stems from the symmetry of the soliton-type solutions: the peak of the soliton core is taken in alignment with the origin (i.e., where the derivative vanishes). The data for the initial amplitude is also obtained from the analysis seen in figure 1, but is adjusted to account for the first order correction presented in section 3.3. Figures 2-4 contain the results of this process for some selected solutions.
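A minimal sketch of this verification step, assuming the reconstructed form of (12) and using the figure 2 data for illustration, might read:

import numpy as np
from scipy.integrate import solve_ivp

k, c, A = 2.0, 1.0, 1.07973           # values from figure 2, for illustration

def rhs(z, y):
    # y = [u, u']; from u'' + kc*u' + (k-1)*u - u^3 = 0 (equation (12)).
    u, v = y
    return [v, -k*c*v - (k - 1.0)*u + u**3]

sol = solve_ivp(rhs, [0.0, 10.0], [A, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])                   # should decay toward zero for a soliton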

Figure 2: u() for k=2, c=1, A=1.07973 Figure 3: u() for k=3, c=1, A=1.86985

Figure 4: u() for k=5, c=1, A=3.775

4. EXAMPLE II: ONE-DIMENSIONAL LONGITUDINAL WAVE PROPAGATION IN MICROSTRUCTURED SOLIDS

4.1 The Model
In recent years, applied research pertaining to microstructured solids has become more prevalent. One area of interest concerns what are known as Shape-Memory Alloys (SMAs). SMAs have potential applications in areas such as aeronautics, the development of reconstructive surgical tools, and robotics [12]. Recent work [13,14] has led to the realization that solitary wave solutions (localized structure) exist in such models. The work done in finding localized solutions was based entirely on numerical integration with periodic boundary conditions. This section examines the use of a variational approximation to find localized ordinary solitons. The accepted model for 1-D longitudinal wave propagation through microstructured media [13,14] is based on a higher-order KdV-type equation. The model (30) incorporates third and fifth order dispersion, as well as first and third order nonlinearities. The nonlinear potential P(u) is given by (31).



ut [P (u )]x duxxx buxxxxx  0 (30)

P(u) = (1/2)u² + εu⁴    (31)

In the above cited paper [13], the authors numerically solve for solitary wave solutions in the logarithmic parameter range given by 0.8 ≤ log(b) ≤ 2.4 and 1.4 ≤ log(d) ≤ 4.8. It turns out that equation (30) can be scaled in such a way as to reduce the dimension of the extrinsic parameter space; this extra degree of freedom was left in the model when it was numerically integrated by Salupere et al. [13]. The transformation given by (32) accomplishes this task.

v(x, t) = (1/2) u(√(2d) x, t)    (32)

Under this transformation, the stationary state ODE (33) governing 1-D longitudinal wave propagation reduces to a one-dimensional parameter space; the transformation also rescales a coefficient of the nonlinearity. Note that in equation (33) the coefficient of the fourth order dispersion is determined by γ = b/(2d²).

cv − (1/2)v² − (1/4)v⁴ − v'' − γv'''' = 0    (33)

4.2 The Variational Approximation
Once again, the search for localized solutions begins with an analysis of the eigenvalues of the linearized spectrum. That is to say, we consider the possible values of the extrinsic parameter γ and the wavespeed c for which localized solutions (ordinary solitons) may exist. Any localized solution will have a vanishing amplitude for large |ζ|. Hence, for solitons to exist, all four eigenvalues must remain real. The linear problem has the eigenvalues given in (34).

1  1  4c  1  1  4 c (c ;  )  ,  (c ;  )   (34) 22

To keep all four eigenvalues real, it is necessary that the parameters obey the conditions γ < 0 and 0 < c < −1/(4γ). An image of this region can be seen in figure 5.

Figure 5: Permissible region for (the possible) existence of ordinary solitons
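The region in figure 5 can be reproduced with a few lines of numerical root-finding. The sketch below assumes the reconstructed linearization of (33), so the characteristic polynomial is γλ⁴ + λ² − c = 0; the sampled parameter values are illustrative.

import numpy as np

def all_real(gamma, c, tol=1e-10):
    # Roots of gamma*lam^4 + lam^2 - c = 0 (linearization of (33)).
    roots = np.roots([gamma, 0.0, 1.0, 0.0, -c])
    return np.all(np.abs(roots.imag) < tol)

for gamma in [-0.05, -0.5, -2.0]:
    cs = [c for c in np.linspace(0.01, 3.0, 300) if all_real(gamma, c)]
    print(gamma, (min(cs), max(cs)) if cs else 'none')
# The admissible band of c shrinks as |gamma| grows, consistent with the
# condition 0 < c < -1/(4*gamma) stated above.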

The modality for the variational approximation employed here is identical to that used in section 3. The Lagrangian from which equation (33) is derived is given by (35). The search for localized solutions will be facilitated by using a Gaussian trial function u₀(ζ) = A exp(−ζ²/σ²), as in the previous section.



L = (1/2)cu² − (1/6)u³ − (1/20)u⁵ + (1/2)(u')² − (γ/2)(u'')²    (35)

Integrating the Lagrangian density over all space yields the action (36) which forms the basis of the variational method.

2 3  5  2  3 2  S cA2 A 3  A 5  A 2  A 2  (36) 4 18 100 4 4 3

The algebraic constraints for which the variational solution exists can be determined from the associated Euler-Lagrange equations. This is accomplished by varying the action with respect to the core amplitude A and the core width σ. The resulting algebraic system is given below in equations (37) and (38).

302c4 103A  4  35A 3  4  302  2  902   0 (37)

225 2c4 50 3A  4  9 5A 3  4  225 2  2  2025 2   0 (38)

The width of the core can be determined explicitly in terms of the amplitude and wavespeed (39). Numerical solution curves of these associated Euler-Lagrange equations are given in figures 6 and 7. Using γ as the control parameter, the equations are solved in terms of the amplitude A and wavespeed c. The equations are cubic in the amplitude, hence there are three possible branches of solutions. As the numerics indicate, there are solutions along two of these branches for 0 > γ > −1.1, and only a single branch of solutions for γ < −1.1. A bifurcation in the variational data occurs somewhere in a neighborhood of γ ≈ −1.1.

195 2  (195 2)23  4(255 2c  60 3A  12 5A )(  1935 2 )  2  (39) 2(255 2c  60 3A 12 5A3 )

Figure 6: Variational solution curves beneath the bifurcation point (γ = −1.1)



Figure 7: Variational solution curves beyond the bifurcation point (γ = −1.2)

As outlined in section 3.3, a first order correction to the variational data can be obtained. Substituting the expansion (23) into the governing equation of motion (33) and following the same procedure as in the previous model, the first order correction u₁ can be established as the solution to (40).

γu₁'''' + u₁'' + (u₀ + u₀³ − c)u₁ = R(ζ)    (40)

In the above equation, u₀ is the trial function evaluated at the variational solution data obtained for the lowest order approximation. The nonzero residual which drives the solution for u₁ is given in equation (41). As complicated as this equation may seem, keep in mind that it is still a linear differential equation and can be solved numerically with most commercially available mathematics software packages.

 2  22 2 3   Ae 22 R( ) 4c 8  8 6  16 2 4  192  2 2  64 4  2A  8e   A 3  8e (41) 4 8 

4.3 Results
Once again, the results obtained from the variational approximation provide values for the extrinsic parameters and the initial data necessary to integrate the stationary state ODE (33). Integrating this equation is a bit trickier than for the stationary state ODE considered in section 3. Since this model is a fourth order differential equation, additional initial conditions are needed. The first and third derivatives vanish at ζ = 0 by virtue of the symmetry of the solution. The initial amplitude, u(ζ = 0) = A, is taken to be the lowest order variational amplitude plus the first order correction for the appropriate values in (γ, c) space. This leaves open the question of the initial value of the curvature, u''(ζ = 0). As it turns out, the governing equation (33) has a constant of motion which supplies precisely this information. If equation (33) is multiplied by u' and integrated, equation (42) is obtained.

d/dζ [ (1/2)cu² − (1/6)u³ − (1/20)u⁵ − (1/2)(u')² − γu'u''' + (γ/2)(u'')² ] = 0    (42)



With the above observations regarding the third derivative in mind, this constant of motion can be solved for u'', allowing the curvature at ζ = 0 to be determined. The resulting expression for the curvature, given in equation (43), depends explicitly on the variational data obtained in the previous section.

21 2 1 3 1 5  cA A  A (43)  3 10

Figure 8 offers such a solution, overlaid with the variational trial function evaluated at these parameter values. As can be seen from this graph, the variational approach establishes localized solutions with relatively high accuracy. The bulk of the "error" appears in the tails of the soliton. This is somewhat expected, since the Gaussian trial function has a different decay rate than the standard soliton-type solution forms (such as a sech²(ζ) structure).

Figure 8: Plot of the exact solution (solid curve, from numerical integration) vs. the variational solution (dashed curve) for A = 1.747, γ = −0.1, c = −1

5. CONCLUSIONS
The results obtained in this paper demonstrate the robustness (and relative accuracy) of using Hamilton's Principle to obtain localized structure in nonlinear evolution equations. These techniques can be implemented for just about any type of solution to a nonlinear (or linear) partial or ordinary differential equation, provided there exists some general understanding of the geometric characteristics of the desired solution (i.e., it must be possible to construct a reasonable ansatz). It has been shown [11] that the variational method can fail to give reasonably accurate results in situations such as tracking soliton-on-soliton interactions in a governing system. The approach outlined in this work has the advantage of establishing solutions with relative ease when compared to some of the more complicated approaches available (e.g., inverse scattering techniques, calculating homoclinic orbits in phase space, etc.). In fact, this methodology is accessible enough that advanced undergraduate students with a good deal of mathematical maturity can use it in their own research projects. The author has had several undergraduate students pursue this type of research at Stetson University (where there is a robust year-long undergraduate research program for seniors), and those students have had a good deal of success with their respective projects.

REFERENCES
[1] William R. Hamilton, On a General Method in Dynamics, Philosophical Transactions of the Royal Society of London, part II for 1834, pp. 247-308.
[2] William R. Hamilton, Second Essay on a General Method in Dynamics, Philosophical Transactions of the Royal Society of London, part I for 1835, pp. 95-144.
[3] Isaac Newton, Andrew Motte (translator), Principia, Prometheus Books, UK, 1995.
[4] Virgil, Robert Fitzgerald (translator), Aeneid, Vintage, Reissue Ed., 1996.
[5] Gardner, C.S., Greene, J.M., Kruskal, M.D. & Miura, R.M., Method for solving the Korteweg-de Vries equation, Phys. Rev. Lett. 19, 1095-1097 (1967).
[6] Margie Hale, Essentials of Mathematics, Mathematical Association of America (2003).



[7] W.W. Rouse Ball, A Short Account of the History of Mathematics, Martino Publishing, 2004.
[8] http://www-history.mcs.st-andrews.ac.uk/Mathematicians/Lagrange.html
[9] George B. Arfken and Hans J. Weber, Mathematical Methods for Physicists, Academic Press, 4th ed., 1995.
[10] Vsevolod A. Vladimirov, Ekaterina V. Kutafina and Anna Pudelko, Constructing Soliton and Kink Solutions of PDE Models in Transport and Biology, Symmetry, Integrability and Geometry: Methods and Applications, Vol. 2 (2006), Paper 061, 15 pages.
[11] D.J. Kaup, T.K. Vogel, Quantitative measurement of variational approximations, Physics Letters A 362 (2007) 289-297.
[12] http://www.cs.ualberta.ca/~database/MEMS/sma_mems/sma.html
[13] Olari Ilison, Andrus Salupere, On the propagation of solitary waves in microstructured solids, ICTAM, 15-21 August 2004, Warsaw, Poland.
[14] Maugin, G.A., Christov, C., Nonlinear duality between elastic waves and quasi-particles in microstructured solid, Proc. Estonian Acad. Sci. Phys. Math., 46:78-84, 1997.
[15] M. Desaix, D. Anderson, M. Lisak, Phys. Rev. A 40 (1989) 2441.



DEVELOPMENT OF AN EXPERT SYSTEM FOR GEM IDENTIFICATION

Kiattisak Phongkusolchit1 and Tomas Velasco2 The University of Tennessee at Martin1 and Southern Illinois University Carbondale2, USA

ABSTRACT The objective of this research is to develop an expert system for gem identification. This expert system can identify 73 gems, classified as precious, semi-precious, and synthetic stones. The required information can be collected with the naked eye and traditional non-destructive instruments. Gemstones not included in the knowledge base require higher technological measurements or very specialized instrumentation. Since this research is aimed at people with no background in gemology, the developed expert system provides fundamental information and leads the user through basic gem identification. The system questions the user about the stone to be identified, provides explanation facilities to clarify terminology, and uses a graphic interface to provide a clear view of the identification process. After the user responds to the questions, the system processes the collected information and looks for a match in the knowledge base, identifying the possible gems according to the given information.

Keywords: Expert System, Expert System Development, Gem Identification, Knowledge­Based System, Computer Application.

INTRODUCTION
The concern of artificial intelligence (AI) is intelligent behavior in artifacts (Nilsson, 1998). Expert systems are one of the most successful applications of AI. The application of expert systems helps us solve existing problems or make decisions (Lee, Liu, & Lu, 2002; Prasad, Ranjan, & Sinha, 2006; Zhang, Chu, Chen, Zha, & Ji, 2006). Expert systems were developed to collect knowledge about human endeavors in specialized fields in response to the lack of availability of experts in those arenas. These fields include, but are not limited to, medicine, engineering, and business. Gemology is a specialized area related to both science and business; however, few people have knowledge about gemology.

Gemology is the science of gemstones. Gems consist of natural, synthetic, and treated materials, along with simulant diamonds and imitation colored stones (Lu & Shigley, 2000). Stones considered gems possess the following five qualities: beauty, durability, rarity, portability, and fashionability. Some people may not know that diamonds come in a variety of colors besides colorless; there are green, pink, red, and other colored diamonds in the business as well. Not only diamonds but other gemstones share this characteristic. It is therefore confusing, sometimes even for an expert, to identify a stone (Nassau, 2000). For example, given a loose transparent blue stone, it is very difficult to tell what that stone is by appearance. It could be a tanzanite or an iolite; however, it could be a more expensive stone such as a diamond or a sapphire. In general, people may not be familiar with either tanzanite or iolite, and it would be a costly mistake for a customer to purchase a piece of jewelry without certain knowledge of its nature.

Although the jewelry business produces small pieces as products, it generates large revenues across all societies. Since the jewelry business is very profitable, people have tried to take advantage of this, which is why false gemstones appear on the market. False gemstones are either materials produced by humans to be used in place of gemstones, or real gemstones intentionally used to imitate another gemstone. The problem is that people without knowledge of gemology cannot tell the difference between real and false gemstones. Even when all the information regarding a particular stone is available, they may not know whether the information given is true.

The objective of this research is to develop an expert system for gem identification. This expert system is an alternative to gemological expertise, particularly in gem identification. The science of gems is delicate and sophisticated, and because of their similar appearance and characteristics, people have difficulty identifying gems with the naked eye. In fact, false gemstones have been used in the business for years (Lu & Shigley, 2000). This study is significant because it helps people who do not have a background in gemology identify gemstones. Moreover, some stones were invented specifically to trick technology, such as diamond detectors (Shigley, Koivula, York, & Flora, 2000). Consumers should also be aware of people


who are trying to take advantage of unwitting and uneducated customers (Jeffery, 2001). In summary, an expert system such as this helps people identify stones and raises awareness when purchasing gemstones.

This research makes use of stones and their characteristics to build a knowledge base. Most of the stones used in this research are major stones in the business; however, some uncommon stones that can be found on the market are also included. The research embraces not only real gemstones but false gemstones as well, using information that can be collected with the naked eye and traditional non-destructive instruments as the way to identify a stone. Visual appearance and gemological properties can be almost identical for certain gemstones and their counterparts (Lu & Shigley, 2000). Therefore, this expert system is not able to identify a stone that is not in the knowledge base or that cannot be characterized using traditional measurements.

LITERATURE REVIEW
Several methods can be used to identify gemstones. However, gems differ from minerals in terms of value. Geologists or mineralogists can use any method to identify a questioned stone, including destructive or non-destructive sample methods. Unfortunately, gemologists can only use the latter, because the sample might be very valuable (Lu & Shigley, 2000). In general, gem identification uses non-destructive methods, including the testing of the optical and physical properties of a stone to determine its species and genuineness.

Optical Properties Optical properties are the effects of a substance upon light (Tungsupanich, 1998). The properties that can help identify the stone include color, transparency, phenomena, refractive index, and optic character.

Color Some stones have a unique color while others do not. For example, rubies are red, but garnets can be found in many colors. Color depends on the chemical composition of the stone and its atomic structure. Each gem absorbs light waves differently, letting us see gems in a variety of colors; for example, a stone looks green because it absorbs all colors except green, so we see only the reflected color. There are two groups of gems: idiochromatic and allochromatic. Idiochromatic gems owe their color to an element that is an essential component of the stone, without which the stone cannot be formed; examples include garnet, malachite, and turquoise. Allochromatic gems would be colorless unless impurities caused the color; this group includes diamonds, rubies, and sapphires.

Transparency This property shows how well a stone can transmit light. There are five levels of transparency for gems.

1. Transparent: A transparent material transmits light, and objects can be seen clearly even through considerable thickness. 2. Semitransparent: A semitransparent material transmits light, and objects can be seen through the stone, but not clearly. 3. Translucent: A translucent material transmits light, but objects cannot be resolved through it. 4. Semi­translucent: A semi­translucent material hardly transmits light. 5. Opaque: Light cannot pass through the material at all.

Phenomena Special characteristics, called phenomena, can be found in some gems. These phenomena are created from inclusions, physical structures, and differential selective absorption of light in the stone.

1. Chatoyancy (cat's eye) is formed by needle inclusions parallel to each other in only one plane. This phenomenon is commonly found in quartz and chrysoberyl. 2. Asterism (star) is created by needle inclusions like the chatoyancy phenomenon, but intersecting in more than one plane. Star rubies and star sapphires are typically found in the market. 3. Aventurescence is the reflection of light from fine platelets in the stone. This phenomenon can be found in aventurine quartz.



4. Color change results from the absorption of different wavelengths of light. Due to this special characteristic, some stones appear in different colors under fluorescent and incandescent light. For example, alexandrite chrysoberyl appears green under fluorescent light and red under incandescent light. 5. Play of color is found only in opals. Due to the different sizes of the silica crystal patterns in the stone, the interference and diffraction of light create colors in an opal; different colors come from different sizes of silica crystals. 6. Labradorescence appears because of the interference of light in repeated twinning inclusion planes in the stone. This blue sheen can be found in labradorite feldspar. 7. Adularescence is formed by light reflected from feldspar platelet inclusions in the stone. This white or blue sheen can be found in orthoclase moonstone. 8. Iridescence is spectral color originating from the interference of light. This can be seen in fire agate. 9. Orient is a phenomenon found in pearls: the interference and diffraction of light from aragonite layers show the spectrum of colors.

Optic character The following terms describe the specific optic character of a stone.

1. Isotropic: Isotropic stones are singly refractive. They can be either in a non­crystalline (amorphous) or in a cubic crystal system (isometric). 2. Anisotropic: Anisotropic stones are doubly refractive. They can be in any crystal system besides isometric. 3. Uniaxial: Any gemstone that has one optic axis belonging to the tetragonal, hexagonal, or trigonal crystal systems. 4. Biaxial: Any gemstone that has two optic axes belonging to orthorhombic, monoclinic, or triclinic crystal systems.

Refractive index The major key used to identify a stone is the refractive index, or R.I.: the ratio between the velocity of light in air and the velocity of light in the stone (Tungsupanich, 1998).

R.I. = (velocity of light in air) / (velocity of light in the stone) > 1    (1)

Since the density of air is always less than the density of any solid material, the velocity of light in air is always faster than the velocity of light in the stone. Hence, the R.I. is always greater than 1. For example, diamonds have a refractive index of 2.417; that means the speed of light in air is 2.417 times the speed of light traveling in the diamond.
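As a worked example of this definition (added here for illustration, with the speed of light in air approximated by its vacuum value):

c_air = 299_792          # km/s, approximately
ri_diamond = 2.417       # refractive index of diamond, as cited above

v_diamond = c_air / ri_diamond
print(round(v_diamond))  # ~124,035 km/s inside the diamond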

Physical Properties Physical properties include hardness, specific gravity, and magnetism. These properties can help identify the stone. However, physical properties are only a part of gem identification; therefore, whoever wants to identify a gem still needs other methods to confirm the final identification.

Hardness
In gemology, hardness is the resistance of a substance to being scratched. Gemologists use Mohs' scale, a relative scale, as the standard of hardness. The scale runs from 1 (softest) to 10 (hardest). Gems below 7 on Mohs' scale are quite soft; nevertheless, some very soft gems, such as pearls, are fashionable and expensive. Hardness testing is not recommended for finished stones because scratching is destructive, and it is therefore not a preferred method. The test, if needed, should be done at the position that least affects the beauty of the gem.

Specific gravity
Specific gravity, or S.G., is the ratio between the weight of a substance in air and the weight of water of the same volume. Each gemstone has its own specific gravity; however, some gemstones share the same value. There are two ways to find the specific gravity, as explained below.

1. Hydrostatic: A balance is used to find the weight of a substance in air and under water, and the result is used to calculate the specific gravity. This method gives a more accurate specific gravity.
2. Heavy liquids: This method can only approximate the specific gravity by comparing the specimen in liquids of different specific gravities.


The most widely used method is hydrostatic. Because gemstones span a wide range of specific gravities, the hydrostatic method has no such limitation (Tungsupanich, 1998). The formula below shows how to calculate specific gravity using the hydrostatic method.

S.G. = Weight in Air / (Weight in Air − Weight in Water)    (2)

This method provides the best result for the calculation of specific gravity; nevertheless, it has one limitation: it is not suitable for stones of less than half a carat. Because the method rests on a calculation from small weight differences, the less a stone weighs, the less accuracy can be achieved (Liddicoat, 1987).
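A minimal Python sketch of the hydrostatic calculation in Equation 2 (the weights below are made-up illustration values, not measurements from the paper):

def specific_gravity(weight_in_air: float, weight_in_water: float) -> float:
    """Hydrostatic S.G.: weight in air over the weight lost in water."""
    return weight_in_air / (weight_in_air - weight_in_water)

# Example: a stone weighing 5.00 ct in air and 3.75 ct suspended in water.
print(round(specific_gravity(5.00, 3.75), 2))   # 4.0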

Magnetism
Magnetism is the capability of either attracting a magnetic needle or being attracted by a magnet. This property is useful supporting evidence in identifying a gem, especially diamonds. Because of metallic flux inclusions, synthetic diamonds react to a magnet (Goff, 1999). Although not all synthetic diamonds possess this property, no such reaction has been observed in natural diamonds (Gemological Institute of America, 1992).

Expert Systems
Computer programs that perform sophisticated tasks that can typically be done only by human experts are known as expert systems (Benfer et al., 1991). These systems can be used in specialized areas of knowledge, such as engineering, business, and medicine, becoming invaluable resources for organizations. One reason expert systems were developed is that experts' talents are valuable, and it would be unfortunate if such expertise became unavailable (Durkin, 1994). An expert system is composed of the structures shown in Figure 1.

Figure 1: A Typical Expert System Structure

Human experts
A person is considered an expert when he or she has specialized knowledge in a particular field. This knowledge, or domain knowledge, is stored in the long-term memory (LTM) of the expert. While giving advice, the expert obtains facts related to the problem and stores them in his or her short-term memory (STM). The expert then reasons about the problem using facts and data from both LTM and STM. Through this process, the expert can draw a conclusion and give a recommendation. Similarly, in an expert system, the knowledge base, working memory, and inference engine act as the LTM, the STM, and the reasoning process respectively, while the user plays the role of advisee. Human expert and expert system problem solving are illustrated in Figures 2 and 3.


Figure 2: Human Expert Problem Solving

Figure 3: Expert System Problem Solving

Knowledge base
An organized body of the knowledge needed to solve a particular problem is known as a knowledge base. This knowledge can be obtained from an expert, text, journal, or any other document related to the problem, and coded in the knowledge base using a knowledge representation. One of the most widely used models for representing knowledge is rules (Wu, 1993). A rule is an IF/THEN structure that relates the information contained in the IF part (the premise) to the information contained in the THEN part (the conclusion).
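As a concrete illustration of this rule representation (a hypothetical Python sketch; the paper's system was built in LEVEL5 OBJECT, and the rule contents below are invented for illustration):

from dataclasses import dataclass

@dataclass
class Rule:
    """An IF/THEN rule: if every premise fact holds, the conclusion holds."""
    premises: frozenset   # facts that must be present in working memory
    conclusion: str       # fact added to working memory when the rule fires

# Two illustrative rules in the spirit of the gem knowledge base.
rules = [
    Rule(frozenset({"singly refractive", "R.I. = 2.417"}), "candidate: diamond"),
    Rule(frozenset({"candidate: diamond", "attracted to magnet"}),
         "candidate: synthetic diamond"),
]
print(rules[0].conclusion)   # candidate: diamond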

Working memory
Problem facts revealed during a session are stored in working memory (Durkin, 1994). The information for the current problem is entered into working memory, and new facts are inferred by matching that information against the knowledge base. These new facts are then added to working memory, and the matching process continues. Finally, a conclusion is derived and entered into working memory.

Inference engine
The inference engine is the essential part of the expert system: it matches the facts in working memory with the knowledge in the knowledge base and draws conclusions for the problem (Ignizio, 1991). The inference engine derives new information by working with the facts in working memory and the knowledge in the knowledge base. It looks for rules whose premises match the information stored in working memory; the conclusion of each matching rule is then added to working memory, and the search for new matches continues.

Explanation facility
In an expert system, this facility provides information to the user, explaining why or how the expert system drew its conclusion. This information reassures the user and makes the user more confident in answering the questions.


User interface
The user interface allows interaction between a user and the expert system using natural language. Such interaction can be highly evolved and similar to human conversation. The fundamental design requirement of the interface is to ask questions. To obtain reliable information from the user, extra care may need to be taken in the design of the questions, because the user sometimes needs the ability to explore or change information contained in working memory.

METHODOLOGY
Determining System Specifications
This expert system provides information and leads the user through basic gem identification. It is general practice that gemologists should not draw a conclusion and give an identification if they do not have enough evidence for the questioned stone. However, this research is primarily designed for people who have little to no knowledge of gemology; therefore, the system does not follow the restrictions of a gemologist or gem expert.

In this study, the program is designed to ask fundamental questions about the stone being investigated. The system processes the collected answers and looks for a match in the knowledge base. For people who are not gemologically educated, the system is useful in that, when the given information is not enough to identify the gem exactly, it will give all possible gemstones matching the criteria.

Knowledge Engineering
The process of building an expert system, knowledge engineering, is highly iterative. At first the system is partially built, tested, and then modified. Figure 4 illustrates the development of this expert system.

Figure 4: Expert System Development

Knowledge Acquisition
The information or knowledge elicited from the expert is used to provide both insight into the problem and material for the design of the expert system. To gain the needed knowledge about the problem, the knowledge engineer works closely with the expert. However, knowledge can be acquired not only from the expert but also from any source of expertise in the particular problem; technical literature is very helpful for knowledge acquisition.

In the early stages of development, the researchers acquired and studied the general nature of gem identification from appropriate sources, including manuals, texts, encyclopedias, and expert gemologists. The objective was to uncover the key concepts and general problem-solving methods used by gemologists. The following are the results of knowledge acquisition extracted from related resources.

1. The major key to identifying gemstones is the refractive index.
2. The inclusions of a gemstone are the key to supporting and distinguishing the identification of natural versus synthetic gemstones.
3. The criteria for basic gem identification with minimal scientific instruments are color, transparency, phenomena, optic character, refractive index, specific gravity, inclusions, and magnetism.

Information Analysis and System Design
Once the knowledge is extracted, the knowledge engineer can analyze the information and translate it into the most appropriate form for the expert system. This covers the knowledge in gemology, the expert system development tool, and the system design.

Information analysis
The information extracted from the experts and related sources provided the fundamentals for identifying gemstones within the specified scope.

Selection of gemstones: Because this research is designed for non-expert users, most gemstones chosen for the development of this expert system are well-known gems. Some other gemstones found in the marketplace are included as well, along with synthetic gemstones. The 73 gemstones included in this research are categorized into three groups: precious stones, semi-precious stones, and synthetic stones.

1. Precious stones. Only two species are considered precious stones; however, they are distinguished, valuable, and well known. One is diamond and the other is corundum. The researchers separated diamonds into two groups, diamonds and fancy diamonds. It is generally assumed that diamonds are always colorless, when in fact diamonds are found in many colors. In this research, "diamonds" are colorless or near-colorless diamonds, and "fancy diamonds" are diamonds of other colors. Corundum includes all colors of sapphire; however, only the red variety is called a ruby. The rest are specified by color, such as yellow sapphire, green sapphire, and pink sapphire. If the term "sapphire" is used alone, it generally refers to blue sapphire. In this study, rubies and blue sapphires are separated from the others, and the remaining types of corundum are included as fancy sapphires. In addition, the emerald is also considered a precious stone. The emerald is the only variety of beryl included in the precious stone group, because of its beauty, popularity, and rarity.
2. Semi-precious stones. Most of the gemstones in this research belong to this second group. Semi-precious gemstones are typically not as expensive; however, with good size and quality they can be just as valuable as precious stones, depending on the rarity of the stone. In general, people are familiar with most stones in this category; some examples are topaz, opal, pearl, and garnet. Other semi-precious gemstones, such as morganite, kunzite, and iolite, can be uncommon.
3. Synthetic stones. Because there are many types of artificial gemstones in the marketplace, this study includes only the synthetic gemstones widely used in the jewelry business. Some synthetic gemstones were created to be used in place of their natural counterparts, such as synthetic diamonds, synthetic emeralds, and synthetic rubies. Others were invented to imitate natural gemstones, such as synthetic cubic zirconia, synthetic rutile, and strontium titanate. Some are even more sophisticated because they were invented to defeat the technology used to identify them; for example, regular diamond detectors cannot distinguish synthetic moissanite from real diamond. The gemstones selected for the development of this expert system are listed in Table 1.


Table 1: The list of gemstones included in “Expert System for Gem Identification” categorized into three groups.

Precious Stones: Blue sapphire, blue star sapphire, color change sapphire, diamond, emerald, fancy diamond, ruby, sapphire, star ruby, and star sapphire

Semi-precious Stones: Alexandrite, almandite garnet, amethyst, ametrine, andalusite, andradite garnet, aquamarine, beryl, calcareous coral, cat's eye, chrome tourmaline, chrysoberyl, citrine, conchiolin coral, demantoid garnet, grossularite garnet, hawk's eye, hematite, hessonite, iolite, jadeite jade, kunzite, labradorite, lavender jade, malachite, moonstone, morganite, nephrite jade, opal, Paraiba tourmaline, parti-color tourmaline, pearl, peridot, pyrope garnet, quartz, rhodolite garnet, rubellite, spessartite garnet, spinel, spodumene, sunstone, tanzanite, tiger's eye, topaz, tourmaline, tsavorite, turquoise, zircon, and zoisite

Synthetic Stones: Gadolinium gallium garnet, strontium titanate, synthetic blue sapphire, synthetic cubic zirconia, synthetic diamond, synthetic emerald, synthetic moissanite, synthetic ruby, synthetic rutile, synthetic sapphire, synthetic spinel, synthetic star ruby, synthetic star sapphire, and yttrium aluminium garnet

Instruments for gem identification: Quite a few instruments have been developed to confirm identifications; however, advanced instruments are very costly. Using standard or traditional gem-testing instruments and techniques, trained gemologists can identify most gems (Lu & Shigley, 2000). The following are the instruments that were used in this research to collect information for gem identification.

1. 10x triplet-type loupe. One of the most portable and widely used instruments is the loupe (Matlins & Bonanno, 1997). Many inclusions and blemishes cannot be seen with the naked eye but can be viewed with the loupe. Loupes come in different magnifications: 6x, 10x, 14x, 20x, and 24x. A 6x loupe shows an image six times the actual size. Gemologists generally use a 10x loupe, and for gem identification it should be a triplet type. The triplet-type loupe is recommended because it corrects two problems that other magnifiers have: traces of color (chromatic aberration) and visual distortion (spherical aberration), both usually found at the outer edges of the lens. Using higher-power magnification can be more difficult and can result in major identification errors. It is a common misconception that a higher-power lens helps the user see inclusions in a stone better; that is not true unless the user knows how the magnification works and how to focus properly. A 10x loupe has a one-inch field: anything present in a stone at a distance of one inch from the end of the loupe will be in focus. Once something is seen, the loupe can be moved to focus on it more sharply.
2. Refractometer. This instrument is used to find the R.I. Generally, the higher the R.I., the more brilliant the stone. Since most stones have a unique R.I., they can be identified with this instrument; nevertheless, it will not distinguish between natural gemstones and their synthetic counterparts (Matlins & Bonanno, 1997). The refractometer is used most easily with stones that have at least one flat, polished surface. A spot method can be used for cabochons, but it is more difficult. The major shortcoming of most refractometers is that they will not work with very high R.I. stones, such as diamonds, certain diamond imitations, and certain varieties of garnet.
3. Polariscope. The polariscope is a desktop instrument used to detect the optical properties of gems. It is used to determine easily and quickly whether a stone is singly or doubly refractive, and to detect the presence of strain in diamonds and other gems. It is also used increasingly for its value in distinguishing synthetic amethyst from genuine; it is the only affordable instrument currently available that can make this separation.
4. Binocular microscope. This is a desk or countertop instrument used primarily for magnification. The microscope must have both dark-field and bright-field illumination, and a light source at the top of the instrument to reflect light from the stone being examined. For newer types of synthetics, a magnification capability of 60x is required; 30x is enough for other gem identification.
5. Synthetic diamond detector. Synthetic gem-quality diamonds are in the marketplace. Several sophisticated instruments have been developed to quickly separate synthetic from natural diamonds, but these machines are very costly and lack portability, making them impractical in many cases. Fortunately, there are other useful and inexpensive tools, although they are not 100% effective. A special rare-earth magnet (neodymium-iron-boron) has extraordinarily strong magnetism. Because most synthetic diamonds have magnetic properties, a detector based on such a magnet can at least give a response indicating whether a stone may be synthetic. Since some synthetic diamonds do not possess magnetic properties, however, a final conclusion cannot be drawn without further appropriate testing.


Data organization: Once the gems to be included in this expert system were selected, the next step was to organize the data extracted from the expert and related sources. From the knowledge elicitation, the criteria for basic gem identification are color, transparency, phenomena, optic character, refractive index, specific gravity, inclusions, and magnetism. The information for each gemstone was organized by these criteria.
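A minimal Python sketch of such an organization (one record per gemstone, keyed by the eight criteria; the field names and the inclusion list are hypothetical illustrations, with the R.I. taken from the text and the specific gravity being the standard value for diamond):

# One record per gemstone, keyed by the eight identification criteria.
diamond = {
    "color": ["colorless", "near colorless"],
    "transparency": "transparent",
    "phenomena": [],
    "optic_character": "isotropic",
    "refractive_index": 2.417,
    "specific_gravity": 3.52,               # standard value for diamond
    "inclusions": ["crystal", "feather"],   # illustrative only
    "magnetism": False,
}
print(diamond["refractive_index"])   # 2.417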

Expert system development tools: This research used LEVEL5 OBJECT, one of the expert system shells available on the market.

LEVEL5 OBJECT: LEVEL5 OBJECT is a complete expert system development tool that contains the instruments necessary to solve a wide range of problems. LEVEL5 OBJECT has, among other tools, a graphical user interface development editor, forms, and display builders. These instruments help control the overall aspects of the user interface (Information Builders, 1995).

Microsoft Excel: Although LEVEL5 OBJECT has an integrated object-oriented knowledge base, a spreadsheet makes building the knowledge base easier. Because the knowledge base for all the gems is sizable, Microsoft Excel, a conventional spreadsheet, was a very useful tool for constructing the knowledge base of this expert system.

System Design
An expert system minimally consists of a knowledge base, an inference engine, and a user interface (Byrd, 1995). These principal components directly influence the system and its prospective users. For the best gem identification, it is critical that the designed system draw the most accurate data from the user: the more information the user provides, the better the identification the user gets.

Knowledge base: Human experts use conditions to solve problems; for this reason, rules, a similar method, are utilized in the knowledge base of this system. The following example shows how rules, or the IF/THEN method, work.

RULE: IF Premise THEN Conclusion

Rules were constructed from the subject matter of the problem in different procedures. There are more than 100 rules in this system. While the system is working, the answers or facts given by the user are placed in working memory. The inference engine then combines the facts stored in working memory with the rules in the knowledge base, and conclusions are drawn. Figure 5 illustrates the rules methodology.

Figure 5: Rules Method

Inference engine: Since the goal of this expert system is a conclusion, in this case the identification of a stone, the forward-chaining method is used. This method derives new facts using rules whose premises match known facts collected from the user. The process continues until a goal is reached, or until no more rules have premises matching the derived facts. Figure 6 illustrates the forward-chaining method.
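A minimal, self-contained Python sketch of forward chaining (the rule contents are hypothetical; the actual inference engine is the one supplied by LEVEL5 OBJECT):

def forward_chain(rules, facts):
    """Repeatedly fire rules whose premises all hold until nothing new is added."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule's conclusion enters working memory
                changed = True
    return facts

# Hypothetical rules as (premises, conclusion) pairs.
rules = [
    ({"singly refractive", "R.I. = 2.417"}, "candidate: diamond"),
    ({"candidate: diamond", "attracted to magnet"}, "candidate: synthetic diamond"),
]
print(forward_chain(rules, {"singly refractive", "R.I. = 2.417",
                            "attracted to magnet"}))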


Figure 6: Forward­chaining Method

User interface: The user interface window for this program is similar to a general application window. For this program, the user interface can be categorized into three groups:

1. Data input. Data input takes the form of questions, and possible answers are offered in three formats: (1) radiobutton group, (2) checkbox group, and (3) promptbox. For the radiobutton group, the user can select only one answer from the group; for the checkbox group, more than one answer is possible. A number can be placed in the promptbox to answer some questions. Figure 7 shows the three input formats.

Figure 7: Example of Inputs (a) Radiobutton, (b) Checkbox, and (c) Promptbox

2. Explanation facility. This module provides explanations about the questions or answers to the user. Because an expert system for gem identification involves several technical terms, the explanation facility provides the meanings of, and pictures for, those terms to ensure that the user understands and answers the questions correctly. Moreover, this expert system also explains how to obtain expert information, such as the refractive index and optic character. This information is offered in a window separate from the question window; the additional window demonstrates which instruments are used for gem identification. An explanation facility example is shown in the Use of the System section.


3. Data output. The output window shows the best identification, or all possible gemstones derived from the information given by the user. This window also provides a picture for the best identification. In addition, some important information about the selected gemstone is offered to the user. The Use of the System section includes some examples of data output windows.

System Development
All required components (i.e., the knowledge base, the inference engine, and the user interface) were developed in LEVEL5 OBJECT; additional development tools included Adobe Photoshop and Microsoft Excel.

RESULTS
Introduction
Before the user begins the process of gem identification, a title window is shown asking whether the user wants to proceed. Figure 8 illustrates the title window of the expert system for gem identification.

Figure 8: The Title Page

Use of the System
The purpose of this expert system is to identify gemstones. Even if the user cannot give all of the required information for identification, the system will display all possible gemstones that possess the given characteristics. The system guides the user through gem identification by asking a series of questions. For example, the first question, shown in Figure 9, is "What color does the stone have?" Each window is integrated with an information section, in both text and graphics, to help the user answer the question. In Figure 9, the box in the bottom left corner explains why color is important in gem identification. The user can select one or more colors if needed.


Figure 9: The Color Question

Not all characteristics of a stone can have more than one attribute, and the system is designed to be mistake-proof. For example, transparency, the ability of a material to transmit light, takes a single value; using the radio button, the user can choose only one answer, as shown in Figure 10.

Figure 10: The Transparency Question

Some questions can be very technical. The Info button (shown in Figure 11) brings the user to a window providing more information related to the question, as illustrated in Figure 12. If the user does not know the answer, the "Unknown" button shown in Figure 11 is available for each question.


Figure 11: The Optic Character Question

Figure 12: An Example of a “Return” Button

Once the user has gone through the whole series of questions, the best identification of the stone is provided in the next window. Examples of best identifications are shown in Figures 13 and 14. At this point, the user must realize that the identification is based on the given information or facts. Hence, the user can change the given answers before the identification is presented by the system: if the user is not sure about some of the given information, the user can click the "Back" button to go to the desired window and change it. To proceed with the identification, the user must click the "Next" button. The user can also quit or restart the program at any time by clicking the "Exit" or "Restart" button. These buttons are illustrated in several figures throughout the system.


CONCLUSION
Given the scarcity of gemologists and of instructional media in gemology, an expert system for gem identification can help people who have little knowledge of gemology identify a stone. This system not only helps the user identify the stone, but also provides gemological knowledge to those using it.

Figure 13: The Identification Display for “Hematite”

Figure 14: The Identification Display for “Synthetic Diamond”

This expert system for gem identification was developed in a computer­based format. This system includes information for gem identification from many sources in the field of gemology. The researchers collected all information available from experts, textbooks, journals, and other related sources so that this expert system could be used as a reference in this particular subject. This system contains a number of pictures related to the field that make the process more graphic and provide the user a greater understanding of the technical knowledge associated with this subject matter. In addition, the expert system for gem identification can be used as a training tool because the system explains how the instruments should be used and how the characteristics of gemstones are determined.


An expert system can only provide as much knowledge as the knowledge engineer puts into the system. This expert system for gem identification does not have the ability to automatically learn or add knowledge into the system; therefore, the system needs to be maintained to keep it up­to­date.

REFERENCES
Benfer, R., Brent, E., & Furbee, L. (1991). Expert Systems. Thousand Oaks, CA: Sage.
Byrd, T. (1995). Expert systems implementation: Interviews with knowledge engineers. Industrial Management & Data Systems, 95(10), 3-7.
Durkin, J. (1994). Expert systems: Design and development. New York: Maxwell Macmillan International.
Gem identification: Occasional tests. (1992). Santa Monica, CA: Gemological Institute of America.
Goff, R. (1999). Hard science. Forbes, 163(7), 7.
Ignizio, J. (1991). Introduction to expert systems: The development and implementation of rule-based expert systems. New York: McGraw-Hill.
LEVEL5 OBJECT reference guide. (1995). New York: Information Builders.
Jeffery, A. (2001). A gem of an investment. Asian Business, 37(2), 57.
Lee, W., Liu, C., & Lu, C. (2002). Intelligent agent-based systems for personalized recommendations in internet commerce. Expert Systems with Applications, 22(4), 275-284.
Liddicoat, R. (1987). Handbook of Gem Identification. Santa Monica, CA: Gemological Institute of America.
Lu, T., & Shigley, J. (2000). Nondestructive testing for identifying natural, synthetic, treated, and imitation gem materials. Materials Evaluation, 58(10), 1204-1208.
Matlins, A., & Bonanno, A. (1997). Gem Identification Made Easy: A Hands-on Guide to More Confident Buying and Selling. Vermont: GemStone Press.
Nassau, K. (2000). Synthetic moissanite: A new man-made jewel. Current Science, 79(11), 1572-1577.
Nilsson, N. (1998). Artificial intelligence: A new synthesis. San Francisco: Morgan Kaufmann Publishers.
Prasad, R., Ranjan, K., & Sinha, A. (2006). AMRAPALIKA: An expert system for the diagnosis of pests, diseases, and disorders in Indian mango. Knowledge-Based Systems, 19(1), 9-21.
Shigley, J., Koivula, J., York, P., & Flora, D. (2000). A guide for the separation of colorless diamond, cubic zirconia, and synthetic moissanite. The Loupe, 9(3), 8-10.
Tungsupanich, V. (1998). Colored Stones and Deposits. Bangkok: Thansettakij.
Wu, X. (1993). LFA: A linear forward-chaining algorithm for AI production systems. Expert Systems, 10(4), 237-242.
Zhang, Y., Chu, C., Chen, Y., Zha, H., & Ji, X. (2006). Splice site prediction using support vector machines with Bayes Kernel. Expert Systems with Applications, 30(1), 73-81.


GEM AND THE LEPTONIC WIDTH OF THE J(3097)

D. White Roosevelt University, USA

ABSTRACT
Because the J(3097) exists in the "asymptotically free" region of energy space, the leptonic partial widths of the J(3097) associated with e⁺e⁻ and μ⁺μ⁻ decays do not depend in any great measure upon the masses of the respective products. Hence, to a high degree of approximation, the above-mentioned partial widths may be regarded as equal and calculable from the formula Γ_J-ee ≈ Γ_J-μμ = (α/α_s) Γ_J-H, where Γ_J-ee represents the partial width of the J(3097) associated with electron/positron decay, Γ_J-μμ represents the partial width of the J(3097) associated with muon/anti-muon decay, Γ_J-H represents the hadronic width of the J(3097), α represents the fine structure constant = (1/137.036), and α_s represents the strong coupling parameter. Now, GEM (the Gluon Emission Model) has been shown to have yielded highly accurate determinations of the hadronic widths of all vector mesons in their ground states, as well as of α_s over the entire range of energy where vector mesons occur. However, via GEM, Γ_J-ee above is only 2.31 keV, far less than the experimentally determined value of (5.40 ± 0.22) keV reported by the Particle Data Group. In the following work we suggest an ansatz which could plausibly explain the disparity between experiment and the simple application of GEM regarding the leptonic partial widths of the J. GEM predicts that the J decays via a four-momentum transfer from a cc* state (where "c" represents the charm quark and the "*" signifies an anti-quark) to an excited ss* state (where "s" represents a strange quark), followed by a spin-flip of the ss* system upon hadron emission, because the J is too light to have two charm quarks involved in its decay products. We postulate that most, but not all, leptonic decays involve the ss* system spin-flip. We calculate the fraction of leptonic decays stemming from the original cc* system needed to explain the experimental results and find it to be fairly small (1/9). By taking this consideration into account, GEM is seen to yield the hadronic, the leptonic, and, thereby, the full width of the J(3097) essentially exactly.

Keywords: J(3097); Gluon Emission Model; Leptonic Partial Width; Hadronic Partial Width

INTRODUCTION
It is well known that, at least as it pertains to lepton pair production, the J(3097), henceforth referred to simply as "the J", exists in the "asymptotically free" region of energy space, in which the decay rates associated with the two physically possible types of purely leptonic decay products, i.e., electron/positron (signified by "e⁺e⁻" or "ee" herein) or muon/anti-muon (signified by "μ⁺μ⁻" or "μμ") pairs, exhibit essentially no dependence on the masses of the emerging leptons. See, for example, page 78 of the 2004 Meson Table published by the Particle Data Group (PDG (2004)), which states that the branching ratio for J → e⁺e⁻ is (0.0593 ± 0.0010) and that for J → μ⁺μ⁻ is (0.0588 ± 0.0010). Under such conditions, it is logical to assume that the partial leptonic decay width of the J associated with e⁺e⁻ decay, for example, may be theoretically expressed as

Γ_J-ee ≈ (α/α_s) Γ_J-H    (1)

where α represents the fine structure constant = (1/137.036), α_s represents the strong coupling parameter, and Γ_J-H represents the purely hadronic width of the J.

Of course, in order to actually calculate Γ_J-ee on a purely theoretical basis, one needs a reliable theoretical structure that allows for the determination of both Γ_J-H and α_s. We believe such a reliable theoretical structure does exist in the Gluon Emission Model (GEM), as evidenced in D. White, "The Gluon Emission Model for Hadron Production Revisited" (White (2008)). The key elements associated with the decay of the J meson illustrated in the above-mentioned paper are: (1) its basic cc* structure, being too light to allow for decay products involving both the c and c* quarks, necessitates a very rapid transfer of four-momentum to an excited ss* system, so rapid that the form factor for doing so equals essentially one, as do all gluon couplings involved; from there, (2) in accord with the basic precept of GEM, the ss* system experiences a spin-flip (see, for example, Dalitz (1977)), so that the square of the matrix element vital to the J's width calculation associated with the presently considered facet of the decay is proportional to q_s⁴ rather than to q_c⁴. An exactly analogous situation pertains to the Υ(9460) (henceforth denoted "Υ"): for it, the basic bb* system (where "b" represents the bottom quark) transitions very rapidly to an excited cc* system, which then decays via spin-flip, so that the square of the analogous matrix element is proportional to q_c⁴. With such provisos, GEM yields highly accurate width determinations of all known vector mesons and, as well, accurate values of α_s over the entire range of energy from the ρ to the Υ. Our aim, then, is to employ GEM to determine Γ_J-ee on a purely theoretical basis and "go from there", as it were. We will see that a major discrepancy is in evidence between the GEM-calculated result for Γ_J-ee and the value reported by the PDG (PDG (2004)). However, we will also see that a very plausible assumption can be made, the implications of which bring about agreement between theory and experiment.

THE DETERMINATION OF Γ_J-ee VIA GEM
The Gluon Emission Model assumes that vector mesons arise by virtue of quark spin-flip with accompanying gluon emission. Keeping in mind elements (1) and (2) above associated with the decay of the J, from Eq. 4 of White (2008) we can describe the hadronic width of the J via GEM as

Γ_J-H(GEM) ≈ (1960 MeV)(M_ρ/M_J)³ (q_s⁴) [ln(M_J/50 MeV)]⁻¹    (2)

where M = mass of the  meson = 776 Mev, MJ = mass of the J meson = 3097 Mev, and qs = strange quark charge = (­1/3). Hence,

J­H(GEM)  92.25 Kev (3)

In addition, Eq. 9 of White (2008) indicates that α_s at the J energy as determined by GEM is given by

α_s(J-GEM) ≈ 1.2 [ln(M_J/50 MeV)]⁻¹    (4)

Hence,

α_s(J-GEM) ≈ 0.2908    (5)

Therefore, in accord with Eq. 1, we find the partial width of the J associated with electron/positron decay as determined by GEM to be

Γ_J-ee(GEM) ≈ (1/137.036)(1/0.2908)(92.25 keV)    (6)

Hence,

Γ_J-ee(GEM) ≈ 2.31 keV    (7)
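A short numerical check of Eqs. 2 through 7 (a Python sketch of the arithmetic, using only the values given in the text):

import math

M_RHO, M_J = 776.0, 3097.0    # meson masses in MeV
Q_S4 = (1/3)**4               # strange quark charge to the fourth power
ALPHA = 1/137.036             # fine structure constant

gamma_H = 1960.0 * (M_RHO/M_J)**3 * Q_S4 / math.log(M_J/50.0)   # Eq. 2, in MeV
alpha_s = 1.2 / math.log(M_J/50.0)                               # Eq. 4
gamma_ee = (ALPHA/alpha_s) * gamma_H                             # Eq. 1

print(round(gamma_H*1000, 2), round(alpha_s, 4), round(gamma_ee*1000, 2))
# prints values matching the text's 92.25 keV, 0.2908, and 2.31 keV to within rounding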

Denoting Γ_J-ee(PDG) as the published value of Γ_J-ee in PDG (2004), we note

Γ_J-ee(PDG) = (5.40 ± 0.22) keV    (8)

a figure about two and a third times Γ_J-ee(GEM)!

REFINEMENT OF THE Γ_J-ee(GEM) CALCULATION
Whereas all hadronic decays of the J must involve the transition to the excited ss* system as an intermediate state, the J being too light to produce hadronic products involving c and c*, it is still energetically possible for the J's original cc* system to decay directly to leptons, specifically e⁺e⁻ and μ⁺μ⁻. Assuming that a fraction β of the leptonic decays stems from the original cc* system, agreement with experiment via GEM can be obtained if we set the experimentally obtained electron/positron partial width, Γ_J-ee(PDG) = 5.40 keV, equal to the refined GEM calculation of same, denoted Γ_J-ee(GEM)refined, according to:

Γ_J-ee(GEM)refined = Γ_J-ee(PDG) = 5.40 keV = 2.31 keV [16β + (1 − β)]    (9)

in which the factor "16" stems from using q_c⁴ (q_c = (2/3)) in Eq. 2 above (in conjunction with Eqs. 6 and 7) for the fraction β described above, whereas the remainder, (1 − β), goes with q_s⁴; note that q_c⁴/q_s⁴ = (2/3)⁴/(1/3)⁴ = 2⁴ = 16. Solving Eq. 9 for β yields:

β = 0.089 (10)


CONCLUDING REMARKS
If the above ansatz is correct, i.e., if β = 0.089 describes the fractional contribution to the leptonic decays of the J stemming from the J's original cc* system, the full width of the J as per GEM would be given by:

Γ_J(GEM)full = Γ_J-H(GEM) + 2 Γ_J-ee(GEM)refined = [92.25 + 2(5.40)] keV = 103.05 keV    (11)

a figure in excellent agreement with the experiment of Armstrong, who obtained the width of the J as (99 ± 12) keV (PDG (2004)). However, given that β ≠ 0, indicating that some non-zero fraction of the original cc* states "stay behind" to subsequently decay via lepton pair emission while the rest transition very quickly to the excited ss* system, it is reasonable to ask whether this circumstance has any influence on the purely hadronic width of the J. To that end we recast Γ_J-H(GEM) as:

Γ_J-H(GEM) = f (1960 MeV)(M_ρ/M_J)³ (q_s⁴) [ln(M_J/50 MeV)]⁻¹    (12)

where f represents an adjustment to the aforementioned cc* → ss* transition form factor. In Eq. 2 we assumed f = 1. The extent to which f differs from one is basically the extent to which the small fraction of cc* states decaying directly to lepton pairs also mitigates the purely hadronic decay width of the J.

To solve for f, we set the adjusted hadronic width of the J (Eq. 12) equal to the PDG's experimentally determined hadronic width of the J, 80.20 keV (PDG (2004)).

To that end we have:

f (92.25 keV) = 80.20 keV    (13)

Hence, f = 0.8694 (14)

Such mitigation as seen above makes necessary the recalculation of β, as Eq. 9 above must now be rewritten as:

f J­ee(GEM)refined = J­ee(PDG) = 5.40 Kev = 2.01 Kev [16 β + (1 – β)] (15) from which now β → 0.112 ≈ (1/9). Now, as f = 0.8694 ≈ (1 – β), a clear picture as to the structure of the J and its subsequent decay has emerged. Coincidentally or not, β can be expressed as

β ≈ q_s² = (1/9)    (16)

Nearly exact agreement with experiment as to the full width of the J can now be reached if we assume the cc* → ss* form factor to be f = (1 − q_s²) = (8/9), due to q_s² = (1/9) of the original cc* states decaying directly into lepton pairs. With the above assumption we obtain the theoretical hadronic partial width of the J as

Γ_J-H(GEM:Theoretical) = (1 − q_s²)(1960 MeV)(M_ρ/M_J)³ (q_s⁴)[ln(M_J/50 MeV)]⁻¹ = 82.00 keV    (17)

with the theoretical leptonic partial width given by

Γ_J-l(GEM:Theoretical) = (1 − q_s²)(2.31 keV)[16 q_s² + (1 − q_s²)](2) = 10.95 keV    (18)

(See Eq. 9 above; the factor of "2" in Eq. 18 enters to include muon pairs, and the factor (1 − q_s²) obtains because, again, f = (1 − q_s²).) Hence, the full width of the J according to the theoretical picture developed herein is Γ_J-full(GEM:Theoretical) = 92.95 keV, which is essentially a match to the PDG's assessment of same, viz., (91.0 ± 3.2) keV (PDG (2004)). We see, therefore, that, though obtained by means of an iterative process, the structural characteristics of the J as put forth by GEM are completely internally self-consistent with the original root assumptions of the model. The form factor f, for example, is reduced from the assumed value of one by one-ninth only because one-ninth of the assumed original cc* states do not take part in the same decay pattern as the other eight-ninths do. Rather, they "linger behind" to decay exclusively into lepton pairs, thus correctly giving rise to the anomalously large leptonic decay width of the J, compared to the prediction consistent with the seen-to-be-erroneous assumption that 100% of the cc* states convert to ss* excited states in the decay process of the J.
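A short Python check of the final picture (Eqs. 16 through 18; a sketch reproducing the arithmetic in the text):

QS2 = (1/3)**2                  # beta ~ q_s^2 = 1/9
f = 1 - QS2                     # cc* -> ss* form factor, 8/9

gamma_H = f * 92.25             # Eq. 17: hadronic width, keV
gamma_l = f * 2.31 * (16*QS2 + (1 - QS2)) * 2   # Eq. 18: both lepton pairs, keV

print(round(gamma_H, 2), round(gamma_l, 2), round(gamma_H + gamma_l, 2))
# -> 82.0, 10.95, 92.95 keV (cf. the PDG full width of (91.0 +/- 3.2) keV)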

REFERENCES
PDG (2004). "Mesons". Accessed online Nov. 7, 2008, pdg.lbl.gov/2004/tables/mxxx.pdf.


White, D. (2008). "The Gluon Emission Model for Hadron Production Revisited". Journal of Interdisciplinary Mathematics, 11(4), pp. 543-551.
White, D. (1985). "Calculation of the Strong Coupling Constant, αs, from Considerations of Virtual Synchrotron Radiation Resulting in Hadron Pair Emission". International Journal of Theoretical Physics, 24(2), pp. 201-216.
Dalitz, R. H. (1977). "Glossary for New Particles and New Quantum Numbers". Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, 355(1683), p. 601.


FORCE­MODELING THEORY: MELODIC MOTION AND THE REAL­WORLD ATTRIBUTES OF TONES

Michael D. Jones Kirkwood Community College, USA

ABSTRACT Metaphorical descriptions of music utilizing concepts and terminology borrowed from the physical domain, such as space and motion, have existed since the earliest writings on music theory. These metaphorical descriptions of music, however, have been vague by comparison to descriptions of the actual physical phenomena in terms of which they are made. The more precise metaphorical descriptions of music developed in this paper can provide valuable analytical tools to the theorist that may reveal new and interesting aspects of melodic motion. The theory developed in this paper, which I call force­modeling theory, utilizes methods of analysis proper to the description of physical objects and their motions to describe musical objects and their motions. Specifically, Isaac Newton’s second law of motion is mapped into the musical domain to create what I call the “Newtonian force” which is responsible for the motion of tones, possessing mass, through melodic lines. This is the first time these two bodies of knowledge—the real­world attributes of tones and the analytical methods of physics—have been brought together. Through analyses of melodies by J. S. Bach, Antonio Carlos Jobim, and Domenico Scarlatti, tones and their motions are shown to be analogous to physical objects and their motions in several ways. Thus, force­modeling theory not only makes use of methods of analysis appropriate to physical motion to describe and quantify musical motion, but also demonstrates analogies between the physical and musical domains.

Keywords: Melody, Force, Tone Mass, Velocity, Momentum, Physics, Metaphor, Newton.

INTRODUCTION
Metaphorical descriptions of music utilizing concepts and terminology borrowed from the physical domain, such as space and motion, have existed since the earliest writings on music theory. These metaphorical descriptions of music, however, have been vague by comparison to descriptions of the actual physical phenomena in terms of which they are made. In light of recent research by George Lakoff and Mark Johnson (1980, 1998), Mark Turner (1996), and others, a greater understanding of the cognitive functioning of metaphors has been reached. The work of Michael Spitzer (2004) and Roger Scruton (1997) focuses on musical metaphors specifically but, even so, metaphorical descriptions of music remain imprecise. Close examination of this situation is long overdue. More precise metaphorical descriptions of music can provide valuable analytical tools to the theorist that may reveal new and interesting aspects of melodic motion. In addition, if listeners conceive of musical objects, space, and motion in terms of physical objects, space, and motion, then force-modeling theory has the potential to enrich these listener conceptions.

Force­modeling theory makes an important distinction between notes and tones that the reader should keep in mind throughout this paper. In force­modeling theory a melody is considered to consist of a single tone moving from note to note. In the case of a single melodic line, a single tone moves through the various positions either indicated by the notes of a score, if one is present, or through the conceptualized positions one imagines as a work is being heard. This way of thinking of melodic lines has previously been expressed by Ernst Kurth (translated by Lee Rothfarb, 1988) and Victor Zuckerkandl (1956).

Adopting the stance that knowledge from the physical domain can be metaphorically mapped into the musical domain, force-modeling theory asserts that listeners can conceive of tones as being objects. Also, from experience, listeners know that physical objects possess mass. I propose, therefore, that listeners can conceive of tones as objects possessing mass. Furthermore, from their experience with common objects possessing mass, listeners know that physical objects behave according to Newton's second law of motion, which predicts the forces necessary for the motions of physical objects. Knowledge of Newton's second law is intuitive, having been gained from everyday experience with the physical world, and it is integral to the way one experiences and operates in the world. Based on this familiarity with the behavior of physical objects, force-modeling theory also asserts that listeners can conceive of tones as behaving according to Newton's law and can therefore conceive of the forces necessary for the motion of physically conceived tones. Newton's second law is therefore utilized in the musical domain as the basis of a quantitative measure I call the "Newtonian force."

The following presentation includes definitions of tone mass, the time­log(f) plane, tone position within the time­log(f) plane, tone displacement, tone velocity, tone momentum, and finally, Newtonian forces in the melodic line. Each of these topics is addressed separately beginning with tone mass. The first five notes of the subject of J. S. Bach’s C Minor Fugue from Book One of The Well­Tempered Clavier shall serve as an example as the theory is developed in the following sections. This excerpt is shown as Figure 1.

Figure 1: The opening of J. S. Bach’s C Minor Fugue from Book One of The Well­Tempered Clavier.

TONE MASS
Force-modeling theory utilizes the concept of tone mass to model the intuition that low tones are heavier than high ones. One need only consider musical examples such as Saint-Saëns's Carnival of the Animals or Prokofiev's Peter and the Wolf to confirm that composers often map low-pitched instruments onto large, heavy objects—people and animals in these cases—and high-pitched instruments onto small, light objects. Based on the conception that low-frequency tones are heavier than high-frequency tones, tone mass is defined to be an inverse function of a tone's fundamental frequency. This general relation between tone mass and frequency is expressed formally as Equation 1. The value of the constant of proportionality c will be determined shortly.

m = c / f

Equation 1: Tone mass, preliminary version.

In order to determine the precise relation between tone mass and frequency, some note must be chosen to serve as a reference so that the value of the constant c can be determined. The reference note is chosen to be A4, with a frequency of 440 Hz, which is defined to have a mass of 1 tone mass unit (tmu).²⁰

Given the reference note and its mass equal to 1, Equation 1 requires that 1 = c /440. Therefore c = 440 and the precise relation between tone mass and frequency is shown as Equation 2.

m = 440 / f

Equation 2: Tone mass, final version.

Using Equation 2, the masses of the tone at the five notes of the Bach fugue excerpt, shown in Figure 1, are calculated in Figure 2.

²⁰ Strictly speaking, since the units of frequency are Hertz, which are 1/seconds (s⁻¹), the units of tone mass are seconds. This, however, could be misleading, suggesting that tone mass is proportional to note duration, which is not the case. It was decided, therefore, to use the designation "tone mass unit," or tmu, as the base unit of tone mass.


m₁ = 440/f₁ = 440/523.25 = 0.84089 tmu
m₂ = 440/f₂ = 440/493.88 = 0.89089 tmu
m₃ = 440/f₃ = 440/523.25 = 0.84089 tmu
m₄ = 440/f₄ = 440/392.00 = 1.1224 tmu
m₅ = 440/f₅ = 440/415.30 = 1.0595 tmu

Figure 2: Tone mass calculations for the five notes of the Bach C minor fugue excerpt.

The concept of tone mass demonstrated above is lacking in previous theories of musical motion and therefore acts as the catalytic element allowing force-modeling theory to expand upon the idea of motion and forces in melodic lines and bring it into closer alignment with listener perceptions. For convenience, and since it is assumed that most readers do not know the frequencies of all notes of the piano keyboard (the pitch range used in this paper), these frequencies and the resulting masses are presented as Table 1.
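A minimal Python sketch of Equation 2 and the Figure 2 calculations (frequencies as given in the text):

def tone_mass(frequency_hz: float) -> float:
    """Tone mass in tmu: inverse of frequency, normalized so A4 (440 Hz) = 1."""
    return 440.0 / frequency_hz

# The five notes of the Bach excerpt: C5, B4, C5, G4, Ab4.
for f in (523.25, 493.88, 523.25, 392.00, 415.30):
    print(round(tone_mass(f), 5))   # matches Figure 2 and Table 1 to within rounding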

Table 1: Frequencies and tone masses of the eighty-eight notes of the piano keyboard.
Pitch Octave Frequency (Hz) Tone Mass (tmu) | Pitch Octave Frequency (Hz) Tone Mass (tmu)
C 8 4186 0.10511 | D 6 1174.7 0.37456
B 7 3951.1 0.11136 | Db 6 1108.7 0.39686
Bb 7 3729.3 0.11798 | C 6 1046.5 0.42044
A 7 3520 0.125 | B 5 987.77 0.44544
Ab 7 3322.4 0.13243 | Bb 5 932.33 0.47193
G 7 3136 0.1403 | A 5 880 0.5
Gb 7 2960 0.14865 | Ab 5 830.61 0.52972
F 7 2793.8 0.15749 | G 5 783.99 0.56122
E 7 2637 0.16685 | Gb 5 739.99 0.59459
Eb 7 2489 0.17678 | F 5 698.46 0.62995
D 7 2349.3 0.18729 | E 5 659.26 0.66741
Db 7 2217.5 0.19842 | Eb 5 622.25 0.7071
C 7 2093 0.21022 | D 5 587.33 0.74914
B 6 1975.5 0.22273 | Db 5 554.37 0.79368
Bb 6 1864.7 0.23596 | C 5 523.25 0.84089
A 6 1760 0.25 | B 4 493.88 0.89089
Ab 6 1661.2 0.26487 | Bb 4 466.16 0.94387
G 6 1568 0.28061 | A 4 440 1
Gb 6 1480 0.29729 | Ab 4 415.3 1.0595
F 6 1396.9 0.31498 | G 4 392 1.1224
E 6 1318.5 0.33371 | Gb 4 369.99 1.1892
Eb 6 1244.5 0.35355 | F 4 349.23 1.2599
E 4 329.63 1.3348 | Gb 2 92.499 4.7567
Eb 4 311.13 1.4142 | F 2 87.307 5.0396
D 4 293.66 1.4983 | E 2 82.407 5.3393
Db 4 277.18 1.5874 | Eb 2 77.782 5.6568
C 4 261.63 1.6817 | D 2 73.416 5.9932
B 3 246.94 1.7818 | Db 2 69.296 6.3495
Bb 3 233.08 1.8877 | C 2 65.406 6.7271
A 3 220 2 | B 1 61.735 7.1271


Table 1: Frequencies and tone masses of the eighty-eight notes of the piano keyboard (Continued)
Pitch Octave Frequency (Hz) Tone Mass (tmu) | Pitch Octave Frequency (Hz) Tone Mass (tmu)
Ab 3 207.65 2.1189 | Bb 1 58.27 7.551
G 3 196 2.2449 | A 1 55 8
Gb 3 185 2.3783 | Ab 1 51.913 8.4756
F 3 174.61 2.5199 | G 1 48.999 8.9797
E 3 164.81 2.6697 | Gb 1 46.249 9.5136
Eb 3 155.56 2.8285 | F 1 43.654 10.079
D 3 146.83 2.9966 | E 1 41.203 10.679
Db 3 138.59 3.1748 | Eb 1 38.891 11.314
C 3 130.81 3.3636 | D 1 36.708 11.986
B 2 123.47 3.5636 | Db 1 34.648 12.699
Bb 2 116.54 3.7755 | C 1 32.703 13.454
A 2 110 4 | B 0 30.868 14.254
Ab 2 103.83 4.2376 | Bb 0 29.135 15.102
G 2 97.999 4.4898 | A 0 27.5 16

THE TIME-LOG(ƒ) PLANE
The time-log(ƒ) plane is a two-dimensional space that plots time on the horizontal axis and the log of frequency—that is to say, half steps—on the vertical axis. Time is measured, as it is in Newtonian mechanics, in seconds. A portion of the time-log(ƒ) plane is shown in Figure 3.

Figure 3: A portion of the time­log(ƒ) plane.

The time­log(ƒ) plane is similar in many ways to a traditional staff system; the primary difference being that, in the time­log(ƒ) plane, both the vertical units of pitch and the horizontal units of time are consistently spaced. Next, the notes through which the tone travels in the Bach excerpt are shown in the time­log(ƒ) plane.


TONE POSITION IN THE TIME-LOG(ƒ) PLANE
The traditional notes of a score represent the positions (both vertical and horizontal) in the time-log(ƒ) plane through which a tone moves. In other words, notes on a staff become locations in the time-log(ƒ) plane. This is not a one-to-one mapping, of course, since the vertical axis of the time-log(ƒ) plane comprises equidistant half steps and the musical staff does not. The five notes of the Bach excerpt are shown as points in the time-log(ƒ) plane in Figure 4 below.²¹ Such graphic representations of music will hereafter be referred to as tone graphs.

Notation of the vertical position of notes is straightforward, with the vertical axis divided into half steps and the relevant portion of the axis being shown. To notate the positions of notes in time, note durations must be known and a tempo must be established. Note durations are evident from a score but a tempo, unless specifically stated by the composer, must either be chosen to the analyst’s liking or taken from a recording of the work.

Figure 4: Notes of the Bach C minor fugue excerpt plotted in the time­log(ƒ) plane.

In this case, the tempo used in performance by Vladimir Feltsman is chosen.²² Feltsman's tempo is approximately 95 beats-per-minute (bpm). What is desired, however, is a value for seconds-per-beat (spb), found by dividing 60 by the bpm. This conversion is shown as Equation 3.

spb = 60 / bpm

Equation 3: Conversion of beats-per-minute (bpm) into seconds-per-beat (spb).

The resulting value of spb for the Feltsman recording is 0.63158 s. This is the duration in seconds of a quarter note at 95 bpm. The duration of an eighth note is therefore one-half the spb value, or 0.31579 s, and the duration of a sixteenth note is one-quarter the spb value, or 0.15789 s. The next element of the theory to be discussed is the displacement of tones, or the vertical intervals between notes.

²¹ The representation of notes as dimensionless points in the time-log(f) plane is a simplification that disregards the fact that uncertainty exists in the physical measurements of both the frequency and the duration of a tone.
²² Vladimir Feltsman, The Well-Tempered Clavier Book One, Music Masters Classics CD 01612-67105-2, New York: American Academy and Institute of Arts and Letters, 1993.


TONE DISPLACEMENT
Newtonian mechanics describes the change of position of an object in terms of displacement across some interval. When an object moves from point A to point B, its displacement, or the size of the interval, is defined as the difference between its final and initial positions. This definition is represented formally as Equation 4, where the Greek delta (Δ) represents change.

Δx = x₂ − x₁

Equation 4: Displacement in the physical domain.

Δf = f₂ − f₁

Equation 5: Frequency displacement.

Δt = t₂ − t₁

Equation 6: Time displacement.

As a demonstration of the foregoing, the tone in the Bach example undergoes the displacements Δfₙ in frequency and Δtₙ in time, as shown in the tone graph of Figure 5.

Figure 5: Tone graph of the frequency and time intervals in the Bach C minor fugue excerpt.

The frequency intervals shown in Figure 5 are calculated in Figure 6.


Δf₁ = 493.88 − 523.25 = −29.37
Δf₂ = 523.25 − 493.88 = 29.37
Δf₃ = 392.00 − 523.25 = −131.25
Δf₄ = 415.30 − 392.00 = 23.3
Figure 6: Frequency interval calculations (all units are Hz).

The time intervals shown in Figure 5 are calculated in Figure 7.

Δt₁ = Δt₂ = 0.15789
Δt₃ = Δt₄ = 0.31579
Figure 7: Time interval calculations (all units are seconds).

The reader will observe that the reason Δt₁ and Δt₂ are the same, as are Δt₃ and Δt₄, is that they represent the same note values: sixteenth notes and eighth notes, respectively. Having established frequency and time intervals between tone positions, tone velocity may now be defined.

TONE VELOCITY In Newtonian mechanics, average velocity (v) is defined as displacement divided by the time elapsed during that displacement. This can be represented as in Equation 7.

v = Δx / Δt
Equation 7: Average velocity in the physical domain.

 Similarly, the velocity of a tone is defined to be the change in frequency from one note to the next divided by the time interval between notes. This definition is stated formally as Equation 8.

v = Δf / Δt
Equation 8: Tone velocity.

f 29.37 Since distance is measured in hertz and time is measured in seconds, the base unit of tone velocity is the hertz1 ­per­second  v1 C  B    186.02 (Hz/s). The velocities of the tone during its four motions through the Bach excerft1 0.1578929.37 pt are calculated in Figure 8. v1 C  B    186.02 f1 29.37 v C  B t1  0.15789   186.02 1 ft 0.1578929.37 v C  B f1  29.37   186.02 v1 B  C t 2  0.15789  186.02 2 ft1 0.2915789.37 v B  C  2   186.02 2   f 29.37 t2 0.15789 v 2 B  C   186.02 ft 2 0.2915789.37 v B  C f2  131.25  186.02 v 2 C  G t 3  0.15789   415.62 3 ft2 0.13131579.25 v C  G 3    415.62 3 f 131.25 t3 0.31579 v 3 C  G    415.62 ft 3 0.13131579.25 v C  G 3f  23.3   415.62 v 3 G  A b t 4 0.31579  73.783 4   3f 23.3 b t4 0.31579 v 4 G  A    73.783 ft 0.2331579.3 Figure 8: bTone velocity calculations for the Bach C minor fugue excerpt (all units are Hz/s).4 v 4 G  A  f  23.3  73.783 b t 4 0.31579 v 4 G  A  4   73.783   t 0.31579  4 

 188 


As can be seen from the results of these calculations, when the tone moves downward its velocity is negative, and when the tone moves upward its velocity is positive. In other words, if Δf is positive then v is also positive, and if Δf is negative then v is also negative. Also, the first two velocities are equal in magnitude but opposite in sign. This is to be expected since the tone first moves from C5 to B4 in a time of 0.15789 seconds, and then reverses this motion, moving from B4 back to C5 in the same amount of time. This situation, equal velocities with opposite signs, is to be expected whenever a neighboring motion occurs in equal time intervals. Such would be the case in a trill, for example. Graphically, average tone velocity is directed along straight lines connecting notes. This can be seen, along with the velocity values calculated above, in the tone graph of Figure 9.

Figure 9: Tone graph showing tone velocities for the Bach C minor fugue excerpt (all units are Hz/s).

The next element of the theory to be discussed is tone momentum. Having defined tone mass and tone velocity, momentum is first defined as it exists in Newtonian mechanics and then adapted to the musical domain as tone momentum.

MOMENTUM IN THE PHYSICAL AND MUSICAL DOMAINS In Newtonian mechanics, the behavior of a moving object is determined not only by the object's velocity, but also by its mass. It is useful, therefore, to have a composite measure that takes both velocity and mass into account; this measure is momentum. When describing and predicting the motion of a physical object, therefore, it is more useful to know the object's momentum than to know only its velocity. An object's momentum, symbolized by the Greek letter rho (ρ), is defined as the product of its mass and velocity. Similarly, tone momentum is defined as the product of a tone's mass and its velocity. This is expressed formally as Equation 9. The base unit of tone momentum is the tmu-hertz-per-second (tmuHz/s).

ρ = mv
Equation 9: Tone momentum.

 The momentum of the tone during each of its motions in the Bach excerpt is calculated in Figure 10 and shown in a tone graph in Figure 11. In the tone graph, a tone’s momentum, like its average velocity, is directed along straight lines connecting notes.


1  m1v1  (0.84089) (186.02)  156.42 Force-Modeling Theory: Melodic Motion and the Real-World ttributes of Tones 1  m1v1  (0.84089) (186.02)  156.42 A 1  m1v1  (0.84089) (186.02)  156.42   m vv  ((00..8408989089))((186186.02.02))165156.72.42 12 12 12 2  m 2v 2  (0.89089) (186.02)  165.72   m v  (0.89089) (186.02)  165.72 2 2 2 2  m 2v 2  (0.8408989089) (186415.02.62) )165 349.72.49 3  m 3v 3  (0.84089) (415.62)   349.49 3 3 3 3  m 3v 3  (0.84089) (415.62)   349.49   m v  (0.84089) (415.62)   349.49 43  m34 v34  (1.1224) (73.783)  82.814 4  m4 v 4  (1.1224) (73.783)  82.814 Figure 10: Tone momentum calculations for the Bach C minor fugue excerpt (all units are tmuHz/s). 4  m4 v 4  (1.1224) (73.783)  82.814 4  m4 v 4  (1.1224) (73.783)  82.814    

Figure 11: Tone graph showing tone momentums in the Bach C minor fugue excerpt (all units are tmuHz/s).

As with tone velocities, tone momentums are negative when the tone descends and positive when the tone ascends. Notice in the case of momentums, however, that the first and second are not equal in magnitude as were the first two velocities. This is because the second note B4 has a greater mass (0.89089 tmu) than the first note C5 (0.84089 tmu). Since tone mass is not immediately apparent in a tone graph and must instead be inferred from the vertical position of each note, this difference between the momentums of the first two motions is likewise not immediately apparent in the graph. To judge tone momentum from observation of the graph, without calculation, one must take into account not only the slope of the line connecting notes, but also the vertical positions of those notes; a precise value of momentum can only be had by calculation. Having defined tone momentum, changes in momentum and the force necessary to cause these changes are now discussed and defined.

MOMENTUM CHANGE AND FORCE IN THE MELODIC LINE Newton’s second law of motion states that when a force is imposed on an object, there is a change in the object’s momentum proportional to, and in the same direction as, the applied force23. Put another way, the force required to change an object’s momentum is equal to the change in momentum divided by the time interval over which this change occurs. A formal statement of the second law is given as Equation 10.

F = Δρ / Δt
Equation 10: Newton's second law of motion.24

23 "A change in motion is proportional to the motive force impressed and takes place along the straight line in which that force is impressed." Cohen and Whitman, in their translation of Newton's Philosophiæ Naturalis Principia Mathematica, 111, make clear that by "motion" Newton means "momentum."
24 Perhaps a more familiar representation of Newton's second law is F = ma. This form and the form used above can be shown to be equivalent.



Since the base unit of momentum is the tmuHz/s, and force is momentum divided by the base unit of time, the second, the base unit of force in music is the tmuHz/s². Newton's second law can be used to determine the forces necessary for the motion of a tone from one position to the next if the quantities Δρ and Δt can be defined. To begin with, Δρ, the change in momentum of the tone, is defined, like all changes discussed thus far, as the difference between the tone's final and initial momentums. This is expressed formally as Equation 11.

   2  1   Equation 11: Momentum change.

 Regarding the value of t in Equation 10, the time interval over which momentum changes, as the tone in the Bach example moves from the first note (C5) through the second note (B4) to the third note (C5), a change in momentum occurs at the second note. The tone has a downward momentum from the first note to the second, and an upward momentum from the second note to the third, hence the change at the second note. It is unsatisfactory, however, for t in Equation 10 to be equal to zero as this would result in an infinite force being applied to the tone at the second note. In fact, if  t of momentum change were always equal to zero, all forces on the tone at all notes would be infinite, rendering the concept of forces in the melodic line trivial. Therefore, even though the change in momentum is said to occur “at the second note,” there must be some non­zero time interval over which the momentum changes. This time interval is defined as follows.  The change in momentum at a general location, having both preceding and succeeding notes, is defined to occur over a time interval beginning at the midway point between the note where the change is said to occur and the preceding note, and ending at the midway point between the note where the change is said to occur and the following note. This time interval is the average (the arithmetic mean) of the time intervals before and after the note where the momentum change is said to occur. This time interval is expressed formally as Equation 12 and shown in a tone graph in Figure 12.

Δt = ½(Δt₂ + Δt₁)
Equation 12: Average time interval of momentum change.

Figure 12: Tone graph showing average time interval of momentum change.

Having expressions for momentum change (Equation 11) and for the time interval of that change (Equation 12), an equation for the force required to produce that change, that is to say, the force required to move the tone from one note to the next, can be derived using Newton’s second law (Equation 10). This will be accomplished in a series of steps involving substitutions from foregoing equations. The first version of the force law, as it will be called, is found by substituting Equations 11 and 12 for the original variables in Equation 10. The result is shown as Equation 13.



2  1 FN  1 2 (t2  t1 ) Equation 13: Newtonian force law, first version.

 Next, recalling that a tone’s momentum is the product of its mass and velocity (Equation 9), and substituting Equation 9 into the numerator of Equation 13, one arrives at the second version of the force law shown as Equation 14.

FN = (m₂v₂ − m₁v₁) / [½(Δt₂ + Δt₁)]
Equation 14: Newtonian force law, second version.

 Finally, recalling that a tone’s mass is 440 divided by its frequency (Equation 2), and that a tone’s velocity from one note to the next is the change in frequency divided by the time interval over which that change occurs (Equation 8), and substituting these equations into the numerator of Equation 14, one arrives at the final version of the force law shown as Equation 15.

FN = [(440/f₂)(Δf₂/Δt₂) − (440/f₁)(Δf₁/Δt₁)] / [½(Δt₂ + Δt₁)]
Equation 15: Newtonian force law, final version.

The final version of the force law expresses the force required to move a tone from one note to the next entirely in terms of frequencies and time intervals, the most basic elements of tone motion. All three forms of the force law, Equations 13, 14, and 15, are equivalent; any of them will give the same resulting values of force. The simplest version possible will always be used. If, for example, velocities are already known, then Equation 14 can be used. If momentums are already known, then Equation 13 is simplest to use. If neither velocity nor momentum has yet been calculated, then Equation 15 must be used. In the current example, momentums have already been calculated, so in the following force analysis Equation 13 will be used.

A MELODIC FORCE ANALYSIS The forces required to move Bach's tone through the first five notes of the fugue subject can now be found. These force values are calculated in Figure 13 and shown in a tone graph in Figure 14.

FN1 = (ρ₁ − 0) / [½(Δt₁ + 0)] = (−156.42 − 0) / [½(0.15789 + 0)] = −1981.4
FN2 = (ρ₂ − ρ₁) / [½(Δt₂ + Δt₁)] = (165.72 − (−156.42)) / [½(0.15789 + 0.15789)] = 2040.3
FN3 = (ρ₃ − ρ₂) / [½(Δt₃ + Δt₂)] = (−349.49 − 165.72) / [½(0.31579 + 0.15789)] = −2175.4
FN4 = (ρ₄ − ρ₃) / [½(Δt₄ + Δt₃)] = (82.814 − (−349.49)) / [½(0.31579 + 0.31579)] = 1369.0
Figure 13: Newtonian force calculations for the Bach C minor fugue excerpt (all units are tmuHz/s²).
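The full force analysis of Figure 13 can likewise be scripted; the sketch below is ours, and the zero-momentum, zero-interval boundary terms mirror the convention used for FN1 in Figure 13.

```python
freqs = [523.25, 493.88, 523.25, 392.00, 415.30]   # C5, B4, C5, G4, Ab4 (Hz)
dts   = [0.15789, 0.15789, 0.31579, 0.31579]       # motion durations (s)

vels = [(f2 - f1) / dt for f1, f2, dt in zip(freqs, freqs[1:], dts)]
moms = [(440.0 / f1) * v for f1, v in zip(freqs, vels)]   # mass times velocity

# Equation 13, using the boundary convention of Figure 13: the tone starts
# from rest, i.e. momentum 0 over a zero-length preceding interval.
moms_ext = [0.0] + moms
dts_ext  = [0.0] + dts
forces = [(p2 - p1) / (0.5 * (t2 + t1))
          for p1, p2, t1, t2 in zip(moms_ext, moms_ext[1:], dts_ext, dts_ext[1:])]
print([round(F, 1) for F in forces])   # [-1981.4, 2040.3, -2175.4, 1369.0]
```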



Figure 14: Tone graph showing Newtonian forces for the Bach C minor fugue excerpt (all units are tmuHz/s²).

Several observations can be made concerning the results of this force analysis. First, the first and third forces are negative while the second and fourth are positive. This indicates that the tone is forced downward from the first and third positions and upward from the second and fourth positions.

Second, considering the magnitude of the first two forces, one might expect them to be equal in magnitude and opposite in sign, since the tone simply moves from C5 down to B4 and back up to C5. These first two forces are opposite in sign but unequal in magnitude, with the second force, in fact, having the greater magnitude. This is explained by the facts that FN2 not only has to move B4, which has a greater mass than does C5, but also, more significantly, has to reverse the direction of motion of the tone. In other words, FN2 has to stop the initial momentum coming down from C5 and move the more massive B4 back in the opposite direction to C5. These two factors cause FN2 to be greater than FN1.

Finally, FN4 has the smallest magnitude of any force in this example. Given the foregoing explanation of FN2 being relatively great since it reverses the direction of motion at the relatively massive B4, one might expect the magnitude of FN4, which also reverses the direction of motion at the even more massive G4, to be relatively great as well. This tendency is outweighed, however, by two conditions of the final motion from G4 to A♭4. First is the relatively short vertical distance between the fourth and fifth notes, one half step. Second is the relatively large time interval, four sixteenth notes, over which this final motion occurs. These four sixteenth notes can be seen as "0.31579 + 0.31579" in the denominator of the calculation of FN4. These two factors outweigh the factors of reversal of direction and the relatively massive G4 to make FN4 relatively small.

As can be seen from the foregoing analysis, one can form general intuitions about the magnitudes of the forces in a melodic line, but this cannot always be done accurately at a glance. Many factors are involved in determining the magnitude of the forces and only calculations using the force law can reveal the values of these forces and their relations to one another. As one becomes more familiar with the appearances of melodic lines in tone graphs and with the force law, one’s intuitions of the Newtonian force will become more accurate.

The remainder of this paper will investigate force­modeling theory’s interpretation of certain common melodic motions with the goal of demonstrating symmetries, or analogies, between the behavior of objects in the physical domain and those in the musical domain. The types of motion to be considered are repeated notes and tone motion through notes that equally divide the octave.



THE STATIONARY TONE A tone is stationary when notes are repeated. Repeated notes involve no motion of the tone in the frequency dimension. The first two measures of Antonio Carlos Jobim's "One-Note Samba" melody shall serve as an example. This is shown in traditional notation in Figure 15. The tempo used in the calculations is 100 bpm, as performed by Dizzy Gillespie.25 The value of 100 bpm converts to 0.6 spb. The eighth note therefore has a duration of 0.3 s.

Figure 15: Beginning of Jobim's "One-Note Samba" melody.

The velocities of the tone are calculated for each of the seven "motions" in Figure 16.

vₙ = Δfₙ/Δtₙ = (349.23 − 349.23)/Δtₙ = 0, for n = 1, …, 7
Figure 16: Velocity calculations for the "One-Note Samba" excerpt (all units are Hz/s).

Since the frequencies of all notes are equal, all velocities are equal to zero irrespective of the time interval between notes. Since all values of velocity are equal to zero, all values of momentum are also equal to zero. Since this is evident by observation, the calculations are foregone; only the results are shown in Figure 17.

ρ₁ = ρ₂ = ρ₃ = ρ₄ = ρ₅ = ρ₆ = ρ₇ = 0
Figure 17: Momentum calculations for the "One-Note Samba" excerpt (all units are tmuHz/s).

 Since the Newtonian force is dependent upon momentum change, and all values of momentum are equal, all Newtonian forces are equal to zero. This result is shown in Figure 18.

FN1 = FN2 = FN3 = FN4 = FN5 = FN6 = FN7 = 0
Figure 18: Newtonian force calculations for the "One-Note Samba" excerpt (all units are tmuHz/s²).
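These zero results can be confirmed numerically; in the sketch below (our variable names), the durations are deliberately arbitrary, to show that zero frequency change yields zero velocity and momentum, and hence zero Newtonian force, whatever the rhythm.

```python
freqs = [349.23] * 8                         # the repeated note of the excerpt (Hz)
dts   = [0.3, 0.6, 0.3, 0.6, 0.3, 0.6, 0.3]  # any durations work; values illustrative

vels = [(f2 - f1) / dt for f1, f2, dt in zip(freqs, freqs[1:], dts)]
moms = [(440.0 / f1) * v for f1, v in zip(freqs, vels)]
print(vels == moms == [0.0] * 7)   # True: zero frequency change gives zero
                                   # velocity and momentum, whatever the rhythm
```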

 These results are shown in a tone graph in Figure 19.

25 Antonio Carlos Jobim, Wave: The Antonio Carlos Jobim Songbook, CD 314 535 528-2, New York: Polygram Records, 1996.



Figure 19: Tone graph showing the forces involved in the "One-Note Samba" excerpt (all force units are tmuHz/s²).

Even though the results shown in Figure 19 may suggest that nothing is changing in this example—it is true that the frequency of the tone is not changing—there is nevertheless a sense that something is changing as this melody progresses. This sense of change is due to the fact that time is passing as the note is repeated. This notion of the passage of time in music is now examined more closely.

This example provides the opportunity to demonstrate two important features, or attitudes, of force-modeling theory, which might initially seem foreign to some readers. First, the question must be asked whether the tone in the Jobim example does in fact move in time. The answer is either yes or no, depending on one's definition of motion in time. Certainly, much has been written on the conception that music moves in or through time.26 The physical point of view, however, leads to a different conclusion. As an example, if a vase sits on a table, does one normally consider the vase to be in motion of any kind? The answer is no. The vase is obviously stationary in space (at least relative to the table) and this is probably where the questioning of motion ends. Although time is passing as the vase sits stationary on the table, one would probably conclude that the vase is not moving in time. Motion in time, or "time travel," has been observed in nature and occurs whenever an object, or person, is moving through space. Specifically, when an object is in motion relative to another object, time passes more slowly for the first object from the perspective of the second object.27 This fact of nature, or more specifically, of space-time, was predicted by Einstein's theory of special relativity. For everyday rates of motion this slowing of time is so slight that it goes unnoticed. So, though time travel exists in nature, it and Einstein's theory are not applicable to force-modeling theory. Consequently, the force-modeling perspective on the "One-Note Samba" tone is that it does not move in time.

The second attitude of the theory regarding repeated notes is that the rhythm of repeated notes, the time intervals between notes, is also irrelevant to force values. All of the force calculations for the Jobim excerpt would have had the same results given different time intervals. In this way, rhythmic variation is viewed by the theory as being intimately tied to pitch variation. Rhythm only matters in force­modeling theory when there is some variation in frequency from note to note.

There is a similarity between force modeling’s view of repeated notes and that of Heinrich Schenker, arguably the most influential music theorist of the twentieth century. In a Schenkerian analysis, notes that are deemed ornamental to others are “reduced out” or removed from the music to reveal a “deeper” musical structure. This process is repeated until what Schenker calls the “background” structure is reached. In such an analysis repeated notes are often removed at the first level of reduction. Schenker’s theory thus regards repeated notes as adding nothing new to the underlying structure of a melodic line. In a similar way, the Newtonian force proposed by force­modeling theory regards repeated notes as adding nothing new to the structure of the forces driving the melodic line. This similarity to such a widely known and applied theory of music validates force modeling’s treatment of repeated notes. The final topics to be considered are tone motion through notes that equally divide the octave, and the applicability of the physical law of conservation of momentum to the musical domain.

26 See, for example, Cox, "The Metaphoric Logic of Musical Motion and Space," or Jonathan Kramer, The Time of Music (New York: Schirmer, 1988).
27 This is the premise upon which the movie "The Planet of the Apes" is based. In this movie, astronauts have been traveling at a high speed, and upon returning to earth find that, while they have aged little, centuries have passed on earth.



TONE MOTION THROUGH EQUAL DIVISIONS OF THE OCTAVE AND THE LAW OF CONSERVATION OF MOMENTUM When traversing a pitch set that equally divides the octave, such as a fully diminished seventh arpeggio with consistent rhythmic values, a tone moves a consistent number of half steps from each note to the next. If these notes are equally spaced in time, motion through them will form a straight line in a tone graph, though not necessarily in traditional notation. An example of such straight-line motion is an ascending diminished seventh arpeggio played in eighth notes, such as the one from Domenico Scarlatti's Sonata in A Major, K. 322 / L. 483, m. 63. This line is shown in traditional notation in Figure 20. The tempo chosen here is 100 bpm, as performed by Mordecai Shehori.28 This converts to 0.6 spb. The duration of the eighth note is therefore 0.3 s.

Figure 20: Domenico Scarlatti’s Sonata in A Major, K. 322 / L. 483, m. 63.

The velocities of the tone during the four motions of Figure 20 are calculated in Figure 21.

v₁ = Δf₁/Δt₁ = (415.30 − 349.23)/0.3 = 220.23
v₂ = Δf₂/Δt₂ = (493.88 − 415.30)/0.3 = 261.93
v₃ = Δf₃/Δt₃ = (587.33 − 493.88)/0.3 = 311.50
v₄ = Δf₄/Δt₄ = (698.46 − 587.33)/0.3 = 370.43
Figure 21: Velocity calculations for the Scarlatti excerpt (all units are Hz/s).

The four momentums for the Scarlatti excerpt are calculated in Figure 22.

ρ₁ = m₁v₁ = (440/349.23)(220.23) = 277.47
ρ₂ = m₂v₂ = (440/415.30)(261.93) = 277.50
ρ₃ = m₃v₃ = (440/493.88)(311.50) = 277.51
ρ₄ = m₄v₄ = (440/587.33)(370.43) = 277.51
Figure 22: Momentum calculations for the Scarlatti excerpt (all units are tmuHz/s).

From these momentums and time intervals, Newtonian forces for each of the five motions of this example are calculated in Figure 23.
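The constancy of these momentums is easily confirmed numerically; a minimal sketch (our variable names), which differs from Figure 22 only in the rounding of intermediates:

```python
freqs = [349.23, 415.30, 493.88, 587.33, 698.46]   # arpeggio notes of Figure 20 (Hz)
dt = 0.3                                           # eighth note at 100 bpm

moms = [(440.0 / f1) * (f2 - f1) / dt for f1, f2 in zip(freqs, freqs[1:])]
print([round(p, 1) for p in moms])   # [277.5, 277.5, 277.5, 277.5] -- constant,
                                     # so momentum is conserved during the ascent
```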

28 Mordecai Shehori, Scarlatti-Six Sonatas; Beethoven-Sonata, Op. 2, No. 3; Brahms-Paganini Variations, CD 4177, New York: In Sync Laboratories Inc., 1990.


  0 277.47 0 F  1   1849.8 N1 1   0 1 (1t  0) 277(0.34700) M.FN D.1  Jones12 1  12  1849.8 Volume 7 – Fall 2009 1  0 277.47 0 (t1  0) (0.3  0) FN1  12  12  1849.8 1  0 277.47 0 (t1  0) (0.3  0) FN1  12  12  1849.8 (1t00) 277(0277.47.3 .500) 277.47 F  2 21 1  2  1849.8 FN12  11 1 1  0 2  1 277.50 277.47 2 ((tt1 0) t )2 (0.3 (0).3  0.3) FN 2  12 2 1  12  0 2  1 277.50 277.47 (t2  t1 ) (0.3  0.3) FN 2  12  12  0 2  1 277.50 277.47 (t2  t1 ) (0.3  0.3) FN 2  12  12  0 (2t1 t ) 277277.50(.510.32772770..347.)50 F  2 32 2 1  2  0 FN 23  11  11  0 3  2 277.51 277.50 2 ((tt2 tt1 )) 2 ((00.3.300.3.3) ) FN 3  12 3 2  12  0 3  2 277.51 277.50 (t3  t2 ) (0.3  0.3) FN 3  12  12  0 3  2 277.51 277.50 (t3  t2 ) (0.3  0.3) FN 3  12  12  0 (3t2 t ) 277277.51(.051.32772770..350).51 F  2 43 3 2  2  0 FN 34  11  1 1  0 4  3 277.51 277.51 2 ((tt3 t2t)) 2 (0(.03.3 0.03.)3) FN 4  12 4 3  12  0 4  3 277.51 277.51 (t4  t3 ) (0.3  0.3) FN 4  12  12  0 4  3 277.51 277.51 (t4  t3 ) (0.3  0.3) FN 4  12  12  0 (04t 3 t ) 0277 277.51(0.51.3277 0..351) F  2 4 4 3  2  0 FN 45  1  1 1  1850.0 0 4 0 277.51 2 ((0t4 tt)3 ) (02(00..33) 0.3) FN 5  12 4  12  1850.0 0 4 0 277.51 (0  t4 ) (0  0.3) FN 5  12  12  1850.0 0 4 0 277.51 2 Figure 23(0:  Newtonian force calculations for the Scarlatti excerpt (all units are tmuHz/st4 ) (0  0.3) ). FN 5  12  12  1850.0 (004t ) 0(2770  0.51.3) F  2 4  2  1850.0 N 5 1 (0  t ) 1 (0  0.3)  These results are2 4 shown in a tone graph of the excerpt in Figure 24.2    

Figure 24: Tone graph of the Scarlatti excerpt showing Newtonian forces (all units are tmuHz/s²).

Observations of these forces begin with consideration of the physical law of conservation of momentum. In the physical domain, when an object is set into motion in free space it will continue to move with a constant momentum. In other words, momentum is conserved, or unchanging. Remembering that momentum is the product of mass and velocity, if the mass of the moving object is changing, its velocity will either increase or decrease to compensate for the change in mass, keeping the momentum constant. For example, a rocket becomes lighter as it ascends because its fuel is spent. In this case, the rocket's velocity will increase to compensate for its decreasing mass, thus conserving its momentum.

The ascending rocket is a physical analogue to the ascending tone in the Scarlatti example. At each higher note the tone becomes lighter and, as can be seen from Figure 21, its velocity increases. The effect, as can be seen in Figure 22, is that the tone's momentum remains constant during its ascent. Therefore, tones ascending or descending through equal divisions of the octave, moving in straight lines, obey the physical law of conservation of momentum, thus requiring no force for their continued motion. The fact that tones behave according to the law of conservation of momentum demonstrates another analogy between the physical and musical domains as the latter is viewed by force-modeling theory.

Regarding the forces in this example, since Scarlatti’s tone has constant momentum, one would expect the intermediate forces, FN2, FN3, and FN4 to be equal to zero. As can be seen in Figure 23 above, this is the case. FN1 must be positive to start the tone in motion and, even though more music follows, a force FN5 has been calculated as if no more music followed, to demonstrate another analogy between force­modeling theory and physical theory. When a physical object is set in motion by a force, an equal but opposite force is required to stop the object’s motion. Figure 23 shows this to be the case also in the musical domain since FN5 is equal in magnitude but opposite in direction to FN1.

In summary, the foregoing examples not only demonstrate force­modeling theory’s attitude toward certain types of common melodic motion, but also analogies between the physical and musical domains.

CONCLUSION If one structures the abstract in terms of the physical, then music, in all its complexity and interpretations, is one of the richest target domains of physical-to-abstract mappings imaginable. Certainly, the source domain, the physical universe, is equally rich. What has been shown above regarding potential physical-to-musical domain mappings is only a small sampling of what may come to the minds of others. The potential for formal, quantifiable mappings from the physical domain into the musical domain has only begun to be realized here. The same can be said for the attendant physical mode of listening and of thinking about music.

BIBLIOGRAPHY
Kurth, Ernst. Die Voraussetzungen der theoretischen Harmonik und der tonalen Darstellungssysteme. In Ernst Kurth as Theorist and Analyst. Translated by Lee Rothfarb. Philadelphia: University of Pennsylvania Press, 1988.
Lakoff, George and Mark Johnson. Philosophy in the Flesh. New York: Basic Books, 1998.
_____. Metaphors We Live By. Chicago and London: University of Chicago Press, 1980.
Scruton, Roger. The Aesthetics of Music. New York and Oxford: Oxford University Press, 1997.
Spitzer, Michael. Metaphor and Musical Thought. Chicago and London: University of Chicago Press, 2004.
Turner, Mark. The Literary Mind. New York and Oxford: Oxford University Press, 1996.
Zuckerkandl, Victor. Sound and Symbol: Music and the External World. Translated by William R. Trask. New York: Pantheon Books, 1956.

DISCOGRAPHY
Feltsman, Vladimir. J. S. Bach: The Well-Tempered Clavier, Book One. CD 01612-67105-2. New York: American Academy and Institute of Arts and Letters, 1993.
Jobim, Antonio Carlos. Wave: The Antonio Carlos Jobim Songbook. CD 314 535 528-2. New York: Polygram Records, 1996.
Shehori, Mordecai. Scarlatti-Six Sonatas; Beethoven-Sonata, Op. 2, No. 3; Brahms-Paganini Variations. CD 4177. New York: In Sync Laboratories Inc., 1990.



CURRENT METHODS FOR THE TRACE ANALYSIS OF PHENOXY ACID, TRIAZINE AND PHENYL UREA HERBICIDES IN WATER.

Stephen Kariuki and Matthew Edwards Nipissing University, Canada

ABSTRACT Herbicides are widely used throughout the world to aid in the destruction of invasive, competitive and unwanted plant species which, if left untreated, may negatively affect crop growth and production. Herbicides are usually applied to crops in bulk quantities, which inevitably leads to their accumulation in both soils and watersheds. Accurate analytical methods are needed to properly evaluate herbicide levels in all types of water in order to ensure water quality standards are being met. A review of current methods for the determination of three popular herbicidal groups in water has been completed. The herbicides include the phenoxy acid, triazine, and phenyl urea types. An emphasis has been placed on how these herbicides are usually analyzed in various water samples. In particular, the solid phase extraction technique is discussed as a pre-concentration step that makes these herbicides detectable using suitable analytical instruments. The liquid and gas chromatographic methods appear very popular for the analysis of these classes of compounds because they are able to provide low detection limits as well as high selectivity against potential interferents. It is the view of the authors that herbicides should be used with extreme care, and in no greater quantities than required, due to the negative effects these compounds can cause to the environment.

Keywords: Herbicide Analysis, Phenoxy Acid Herbicides, Triazine Herbicides, Phenyl Urea Herbicides

INTRODUCTION Herbicides are used to kill unwanted plants. Ideally they are expected to kill specific targets while leaving the desired crop relatively unharmed. Some herbicides act by interfering with the growth of the weed and are often synthetic "imitations" of plant hormones. Other herbicides, such as those used to clear waste ground, industrial sites, railways and railway embankments, are non-selective and kill all plant material with which they come into contact. Smaller quantities are used in forestry, pasture systems, and the management of areas set aside as wildlife habitat. Herbicides have evolved over time from metal-based inorganic chemicals with general modes of action to highly specific organic chemicals often containing chlorinated or phosphorylated side chains. The rate at which new herbicides are developed and introduced to the market is also rapidly increasing. This increase in use may be attributed to the demands of the biofuel industry, the pressure to produce higher crop yields and the growth of manufacturing in developing countries (Galt, 2008). A major consequence of the rise in herbicide application is the likelihood of trace levels of these organic chemicals appearing in lakes, rivers, drinking water, etc. This is even more likely to happen in some developing countries where pesticide regulations and enforcement are questionable, and herbicides are applied and disposed of at the farmer's own discretion (Haylamichael & Dalvie, 2009). With new, more specific herbicides being developed all the time, many developing countries find themselves with stockpiles of obsolete chemicals that pose serious health risks and that they are incapable of disposing of properly (FAO, 2002). There is even concern that in developed countries natural disasters such as tornados and flooding could disrupt fields saturated with both new and old herbicides, releasing them into the environment (Gunter & Centner, 2000).

It is well known that exposure to herbicides of all classes can lead to detrimental health effects. Some classes are more acutely toxic than others, but in order to properly gauge their risk to both human health and the environment, precise analytical procedures must be employed to quantify their persistence in the environment. Phenoxy acid, triazine and phenyl urea herbicides are among the most popular varieties of herbicides used in the world. Quantification of these chemicals in a variety of water sources has been reviewed with full consideration given to all steps of analysis, including sample extraction, separation, detection and method sensitivity.



PHENOXY ACID HERBICIDES Phenoxy acid herbicides are related to the plant growth hormone indoleacetic acid, whose structure is shown in Figure 1. Indoleacetic acid is a heterocyclic auxin produced in the plant. It has the function of inducing cell elongation and cell division, thereby making the plant grow.

Figure 1: Indoleacetic acid.

Studies of how indoleacetic acid works have led to the development of phenoxy acid herbicides. The structures of some common phenoxy acid herbicides are shown in Figure 2. These include 2,4-dichlorophenoxyacetic acid (2,4-D), 2-methyl-4-chlorophenoxyacetic acid (MCPA), 2,4-dichlorophenoxypropionic acid, and 2-(4-chloro-2-methylphenoxy)propionic acid (MCPP-p). The phenoxy acid herbicides, when sprayed on broad-leaf plants, induce rapid, uncontrolled growth, eventually killing them (Schmidt, 2000). They do so by imitating a plant's natural auxins, a class of plant hormones responsible for many developmental and behavioural processes. They are selective, leaving grass crops such as wheat and corn relatively unharmed. This category of herbicides, first introduced in 1946, remains one of the most used in the world.

Figure 2: Structures of some common phenoxy acid herbicides.

Phenoxy acids are among the most widely used herbicidal classes in the world. Their widespread use is due to their low production costs and high level of selectivity and effectiveness, especially when employed together with benzonitriles, as they commonly are (Tadeo et al., 1996). According to a report by RIAS Inc. (regulatory impacts, alternatives and strategies; RIAS Inc., Oct. 2006), the three sectors in Canada that represent likely over 90% of all usage of phenoxy herbicides in Canada are:

- the wheat and barley markets in the Western Provinces and Ontario, because the agriculture sector is the largest user group of the phenoxy herbicides
- the non-crop industrial sector, as an example of a business sector that uses phenoxy herbicides to manage harmful vegetation
- the lawn and turf sector, as an example of the uses made of phenoxy herbicides by individual Canadians and businesses for private investment, aesthetic and recreational purposes.

According to the above report, the total phenoxy herbicide costs to wheat and barley producers were estimated to be $170 million, split out as $55 million for 2,4-D and $115 million for MCPA. As indicated in Table 1 below, wheat and barley treatment costs were estimated to have been $114 million and $57 million, respectively. Mecoprop-p is offered as a mix with either 2,4-D or MCPA.



Table 1: Annual Treatment Costs ($ millions) for the year 2005 (RIAS Inc., Oct. 2006)

                          Wheat, $   Barley, $   Total, $
2,4-D                       41.6       13.6        55.2
MCPA                        72.1       43.2       115.3
MCPP-p                       9.3*       1.4*       10.7*
All phenoxy herbicides     113.7       56.8       170.5

*Included within 2,4-D and MCPA above.

Due to their large-scale application, high solubility and moderate persistence in the environment (approximately 1 year), this class of herbicides is likely to enter both potable and environmental waters via runoff from fields, and has therefore raised concern with environmental monitoring agencies worldwide (Wu et al., 2005; Thorstensen et al., 2000; Sanchis-Mallos et al., 1998). Phenoxy acids are considered moderately toxic to humans and aquatic organisms (Sanchis-Mallos et al., 1998; Fukuyama et al., 2009). For these reasons, quantities of phenoxy acid herbicides in water need to be monitored, and a variety of methods for this determination are available.

The most common method for the extraction of phenoxy herbicides from water is solid phase extraction. This is a very helpful tool for pre-concentrating these herbicides since, when present in water, their concentrations are usually too low to be detected directly using the available analytical techniques. In this process, the water containing the herbicides to be analyzed is passed through a column containing solid phase material onto which the target analyte sorbs. Vacuum pressure is used to facilitate the movement of the water through the cartridge containing the solid phase. The choice of sorbent material from the variety available depends on the physical properties of the target herbicide. Once the entire water sample has been passed through the cartridge, the cartridge may be washed to remove unwanted chemical species, particularly those that would substantially interfere with the target herbicides. This is often done with a solvent whose polarity does not allow it to wash off the target herbicides. Once the wash step is completed, the compounds of interest are eluted from the cartridge using a minimal amount of a solvent in which they are readily soluble. From here the sample may be concentrated, derivatized, or injected directly into the separation apparatus. The sorbent material most often used for phenoxy acid analysis is C18 or styrene divinylbenzene. Elution of the herbicides from the sorbent is commonly done with methanol due to the high polarity of phenoxy herbicides.

This extraction method has been successful in recovering 80 to 100 percent of the phenoxy acids present in the original sample. After extraction, separation and quantification of the compounds is achieved using high performance liquid chromatography, gas chromatography, or capillary electrophoresis. Detection is most often done using tandem mass spectrometry but can also be completed using UV detection. Limits of detection can range from 3 ng/L to 1 µg/L with high performance liquid chromatography and gas chromatography paired with mass spectrometry having the highest sensitivity.
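Recovery figures such as these are computed by comparing the amount measured after extraction against a known spiked amount; a trivial sketch with purely hypothetical numbers:

```python
def percent_recovery(measured, spiked):
    # Extraction recovery: measured analyte as a percentage of the amount spiked.
    return 100.0 * measured / spiked

print(percent_recovery(measured=85.0, spiked=100.0))   # 85.0 (% of a 100 ng/L spike)
```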

TRIAZINE HERBICIDES The triazine family of herbicides is used mainly to eliminate broad-leaf and grassy weeds in corn, rapeseed and lowbush blueberries, and for general weed control (Loos et al., 2007). Plant death occurs due to the inhibition of photosynthesis (Schmidt, 2000). Their widespread use is coupled with their strong attraction to organic matter. Further, their high solubility in water has led to the detection of these herbicides at the ng/L and µg/L levels, especially in groundwater (Carabias-Martinez et al., 2002).

Atrazine, one of the more popular triazines used, has been identified as a human carcinogen (Nagaruju & Huang, 2007). It does not break down readily (within a few weeks) after being applied to soils of above-neutral pH. Under alkaline soil conditions, atrazine may be carried into the soil profile as far as the water table by runoff from treated fields following rainfall, causing the aforementioned contamination. Canada has a drinking water standard of 0.005 mg/L for atrazine and its degradation products (Health Canada, 2008). Given their increased risk to human health, triazine herbicides require effective and accurate detection methods able to detect herbicides at the part-per-billion level.
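To relate guideline values quoted in mg/L to the ng/L detection limits discussed in this review, a quick unit conversion helps (1 mg/L = 1,000 µg/L = 1,000,000 ng/L); the variable name is ours:

```python
atrazine_guideline_mg_per_l = 0.005              # Canadian drinking water standard
print(round(atrazine_guideline_mg_per_l * 1e6))  # 5000 ng/L, i.e. about three orders
                                                 # of magnitude above a 3 ng/L detection limit
```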

Extraction from water samples is most often done with solid phase extraction, a process that has been explained above. Typical sorbents used are C18 or styrene divinylbenzene. Analytes are most often eluted from the cartridge with methanol and ethyl acetate or a dichlorobenzene/acetone combination. Recovery rates have been reported to be between 75 and 100 percent. Separation is most effectively achieved with high performance liquid chromatography and gas chromatography, but micellar electrokinetic capillary chromatography has also been reported as an effective separation method. Detection via tandem mass spectrometry is most common. Limits of detection as low as 3 ng/L have been accomplished.

PHENYL UREA HERBICIDES Phenyl ureas are another class of herbicides that are widely used. They are likely to be applied to crops such as cereals, cotton, potatoes and strawberries (Ruberu et al., 2000). Similar to triazine herbicides, phenyl ureas inhibit photosynthesis (Schmidt, 2000). Phenyl urea herbicides have been found to be moderately toxic to humans, but laboratory experiments on various animals have shown possible carcinogenic properties (Ruberu et al., 2000). Their degradation in the environment can be rather slow, and levels as high as 100 ng/L have been reported in surface waters (Gerecke et al., 2001). This is also due in part to the relatively large amount of product applied to the crop (Gerecke et al., 2001). The Canadian drinking water standards for the phenyl urea herbicides fluctuate around the 0.15 mg/L concentration level (Health Canada, 2008).

Extraction of this herbicidal class from water samples is also done using solid phase extraction, with a wide variety of sorbents showing effective recovery. Methanol, along with ethyl acetate, acetone and dichloromethane, is commonly used to elute the target analytes from the cartridge. Recovery rates have been reported to be between 70 and 95 percent. Separation is most effectively completed using high pressure liquid chromatography. Detection can be completed via tandem mass spectrometry or diode array detection.

CONCLUSIONS AND RECOMMENDATIONS The current methods available for determining trace levels of phenoxy acid, triazine and phenyl urea herbicides in water are diverse in their analysis, but similar in their ability to achieve accurate, reliable results. Herbicide extraction via solid phase extraction produces recovery rates that are generally very high (about 80% or more). Separation and detection with an HPLC-MS or GC-MS system has been shown to be one of the most widely used and accurate analytical procedures, with limits of detection as low as 3 ng/L.

It is important when studying herbicide quantities in water to be aware of the chemical properties of the compounds in question. This will aid in choosing the correct sorbents and chromatographic procedure to employ for analysis. It is also important to correctly identify peaks in the chromatogram, as it has often been reported that some analytes and their degradation products tend to elute at very similar times. Proper collection and storage of samples, well-thought-out extraction procedures and triplicate runs will help ensure results are accurate.

REFERENCES
Carabias-Martinez, R., Rodriguez-Gonzalo, E., Herrero-Hernandez, E., Sanchez-San Roman, F. J., & Flores, M. G. (2002). Determination of herbicides and metabolites by solid phase extraction and liquid chromatography. Evaluation of pollution due to herbicides in surface and groundwaters. Journal of Chromatography A, 950, 157-166.
FAO. (2002). Prevention and disposal of obsolete and banned pesticide stocks, 6th FAO consultation meeting. Rome.
Fukuyama, T., Tajima, Y., Ueda, H., Hayashi, K., Shutoh, Y., Harada, T., & Kosaka, T. (2009). Allergic reaction induced by dermal and/or respiratory exposure to low-dose phenoxyacetic acid, organophosphorus, and carbamate pesticides. Toxicology, 261, 152-161.
Galt, R. E. (2008). Beyond the circle of poison: Significant changes in the global pesticide complex, 1976-2008. Global Environmental Change, 18, 786-799.
Gerecke, A., Tixier, C., Bartels, T., Schwartzenbach, R. P., & Miller, S. R. (2001). Determination of phenylurea herbicides in natural waters at concentrations below 1 ng/L using solid-phase extraction, derivatization, and solid-phase microextraction-gas chromatography-mass spectrometry. Journal of Chromatography A, 930, 9-19.
Gunter, L. F., & Centner, T. J. (2000). Characteristics of state agricultural pesticide programs in the United States. Journal of Environmental Management, 58, 61-72.
Haylamichael, I. D., & Dalvie, M. A. (2009). Disposal of obsolete pesticides, the case of Ethiopia. Environment International, 35, 667-673.
Health Canada. (2008). Federal-Provincial-Territorial Committee on Drinking Water of the Federal-Provincial-Territorial Committee on Health and the Environment: Guidelines for Canadian Drinking Water Quality.
Loos, R., Wollgast, J., Huber, T., & Hanke, G. (2007). Polar herbicides, pharmaceutical products, perfluorooctanesulfonate (PFOS), perfluorooctanoate (PFOA), and nonylphenol and its carboxylates and ethoxylates in surface and tap waters around lake Maggiore in northern Italy. Analytical and Bioanalytical Chemistry, 387, 1469-1478.



Nagaruju, D., & Huang, S.-D. (2007). Determination of triazine herbicides in aqueous samples by dispersive liquid-liquid microextraction with gas chromatography-ion trap mass spectrometry. Journal of Chromatography A, 1161, 89-97.
Ruberu, S. R., Draper, W. M., & Perera, S. K. (2000). Multiresidue HPLC methods for phenyl urea herbicides in water. Journal of Agricultural and Food Chemistry, 48, 4109-4115.
Sanchis-Mallos, J., Sagrado, S. M.-H., Villanueva Camanas, R., & Bonet-Domingo, E. (1998). Determination of phenoxy acid herbicides in drinking water by HPLC and solid phase extraction. Journal of Liquid Chromatography and Related Technology, 21(12), 1871-1882.
Schmidt, R. R. (2000, February). Classification of Herbicides According to Mode of Action. Retrieved from HRAC: http://www.plantprotection.org/hrac/MOA.html
RIAS Inc. (regulatory impacts, alternatives and strategies), Toronto and Ottawa, Canada. (October 2006). Assessment of the Economic and Related Benefits to Canada of Phenoxy Herbicides.
Tadeo, J., Sanchez-Brunete, C., Garcia-Valcarcel, A., Martinez, L., & Perez, R. (1996). Determination of cereal herbicide residues in environmental samples by gas chromatography. Journal of Chromatography A, 756, 347-365.
Thorstensen, C., Lode, O., & Christiansen, A. (2000). Development of a solid-phase extraction method for phenoxy acids and bentazone in water and comparison to a liquid-liquid extraction method. Journal of Agricultural and Food Chemistry, 48, 5829-5833.
Wu, J., Ee, K. H., & Lee, H. K. (2005). Automated dynamic liquid-liquid-liquid microextraction followed by high performance liquid chromatography-ultraviolet detection for the determination of phenoxy acid herbicides in environmental waters. Journal of Chromatography A, 1028, 121-127.



ASSESSING INFORMATION SOCIETY INDICATORS: THE PUERTO RICO CASE

Edgar Ferrer Turabo University, USA

ABSTRACT Information and communication technologies (ICTs) have recently gained significance in almost every country as an essential mechanism for socio-economic development. At the present time, leaders in government and private organizations from all over the world are aware of the development potentialities involved in exploiting the benefits of ICTs. However, competition in ICTs has become difficult for developing countries due to the digital divide problem. The digital divide has usually been measured by means of statistical indices. This work is intended to provide means for analyzing the digital divide in Puerto Rico. Unfortunately, Puerto Rico is missing from almost every international study or report about the digital divide or e-readiness, because these studies have considered Puerto Rico to be a USA territory. However, technological imbalances between Puerto Rico and the USA are evident. The goal of this work is to propose a framework for assessing and analyzing the domestic digital divide in the Commonwealth of Puerto Rico, as a special case of a self-governing developing territory in commonwealth with a developed country.

Keywords: Digital Divide, Digital Equity, E­Government, ICT Adoption.

INTRODUCTION In this era of information and communication technology, the entities or countries that can access information have competitive advantages over others that cannot. Integrating information and communication technologies (ICTs) for developing e-government, e-commerce, e-learning, and other e-applications is gaining significance in almost every country as an essential mechanism for the development of nations.

At the present time, leaders in government and private organizations from all over the world are aware of the development potentialities involved in exploiting benefits of ICTs. However, the world has to confront an existent reality: the digital divide between developed and developing countries. The digital divide is a complex and dynamic phenomenon which has been considerably studied in the last decade. Around the world, it has been usually measured by means of statistical indices.

As a global issue, the digital divide has been categorized into the domestic digital divide and the international digital divide. The domestic digital divide covers the uneven situations of universal information and communication technology (ICT) access between different socio-demographic groups within a country, while the international digital divide indicates disparities of ICT access and utilization between developed and developing countries (Lu, 2001). A domestic perspective on the digital divide in Puerto Rico is considered in this paper. Unfortunately, Puerto Rico is missing from almost every international study or report about the digital divide or e-readiness because it is considered a USA territory. However, technological imbalances between Puerto Rico and the USA are evident; therefore, a domestic perspective seems to be a suitable approach for this study.

RELATED WORKS The digital divide is a complex and dynamic phenomenon which has been considerably studied in the last decade. Numerous frameworks have been proposed from diverse points of view. In this section some basic and classic approaches are considered; these fundamental approaches have inspired new models (Pittman, J., McLaughlin, R., and Bracey-Sutton, B., 2008). Preliminary studies regarding this work have been presented in (Ferrer, 2009).

In (Kim and Kim, 2001) the authors categorize the digital divide into three levels: media accessibility, information mobilization, and information consciousness. Class, education, age, sex, and region are mentioned as the major elements causing the divides. In (MeIver, 2005) the author views the digital divide from a human rights perspective and suggests enforcing equal opportunities in ICT accessibility. A relation between ICT and economic growth is presented in (Jalava and Pohjola, 2002); the authors present this relation as a major force pushing developing countries toward the new economy. In (Iyer, Taube, and Raquet, 2002) the authors address several underlying rationales for global digital divides and select GDP per capita and Internet penetration for clustering countries into different e-commerce growth types. Corrocher and Ordanini (2002) use six categories, including market, diffusion, infrastructure, competitiveness, human resources, and competition, to measure cross-country digital divides. The Economist Intelligence Unit (EIU)/Pyramid Research instead views e-readiness through six categories: connectivity and technology infrastructure, business environment, e-commerce adoption, legal and regulatory environment, supporting e-services, and social and cultural infrastructure. The International Institute for Management Development (IMD, 2008) uses economic performance, government efficiency, business efficiency, and infrastructure as indicators for measuring world competitiveness. The World Economic Forum (WEF, 2008) adopts two complementary approaches, the Growth Competitiveness Index (GCI) and the Business Competitiveness Index (BCI), to analyze global competitiveness.

ASSESSING THE INFORMATION SOCIETY IN PUERTO RICO Despite the vast number of tools and procedures for assessing the digital divide in developed and developing countries, no consensus exists on a universal method. Unfortunately, existing digital divide assessment methods have limitations with respect to important issues such as the social, political, and geographical context. Some existing frameworks fall short of universal applicability in countries with particular socio-economic and technological settings. Some of these limitations have been reported in the existing literature (Dada, 2006).

This work attempts to develop a particular framework based on established theories and the experience gained by studying the reality of Puerto Rico as a self-governing developing territory in commonwealth with a developed country. The strategic framework reported in this work is conceived in terms of three dimensions: knowledge societies, community participation, and basic technology skills. These dimensions are integrated in a hierarchical structure in which the highest ideal corresponds to the knowledge society, while the foundation is driven by basic technology skills. The three dimensions are linked within the field of the local context, where domestic issues are significant.
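Purely as a reading aid, the following sketch encodes the hierarchy just described: the three dimensions ordered from foundation to highest ideal, each grounded in the local-context fields. The encoding and field names are our own illustrative choices, not part of the published framework.

    # Illustrative encoding of the three-dimensional model (names are ours).
    LOCAL_CONTEXT = ("political", "geographical", "social")

    # Ordered from foundational level to highest ideal, per the hierarchy above.
    DIMENSIONS = (
        "basic technology skills",   # foundation
        "community participation",   # intermediate
        "knowledge societies",       # highest ideal
    )

    for level, dimension in enumerate(DIMENSIONS, start=1):
        print(f"level {level}: {dimension} (context: {', '.join(LOCAL_CONTEXT)})")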

THE THREE-DIMENSIONAL MODEL The framework outlined in the previous section defines our three-dimensional model, which is depicted in Figure 25. The three dimensions and the field of the local context are described in the following subsections.

Figure 25: The three­dimensional model

The Local Context The field of the local context is the core of the model. This field encompasses three important aspects: the political, the geographical, and the social contexts.

The political context: The Commonwealth of Puerto Rico is a self-governing unincorporated territory of the United States. However, technological imbalances between Puerto Rico and the USA have become evident (Ferrer, 2009).


The geographical context: Puerto Rico is among the most densely populated countries in the world. The entire population is concentrated in or close to urban areas.

The social context: Spanish is the dominant spoken language in Puerto Rico. However, measures and assessments are usually made in English.

Knowledge Societies Bridging the digital divide for social inclusion, or universal access (Alampay, 2006), is the great motivation behind efforts toward IT access and use. Moreover, a major concern is that the digital divide is an important factor in the expansion of the knowledge divide. In developed and developing countries alike, knowledge societies (Anderson, 2008) are becoming an important goal to pursue. The world knowledge base increases each year as information sources on the Internet grow. In this sense, society has to be prepared to deal with rapid growth in basic and applied knowledge. With an increasing flow of information, national economies grow more internationalized. There is a social demand for higher levels of education, as technology is reducing the need for many types of unskilled or low-skilled workers.

Community Participation In this framework, ICT is seen as contributing to improved living conditions by enabling knowledge-acquisition mechanisms. Interventions to develop community ICT services in areas where participation through ICT is low carry implicit promises of educational and economic benefits, inserting communities into the knowledge society. The introduction of ICT practices and the encouragement of community participation are seen as essential factors for social inclusion.

Basic Technology Skills An important aspect of the knowledge society is the reduction of digital, social, and economic exclusion through digital literacy. The main assumption here is that access to and use of the Internet and digital technologies are critical for individuals to participate in and derive the benefits of a global knowledge society. A prerequisite for participation, however, is basic literacy. Literacy levels vary greatly across genders, nations, and the world.

An important issue that tends to be hidden in tables listing international digital divide indicators is the existence of in-country disparities in the availability and significance of ICT, and in particular of Internet connectivity. According to the IBM Institute for Business Value (E-readiness rankings 2008: Maintaining momentum), some countries have both highly network-ready communities and communities that are completely cut off from the networked world that the report attempts to assess.

CONCLUSIONS AND FUTURE WORK In this work we presented a framework for investigating the digital divide from a domestic perspective. The work was motivated by the case of Puerto Rico, a self-governing developing territory in commonwealth with a developed country. The framework is based on established theories and the experience gained by studying the reality of Puerto Rico. Future work includes domestic and international comparative analyses and an empirical study examining the proposed model.

REFERENCES
Alampay, E. (2006), "Beyond access to ICTs: Measuring capabilities in the information society", International Journal of Education and Development using ICT, Vol. 2, No. 3, pp. 4-22.
Anderson, R. (2008), "Implications of the information and knowledge society for education", in: J. Voogt and G. Knezek (Eds.), International Handbook of Information Technology in Primary and Secondary Education. Berlin Heidelberg New York: Springer.
Corrocher, N. and Ordanini, A. (2002), "Measuring the Digital Divide: A Framework for the Analysis of Cross-Country Differences", Journal of Information Technology, Vol. 17, pp. 9-19.
Dada, D. (2006), "E-readiness for developing countries: Moving the focus from the environment to the users", Electronic Journal on Information Systems in Developing Countries, Vol. 27, No. 6, pp. 1-14.
EIU: e-Readiness Report. Economist Intelligence Unit (various years), http://www.ebussinessforum.com [Accessed 27.10.2008].
Ferrer, E. (2009), "A Strategic Framework for Analyzing Digital Divide from a Domestic Perspective", in Proceedings of the International Conference on eGovernment and eGovernance, Vol. 1, pp. 11-115, Ankara, Turkey, March 2009.
IMD: The World Competitiveness Yearbook. International Institute for Management Development, Lausanne (various years).


Iyer, L. S., Taube, L., and Raquet, J. (2002), "Global E-Commerce: Rationale, Digital Divide, and Strategies to Bridge the Divide", Journal of Global Information Technology Management, Vol. 5, No. 1, pp. 43-68.
Jalava, J. and Pohjola, M. (2002), "Economic Growth in the New Economy: Evidence from Advanced Economies", Information Economics and Policy, Vol. 14, pp. 189-210.
Kim, M. C. and Kim, J. K. (2001), "Digital Divide: Conceptual Discussions and Prospect", in: W. Kim et al. (Eds.), Lecture Notes in Computer Science, Vol. 2105, pp. 78-91.
Lu, M. (2001), "Digital Divide in Developing Countries", Journal of Global Information Technology Management, Vol. 4, No. 3, pp. 1-4.
MeIver, W. J. Jr. (2005), "A Human Rights Perspective on the Digital Divide: The Human Right to Communicate", Proceedings of the DIAC Symposium, Seattle, Washington.
Pittman, J., McLaughlin, R., and Bracey-Sutton, B. (2008), "Critical success factors in moving toward digital equity", in: J. Voogt and G. Knezek (Eds.), International Handbook of Information Technology in Primary and Secondary Education, pp. 803-817.
The IBM Institute for Business Value (2008), "E-readiness rankings 2008: Maintaining momentum", a white paper from the Economist Intelligence Unit.
WEF: The Global Competitiveness Report. World Economic Forum (various years), http://www.weforum.org [Accessed 17.11.2008].


INFORMATION AND COMMUNICATION TECHNOLOGY IMPACT ON ASIA AND THE PACIFIC

Shahram Amiri, Swen Harke and Ryan Bauer Stetson University, USA

ABSTRACT In 2003, the World Summit on the Information Society (WSIS), under the patronage of the United Nations, arrived at the conclusion that information and communication technologies (ICTs) “under favorable conditions...can be a powerful instrument, increasing productivity, generating economic growth, job creation and employability and improving the quality of life of all.” (WSIS, 2003). The prominence of this fundamental declaration becomes especially apparent when analyzing current and predicting future economic conditions in emerging regions such as Asia-Pacific. In light of the findings put forth by the WSIS, this research considers the prospect of imminent growth in Asia-Pacific and further examines correlations between ICT infrastructure/penetration and socio-economic expansion in this geographical area. The study focuses on four indicating factors that are key to comprehending the depth of this topic: mobile telephone usage, Internet penetration, broadband expansion, and government policies. Throughout the analysis, marked differences among countries within the Asia-Pacific region are addressed and a conclusion is reached based on the aforementioned factors.

Keywords: Information Communication Technology, E­Commerce, Asia Pacific, Socio­Economic Development

INTRODUCTION The ICT infrastructure in Asia is growing steadily and thereby contributing to the emergence of web site development for commercial, personal, network, and other purposes. As stated in the Information Society Statistical Profiles 2009, Asia-Pacific has become an ICT world leader during the past decade. “At the end of 2007, Asia and the Pacific accounted for 42 per cent of the world’s mobile cellular subscriptions, 47 per cent of the world’s fixed telephone lines, 39 per cent of the world’s Internet users, 36 per cent of the world’s fixed broadband subscribers, and 42 per cent of the world’s mobile broadband subscriptions.” (International Telecommunication Union, 2009). The implications for economic development are significant. For example, ICT expansion increases revenues for advertising through online portals and channels (Asian e-commerce holds great growth potential, 2004).

Share of Internet users in Asia and the Pacific

Source: ITU World Telecommunication/ICT Indicators database.

Moreover, it has to be noted that high ICT growth is often coupled with significant increases in gross domestic product, as is the case in China and Viet Nam, which recorded GDP growth rates of 10.3 and 7.8 per cent in 2007, respectively (World Bank, 2009).


The International Telecommunication Union acknowledges that despite “high­growth and record absolute numbers, penetration rates of all ICTs in the Asia­Pacific region were lower than those of the world.” (International Telecommunication Union, 2009). Thus, the challenge of making benefits available to the broader population remains.

ASIA-PACIFIC REGION Based on World Bank income data, Asia-Pacific can be divided into four income groupings. The low-income group encompasses Afghanistan, Bangladesh, Cambodia, D.P.R. Korea, Myanmar, Nepal, Pakistan, Papua New Guinea, Solomon Islands, and Viet Nam. The lower-middle-income group includes Bhutan, China, India, Indonesia, Iran (I.R.), Kiribati, Maldives, Marshall Islands, Micronesia, Mongolia, Philippines, Samoa, Sri Lanka, Thailand, Tonga, Tuvalu, and Vanuatu. The upper-middle-income group encompasses Fiji, Malaysia, and Nauru, whereas the high-income category includes Australia, Brunei Darussalam, Hong Kong (China), Japan, Korea (Rep.), Macao (China), New Zealand, and Singapore. Research has shown that significant differences in ICT development among these groups prevail, causing unequal opportunities for socio-economic growth. On the one hand, differences in penetration as well as overall ICT infrastructure are caused by a country’s income level; on the other hand, they are driven by government policy, which either encourages or curtails ICT development.

In general, the development of ICT in Asia is a fast-changing, emerging phenomenon that has yet to reach its full growth potential. Because ICT encompasses technological advancements and electronic activities, improvements are emerging rapidly throughout Asia. The Republic of Korea accounts for the highest share of ICT employment in Asia, at more than 10% in 2003. Other countries with a large ICT workforce are India, the Philippines, and Sri Lanka (United Nations, 2007). The Asian economy is steadily gaining competitive advantage by increasing its market share of global ICT exports to various international markets. China and India are the largest exporters of ICT services and goods; China surpassed the United States as the world’s leading ICT exporter in 2004 (United Nations, 2007).

KEY DETERMINING FACTOR – MOBILE TELEPHONY During the early years of the 21st century, Asian countries adopted mobile telephony and the Internet as major necessities in lifestyle and trade. The integration of mobile telephony rests on the distribution of mobile phones for calling, text messaging, and Internet browsing. In 2007, mobile phone coverage in Asia continued to expand substantially, especially within the developing nations (United Nations, 2007).

Mobile cellular growth in Asia and the Pacific and in the world, 2000­2007

Source: ITU World Telecommunication/ICT Indicators database.

The Information Society Statistical Profiles 2009 states that “from 1997 to 2007, mobile cellular subscriptions in the region have grown at an impressive compound annual growth rate of 33 per cent, in line with the overall trend in the world of 31 per cent CAGR.” (International Telecommunication Union, 2009). China and India alone have added almost 700 million mobile cellular subscriptions in 2007, but countries such as Indonesia, Thailand, Pakistan, the Philippines, Japan, and Bangladesh have also contributed to the significant growth. However, the graphs in this section show that while Asia and the Pacific leads in terms of absolute numbers, it still lags behind in terms of penetration. Only 36.6 out of 100 inhabitants have a cell phone subscription in Asia and the Pacific, versus 50.1 in the world. A closer look at the data reveals that great inequalities exist between economies with different income levels (International Telecommunication Union, 2009). In upper-middle and high-income countries, penetration is 90.5 per 100 inhabitants, whereas in the low-income group it is only 23.7.
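For readers unfamiliar with the metric, the compound annual growth rate quoted above is the constant yearly rate that carries a series from its start value to its end value. A minimal sketch follows, with endpoint figures that are illustrative rather than taken from the ITU data:

    # CAGR: the constant annual rate linking a start value to an end value.
    def cagr(start_value, end_value, years):
        return (end_value / start_value) ** (1 / years) - 1

    # Illustrative endpoints: a roughly 17-fold rise over the ten years
    # 1997-2007 corresponds to a CAGR of about 33 per cent.
    print(f"{cagr(100e6, 1700e6, 10):.1%}")  # -> 32.8%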

Mobile cellular penetration in Asia and the Pacific and in the world, 2007

Source: ITU World Telecommunication/ICT Indicators database.

The substantial growth of cellular use in Asian countries will contribute to an increase in business and commercial transactions using mobile technology. In most countries in Asia, mobile phones are already being used as payment methods, business communication tools, and information resources.

KEY DETERMINING FACTOR – INTERNET Another strong indicator of Asia’s fast growth is Internet penetration. As the popularity of the Internet continues to flourish all over the world, developing nations in Asia are catching up with the trend. Already in 2004, Asia and the Pacific surpassed the rest of the world by gaining the largest share of Internet users. By the end of 2007, the region accounted for 551 million Internet users, or 39 per cent of the world total. From 2000 to 2007, the annual growth rate was 24 per cent (compared to 19 per cent globally), driven by China, India, and Japan. As with mobile telephony, Asia-Pacific lags behind in terms of Internet penetration. In 2008, India and China recorded only 6.9 and 22.3 Internet users per 100 inhabitants, respectively. By contrast, higher-income nations such as the Republic of Korea and Singapore recorded 77.5 and 70.0 users, respectively. There, Internet penetration is even higher than in the United States, which is in part attributed to economic policies in those countries that focus on electronics and telecommunications. Interestingly, the G8 nations Japan and the United States show a decline in Internet penetration (International Telecommunication Union, 2009).

Internet users per 100 inhabitants

Country              2005   2006   2007   2008   % change 05-06   % change 06-07   % change 07-08
Japan                66.8   68.5   68.9   68.6        2.5%             0.5%            -0.4%
China                 8.4   10.4   16.0   22.3       23.8%            53.8%            39.4%
China (Hong Kong)    50.0   52.3   55.0   56.7        4.6%             5.1%             3.1%
India                 5.4    5.4    6.9    6.9        0.0%            28.3%             0.0%
Republic of Korea    68.4   70.4   76.3   77.5        2.9%             8.4%             1.5%
Singapore            39.8   39.2   70.0   70.0       -1.5%            78.5%             0.0%
China (Taiwan)       58.0   57.9   64.5   65.7       -0.2%            11.3%             2.0%
United States        65.7   68.5   72.5   71.2        4.3%             5.8%            -1.7%
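The % change columns in the table are simple year-over-year ratios; a minimal sketch of the computation (two rows shown; small differences from the published column can arise because the table entries are rounded):

    # Year-over-year percent change for an annual penetration series.
    rows = {
        "Japan": [66.8, 68.5, 68.9, 68.6],
        "China": [8.4, 10.4, 16.0, 22.3],
    }
    for country, series in rows.items():
        changes = [(b - a) / a for a, b in zip(series, series[1:])]
        print(country, [f"{c:+.1%}" for c in changes])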


Overall, the upper-middle and high-income countries within Asia-Pacific have much higher penetration than the lower- and middle-income groups. United Nations statistics indicate that the differences in Internet user penetration correspond to differences in the proportion of households with a computer (International Telecommunication Union, 2009).

KEY DETERMINING FACTOR – BROADBAND The International Telecommunication Union has highlighted that “broadband-based applications have the greatest impact on people, society and businesses. Broadband makes the Internet always available at a fast speed.” (International Telecommunication Union, 2009). The implications are obvious: companies are able to run websites 24 hours a day, seven days a week, while delivering products and services in real time. Through broadband, companies can make full use of virtual connections for business activities, business-to-business portals, and other online services. Moreover, broadband contributes to a better Internet experience for users, making it an essential component of ICT. Today, four of the top ten nations in household broadband access worldwide are from Asia and the Pacific, with South Korea taking the global lead not only in broadband access but also in fiber-optic connections, followed by Hong Kong and Japan.

Fixed broadband Internet subscribers in Asia and the Pacific, 2007

Source: ITU World Telecommunication/ICT Indicators database.

Noteworthy is the drastic “broadband divide” illustrated by the graph from the Information Society Statistical Profiles 2009. Upper-middle and high-income economies have much higher broadband penetration than lower-income economies, and despite the “impressive advances of Asia and the Pacific in broadband technologies, the broadband divide remains striking and the fixed broadband gap is hardly shrinking.” (International Telecommunication Union, 2009).

Since the adoption of broadband by businesses contributes to the fast growth of the Asia-Pacific market, lower-income nations are at a disadvantage with respect to economic development.

KEY DETERMINING FACTOR – GOVERNMENT POLICIES The last, and perhaps most decisive, factor affecting the market growth of Asia-Pacific is governmental regulation to support the growth of ICT infrastructure. An examination of the relationship between the ICT Development Index (IDI) and GNI per capita has affirmed that a strong correlation between ICT levels and GNI exists. Thus, it is not surprising that “the top ten 2007 IDI economies in Asia and the Pacific comprise all of the region’s high-income economies, topped by the Republic of Korea.” (International Telecommunication Union, 2009). Many governments and politicians in Asia have recognized that ICT development is a key contributor to a nation’s financial success, as stated in the 2003 WSIS Geneva declaration. The government of India is “recognizing the potential of ubiquitous Broadband service in growth of GDP and enhancement in quality of life through societal applications including tele-education, tele-medicine, e-governance, entertainment as well as employment generation…” (Government of India, 2004). Dr. Lim Keng Yaik, former Minister of Energy, Water and Telecommunications in Malaysia, has stated that “high-speed broadband, which a few years ago was considered a luxury is today a necessary part of the industrial, commercial and lifestyle landscape.” (Yaik, 2006).
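A correlation claim of this kind can be checked with a simple Pearson coefficient across economies. Here is a minimal sketch, using made-up placeholder figures rather than ITU data:

    # Pearson correlation between IDI scores and GNI per capita.
    # Values below are illustrative placeholders, not ITU figures.
    from statistics import correlation  # Python 3.10+

    idi = [2.1, 3.0, 4.2, 5.5, 6.8, 7.3]      # ICT Development Index scores
    gni = [1.2, 2.5, 6.0, 14.0, 28.0, 35.0]   # GNI per capita (thousand USD)

    print(f"Pearson r = {correlation(idi, gni):.2f}")  # close to 1 for these data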


The role that governments play in ICT development cannot be stressed enough. Especially in the region’s high-income economies, the diffusion of broadband has been encouraged through national broadband policies and plans, as reflected in the high broadband penetration rates in those countries. Most recently, in April 2009, the Australian Prime Minister announced that more resources will be devoted to the country’s national broadband policy (Telecom TV, 2009).

IDI and GNI per capita, 2007

Source: ITU.

Another example of policy toward ICT development is Japan’s zero-broadband plan, which aims to give the entire country broadband access by March 2011 (BBC News, 2009). China, India, and Viet Nam’s recent deployments of IMT-2000/3G networks are also promising developments (International Telecommunication Union, 2009). While bridging the divide in the Asia and the Pacific region remains a major task for national and regional policy-makers, the ITU in 2008 listed several steps that governments can take to foster ICT development: 1) establishing targeted broadband policies; 2) awarding spectrum for mobile broadband and fixed wireless technology; 3) encouraging new broadband operators and stimulating competition; 4) creating investment incentives for the broadband industry; 5) using universal service funds to distribute broadband to rural and underserved areas; and 6) promoting the development of online e-government services and other local content to minimize dependence on expensive international connectivity and encourage more citizens to access relevant services and applications (International Telecommunication Union, 2009).

Additionally, various jurisdictions in Asia-Pacific, such as China, Hong Kong, Sri Lanka, and the ASEAN countries, are enacting cyber legislation not only to promote e-commerce but also to encourage freedom of trade. These approaches maximize technological development by decreasing restrictions on online trade within global markets. The Association of South East Asian Nations (ASEAN) has passed a legal framework called the ASEAN E-Commerce Project, which concentrates on the ten ASEAN members and implements a harmonized legal infrastructure for e-commerce (United Nations, 2007). The project focuses on removing barriers to online activities for consumers and businesses. This legislation is also geared toward minimizing inconsistencies and duplications to create a legal platform for online business.

Furthermore, China has enacted the Signature Law, which recognizes various electronic forms of signature alongside handwritten ones (Srivastava & Thomson, 2007). Sri Lanka enacted the Electronic Transaction Act in 2006, which removes barriers to electronic transactions and promotes commerce (Kariyawasam, 2008). Under this act, the government promotes the elimination of barriers; the reliability of commercial forms, documents, and records; and the attainment of public trust and confidence (Kariyawasam, 2008). In Hong Kong, the government enacted the Electronic Transactions Ordinance 2000 to help promote e-business. Under this legislation, the government aims to correct legal impediments to electronic transactions, provide security for electronic transactions, adopt neutral approaches to cope with rapid technological change, and minimize restrictions in order to develop trade in the private sector (Kariyawasam, 2008).


CONCLUSION Throughout the research, two key factors have become apparent: on one hand, Asia and the Pacific have experienced exponential ICT growth; on the other, significant disparities remain among the Asian nations. “Despite huge increases and record numbers, ICT penetration in the region remains relatively low, below the world average, given its large population, difficult geographic conditions and major differences in income.” (International Telecommunication Union, 2009). Individual access to computers and the Internet is especially limited for citizens in low-income economies. However, it can be determined that countries that have actively pursued an aggressive ICT expansion policy, regardless of income level, have succeeded in spreading the benefits of ICT more quickly than others. Among the countries with higher-than-expected ICT levels relative to their income are Viet Nam and China. In both countries, more than 20 per cent of citizens are Internet users, as opposed to India and Pakistan, where less than 16 and 11 per cent of the population, respectively, use the Internet. Coincidentally, Viet Nam and China also have literacy rates of over 90 per cent, as compared to Pakistan and India with literacy rates below 67 per cent. Generally speaking, the strong link between ICT uptake and growth potential among the regions in Asia and the Pacific further supports the view that ICTs indeed contribute to socio-economic development.

Finally, this study has shown that Asia-Pacific as a whole has both the largest share of ICTs worldwide and high growth rates in ICT development. Combined, these two aspects indicate that the region will have enormous economic potential in the future, with high-income countries holding a clear advantage. Since the cost of ICT remains a critical barrier in the region’s low-income economies, those governments will depend heavily on foreign aid to foster the expansion of ICT infrastructure. For all countries, however, sustainable ICT growth and the subsequent positive impact on the socio-economic situation can only be achieved if a) favorable government policies and regulations are present and b) the overall political environment remains stable, encouraging free-market development and ensuring that the long-term potential for ICT-related commerce is backed by strong buying power.

REFERENCES
World Summit on the Information Society (2003). Declaration of Principles. Retrieved July 6, 2009, from International Telecommunication Union Web site: http://www.itu.int/wsis/docs/geneva/official/dop.html
International Telecommunication Union (2009). Information Society Statistical Profiles 2009: Asia and the Pacific. Retrieved June 26, 2009, from http://www.itu.int/ITU-D/ict/material/ISSP09-AP_final.pdf
Asian e-commerce holds great growth potential. (2004, April). Market: Asia Pacific. Retrieved January 28, 2009, from Business Source Premier database.
World Bank, Data & Research (2009). Retrieved July 1, 2009, from World Bank Web site: http://econ.worldbank.org/WBSITE/EXTERNAL/EXTDEC/0,,menuPK:476823~pagePK:64165236~piPK:64165141~theSitePK:469372,00.html
United Nations (2007). Information Economy Report 2007-2008. Science and Technology for Development: The New Paradigm for ICT, xxii, xxv-xxviii, xxxv-xxxvi, 21-35, 42-46, 49-53, 70-74, 85-87, 103. Retrieved February 27, 2009, from http://www.unctad.org/en/docs/sdteecb20071_en.pdf
Fitzpatrick, M. (2009, May 26). Broadband goes big in Japan. Retrieved July 6, 2009, from BBC News, Japan Web site: http://news.bbc.co.uk/2/hi/technology/8068560.stm
Kariyawasam, K. (2008, March). The growth and development of e-commerce: An analysis of the electronic signature law of Sri Lanka. Information & Communications Technology Law, 17(1), 51-64. Retrieved March 31, 2009, doi:10.1080/13600830801889301
Srivastava, A., & Thomson, S. (2007, June). E-Business Law in China: Strengths and Weaknesses. Electronic Markets, 127. Retrieved April 27, 2009, doi:10.1080/10196780701296121


THE IMPACT OF E­TECHNOLOGY ON THE HEALTHCARE MANAGEMENT ENVIRONMENT

Ralph L Harper and Wayne Brown Florida Institute of Technology, USA

ABSTRACT The objective of this paper is to provide evidence of the significant impact e-technology has had on the healthcare environment from a management perspective. Understanding the methodology of teaching is vital when one considers how quickly technology is changing. In the healthcare environment, competency in utilizing these rapid improvements in technology affects the quality of care and therefore patient outcomes. How e-technology influences an individual working in or receiving services from the healthcare system depends on the level of the healthcare organization at which that individual performs patient care functions. This article clarifies how four major levels of a healthcare facility are affected by e-technology: the medical staff, the administrative staff, the clinical staff, and the patient. Evidence will show that there are common variables consistent within each level of patient care, not only throughout the healthcare environment but across the industrialized workforce as a whole.

Keywords: E­Technology, Healthcare Organization, Clinical Staff, Patients, Medical Staff, Administrative Staff

INTRODUCTION In order to gain a clear understanding and appreciation of the impact of e-technology in an acute healthcare environment, one must first understand how information is best received and, therefore, how we learn. Unless you have worked in an acute care setting, you can only imagine how quickly the situation can go from taking a 15-minute break to a life-or-death medical intervention. In this fast-paced setting, skill sets are typically called upon and utilized in sequence, in compliance with state-mandated educational requirements. Teaching in a long, drawn-out mode of presentation does not result in clarity or retention of information.

Healthcare facilities across the nation are admitting more patients, increasing demands on doctors, and facing nursing shortages; they cannot afford to have their employees spend more time on federally mandated training than is necessary. They need to save both time and money while ensuring that their staff obtain the required certifications.

Before April 2002, Gunderson Lutheran's Safety Department staff taught Occupational Safety & Health Administration (OSHA) courses by traveling throughout the network; each class lasted two hours, and instructors taught with PowerPoint presentations. There was minimal Web-based delivery of OSHA courses; most employees had to sit through instruction until teachers gave a test at the end of class. To meet the healthcare network's training needs, in January 2002 Fernandez selected a Learning Management System (LMS). The LMS cut OSHA training time at the healthcare network by more than 50 percent. Most employees were able to finish each two-hour OSHA course in less than 30 minutes by taking the instruction online via the LMS. By the end of 2002, Fernandez reported that 80 percent of the workforce had used the system to complete the required OSHA courses. By delivering OSHA courses online, “Gunderson Lutheran saved $700,000 in employees' time and instructors' fees and travel costs during the first six months of using the LMS. For example, each of Gunderson Lutheran's 380 doctors was now spending up to seven additional billable hours per year with patients.” (Health Management Technology, 2003).
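A quick back-of-the-envelope check on the physician-time figure quoted above (only the 380 doctors and seven hours come from the source; the hourly billing value is a made-up placeholder):

    # Recovered physician time and an illustrative dollar value for it.
    doctors, extra_hours_per_year = 380, 7
    physician_hours = doctors * extra_hours_per_year   # 2,660 hours/year

    assumed_billing_rate = 150.0   # USD/hour, hypothetical
    print(physician_hours, physician_hours * assumed_billing_rate)
    # -> 2660 399000.0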

At the core of clinical instruction and e-technology is the problem-solving and decision-making process. This methodology provides a mechanism to assess, diagnose, and treat in sequence, avoiding missed steps. Baumann defines problem solving as the search for the single “correct” solution to a problem; in contrast, he defines “decision-making” situations as those in which a choice must be made from among several alternatives, often involving trade-offs of harms or benefits. “Problem solving thus requires that the problem solver have a set of skills and a knowledge base that enable him or her to identify the alternatives and the probability of each outcome.” (Raisa B. Deber, PhD, 1994).


In large acute care facilities, there exists a vast number of administrative and clinical departments that must work together and learn together. There should be no silos; the operation of each clinical area affects the operations of many others. While the clinical process as a whole should be linked across the healthcare spectrum, the evolution of e-technology has impacted each organizational level, from the patient to the medical staff, at a different rate.

MEDICAL STAFF AND E-TECHNOLOGY The most precious commodity a physician has in an acute healthcare environment is time. In the present day of diminishing Medicare reimbursement, physicians need to provide care to as many patients as possible as they build a strong referral base. One area where time is a major issue, impacting physicians on a daily basis, is the extended wait for lab results. This not only frustrates physicians; it may have a negative effect on the outcome of patient care. To address this time factor, St. Joseph Hospital in Phoenix, Arizona decided to adopt a cutting-edge automation system. The decision was preceded by a number of challenging factors that forced the laboratory to take a hard look at its processes and instrument platforms. Today, the laboratory is more efficient than ever, and much of its success is credited to this switch to an automated system.

In 1995, St. Joseph Hospital hired consultants to perform system-wide profitability and improvement studies. The consultants recommended major staff reductions. The laboratory lost its full-service phlebotomy staff and 10 medical technologists, leaving only 14 FTEs in place to conduct central processing. With these changes, the St. Joseph Hospital laboratory struggled to meet the needs of the hospital, and testing turnaround time was unacceptable. “Morning results were not fully complete until 9:00 a.m. and test results were not available to treat critically ill patients or to begin surgeries on time.” In addition, STAT turnaround time varied significantly, which led to frequent complaints by the medical staff to hospital administration. Nursing floors began to investigate the possibility of bypassing the laboratory through the acquisition of point-of-care testing. The right choice for St. Joseph Hospital’s laboratory was a combination of two SYNCHRON LX®20 chemistry analyzers and a Power Processor core system that was eventually connected to the two analyzers through a CHEMXpress upgrade. When the two LX20 systems were installed in August 1999, the laboratory process dramatically improved. “We were immediately impressed with the LX20 systems’ throughput, which is double the speed of our old analyzers, as well as the systems’ one-minute turnaround time for critical tests,” says Susan McMillan, Chemistry Supervisor. “As a result of using the LX20 systems, our laboratory’s morning work is now out an hour and a half earlier than it was before — it’s now done by 7:30 a.m. versus 9:00 a.m.” (Beckman Coulter, Inc., 2001). The change to the chemistry analyzers and the switch to an automated system are credited for the success.

As medical information systems began to evolve and become integrated into the operations of the medical environment, with great potential to maximize physician time, one would think they would be welcomed with open arms. Instead, older staff members were set in their ways and did not want to change anything. For example: “Physicians at a California hospital rebel against a new computer system for ordering prescriptions and laboratory tests, forcing it to be shut down. In the Southeast, doctors at an acute care facility bypass the new computer system by seeking assignments on wards that have not been computerized. And in a suburban Boston hospital, the chief of surgery stalks into the CEO's office and, referring to a brand new medical computer system, demands, 'Rip it out!'” (Joseph B. Martin, March 29, 2007). Why are new technologies resisted by the very professionals they are designed to help, and why is there such aversion to systems designed to improve care, reduce medical errors, and lower medical costs?

TRAINING IS THE ANSWER A generation ago, doctors were taught that they were all-knowing healers whose judgment was sacrosanct. But today, there's simply too much to know. With the overwhelming advancement of innovative drugs and procedures, “doctoring has moved from an individual endeavor to a team effort, and it is technology that binds the team together.” (Joseph B. Martin, March 29, 2007). With improved e-technology systems, physicians can now go online and acquire information on new medical interventions, state-mandated regulations, and new policies and procedures. Many of these resources are interactive, thereby providing a forum for exchanging medical intervention experience among members of a particular audience. In some instances, physicians need not leave their office in order to participate in a group discussion where their input is vital and the initiative is time-sensitive. E-learning can be modified or enhanced quickly as content needs change, a feature that is critical in a dynamic industry in which regulatory changes and product innovations are frequent. “Information on a new product can be distributed instantaneously to a global, highly distributed audience. Changes in guidelines or regulations can be instantly communicated to the workforce, without having to print and distribute new training manuals or get people together for classroom training.” (Kevin H. Nalty and David Osborn, MX, March 2001). There are now systems in place that allow the clinician, via electronic technology, to capture lab results at the patient’s bedside and hand the results to the physician on the spot, or the information can be retrieved from the system after a two-minute download. This methodology of patient care is termed “point-of-care” testing, and it is having a huge positive impact on many medical procedures where turnaround time is linked to indicator compliance.

TECHNOLOGIES The National Institutes of Health (NIH) is committed to improving healthcare quality in the US and has set up initiatives to address problems such as the fragmented nature of healthcare provision. A hypothesis has been developed that testing closer to the point at which care is delivered may reduce fragmentation of care and improve outcomes. The National Institute of Biomedical Imaging and Bioengineering (NIBIB), the NIH’s National Heart, Lung, and Blood Institute, and the National Science Foundation sponsored a workshop, Improving Health Care Accessibility through Point-of-Care Technologies, in April 2006. The workshop assessed the clinical needs and opportunities for point-of-care (POC) technologies in primary care, the home, and emergency medical services, and reviewed minimally invasive and noninvasive testing, including imaging, and conventional testing based on sensor and lab-on-a-chip technologies. Emerging needs in informatics, telehealth, and healthcare systems engineering were considered in the POC testing context. Additionally, the implications of evidence-based decision-making were reviewed, particularly as they relate to the challenges of producing reliable evidence, undertaking regulation, implementing evidence responsibly, and integrating evidence into health policy (Christopher P. Price and Larry J. Kricka, National Institute of Biomedical Imaging and Bioengineering). Many testing procedures were considered to be valuable in the clinical settings discussed. Technological solutions were proposed to meet these needs, as well as the practical requirements around clinical process change and regulation. From these considerations, a series of recommendations was formulated for the development of POC technologies based on input from the symposium attendees. NIBIB has developed a “Point-of-Care Technologies Research Network that will work to bridge the technology/clinical gap and provide the partnerships necessary for the application of technologies to pressing clinical needs in POC testing.” (Christopher P. Price and Larry J. Kricka, National Institute of Biomedical Imaging and Bioengineering).

The implementation of Picture Archival and Communication Systems (PACS) is increasing in the medical e-technology environment. A PACS allows imaging information to be sent from a hospital environment to anywhere in the world. For example: you are a surgeon on a Caribbean beach vacation, but you are also on call for the next ten (10) hours. A call comes in requiring your expertise in a surgical case. You log into your computer and can now view and lend your expertise to a surgical procedure taking place in real time at the hospital where you practice in the US. PACS are comprehensive management systems for diagnostic imaging studies that are increasingly used in hospitals and health care systems. It is essential for PACS to be an integrated part of the total hospital electronic information system in order to be maximally effective. “The main objective of any new information system in health care is to improve the effectiveness and efficiency of health care. Although the initial implementation of PACS is costly, the ability for care providers to have faster access to diagnostic imaging information allows care to be delivered more expediently, which improves the overall quality of care patients receive. Nurses will have the ability to see images, rather than just reports about imaging studies. An electronic system for diagnostic imaging procedures and management provides nurses with unique opportunities to improve their involvement in clinical discussions, their ability to provide quality patient care, and potential to further nursing research.” (J Radiol Nurs 2006; 25:69-74).

There are hospital facilities that utilize what is known as an e-ICU. In this arrangement, instead of the ICU intensivist having to come to the ICU and make rounds every morning, the physician can provide this service from an office, a home, or anywhere there is access to the Internet. The physician is viewed on a rolling monitor and is taken to each ICU patient room, where he or she leads the discussion regarding the care of that particular patient. Patients in adult intensive care units (ICUs) require multidisciplinary care that frequently results in substantial morbidity, mortality, and costs. Telemedicine has been used to provide remote intensivist monitoring for ICUs. “Remote ICU monitoring (e-ICU) was found to reduce mortality and morbidity as much as onsite intensivist staffing, which is associated with a 29% reduction in hospital mortality and a 49% reduction in ICU mortality.” (Management, Policy and Community Health, University of Texas School of Public Health). “In this study we measure the cost-effectiveness of e-ICU in 8 hospitals in the Houston metropolitan area. We assess the cost-effectiveness of e-ICU by comparing the costs and clinical outcomes in the period after the full implementation of the eICU with the costs and clinical outcomes in the baseline period before the introduction of the eICU. The cost-effectiveness analysis in this study adopts a hospital perspective because the decision to implement an eICU is made at the hospital or health system level. Clinical outcomes are measured by ICU and hospital LOS and ICU and hospital mortality, obtained from chart reviews, and costs are measured by hospital costs and the cost of operating the eICU. Hospital costs are computed using average daily ICU costs and floor costs for patients in each ICU during the two study periods using individual patient data.” (Luisa Franzini, PhD, Eric J. Thomas, MD, MPH, Kavita Sail, and Laura Wueste, Management, Policy and Community Health, University of Texas School of Public Health). Two methodologies for assessing costs were used. “First, costs are obtained by multiplying charges by Medicare ratios of cost to charge. The second approach to assessing hospital costs is based on costs as computed by the hospitals' cost-accounting system. Cost accounting is built up from information on resource use and is a more accurate representation of economic costs.” Additionally, “we perform a reimbursement analysis to assess the eICU's impact on hospital revenues by comparing per-case revenue and monthly revenue in the baseline period and the eICU period. Based on previous studies, we expect to find that the e-ICU reduces hospital costs and improves mortality and morbidity outcomes. Introducing e-ICU may be a very cost-effective intervention for reducing costs of ICU, which consume 20-34% of all acute care resources and total 1% of the U.S. gross domestic product.” (Luisa Franzini, PhD, Eric J. Thomas, MD, MPH, Kavita Sail, and Laura Wueste, Management, Policy and Community Health, University of Texas School of Public Health).
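A minimal sketch of the first costing method quoted above, estimating hospital cost as billed charges scaled by a Medicare cost-to-charge ratio; all numbers are hypothetical placeholders, not figures from the Houston study:

    # Cost estimated from charges via a cost-to-charge ratio.
    def estimated_cost(charges, cost_to_charge_ratio):
        return charges * cost_to_charge_ratio

    ratio = 0.42                                # hypothetical Medicare ratio
    baseline = estimated_cost(48_000, ratio)    # pre-eICU ICU stay
    eicu = estimated_cost(41_000, ratio)        # post-implementation stay
    print(f"baseline ${baseline:,.0f} vs eICU ${eicu:,.0f}")
    # -> baseline $20,160 vs eICU $17,220

Comparing such per-stay estimates between the baseline and eICU periods, net of the cost of operating the eICU itself, is the essence of the hospital-perspective analysis described in the quoted study.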

CHART SMART Charting by exception (CBE), or variance charting, is a system for documenting exceptions to normal illness or disease progression, using a shorthand method of charting what's usual and normal. You make check marks or write your initials in certain places on the CBE flow sheets. This type of charting is often done on flow sheets based on pre-established guidelines, protocols, and procedures that identify and document standard patient management and care delivery. Additional documentation is needed when the patient's condition deviates from the standard or from what's expected. “The advantages of CBE are: you spend less time on paperwork and charting because the documentation system is streamlined; documentation consistency is enhanced because the system reduces individual variations in documentation quality and quantity; confusing, redundant charting is reduced or eliminated; variances stand out clearly as needing intervention; and you can spend more time with patients.” (Smith, Linda S., "How to chart by exception", Nursing, 21 Sep. 2009). “Before implementing a CBE system, employers need to make sure that their charting policies don't conflict with state and federal regulations or those of accreditation agencies such as the Joint Commission on Accreditation of Healthcare Organizations. Risk managers, facility attorneys, interdisciplinary committees, and regulatory spokespersons such as state human services representatives should review and approve any CBE plans. These reviews and decisions need to be well documented.

All CBE policies, protocols, standards, procedures, clinical pathways, and staff training need to be in place before CBE implementation.” (Smith, Linda S., "How to chart by exception", Nursing, 21 Sep. 2009).
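As a reading aid, here is a minimal sketch of the charting-by-exception logic: findings inside a pre-established normal range generate no entry, and only deviations are documented. The field names and ranges are illustrative assumptions, not clinical guidance:

    # Charting by exception: document only out-of-range findings.
    NORMALS = {"temp_f": (97.0, 99.5), "pulse": (60, 100), "resp": (12, 20)}

    def exceptions(observations):
        flagged = {}
        for name, value in observations.items():
            lo, hi = NORMALS[name]
            if not lo <= value <= hi:
                flagged[name] = value   # deviation from the norm -> chart it
        return flagged

    print(exceptions({"temp_f": 101.2, "pulse": 88, "resp": 22}))
    # -> {'temp_f': 101.2, 'resp': 22}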

ADMINISTRATIVE STAFF AND E-TECHNOLOGY Operations have become more effective due to the growth of e-technology at the administrative staff level. As for physicians, time is never in abundance, especially since the expectation of multi-tasking is a condition of employment.

At the core of a successful leader in most service environments is the ability to build relationships; this is probably even more significant in the healthcare industry, and it takes time. Unfortunately, along with the many tasks of an administrator come the expectations of documentation (paperwork) and meetings: sometimes up to 12-15 meetings in a single week, with additional tasks coming out of each meeting. In many progressive healthcare environments, there are ongoing mandatory leadership development meetings; these meetings are typically several hours long, resulting in falling behind on other required duties.

Generally, administrative tasks include federal regulation compliance, team building, budget management, and new service implementation. With time spread so thin, the administrator invariably becomes less effective and resorts to putting out fires rather than moving any one initiative forward. It is in these situations that e-technology can make a substantial impact on the way administrators fulfill their many tasks. Now picture the scenario where e-technology is utilized as an additional tool throughout the leadership development process. Management leaders will have the ability to present discussion questions in a proprietary corporate forum and exchange experiences and ideas to assist one another with the sensitive issues they may be facing. While not all facilities share identical issues, in many instances the internal areas for improvement are consistent throughout a corporation. Not only will e-technology provide needed information among the leadership staff; on a broader scale, it paves the way for greater cultural standardization within the corporation. E-technology will also allow leadership staff to complete assignments electronically, when time is available, giving each member greater control of their time and letting them regulate daily tasks more efficiently. Developing leadership skills through online tools offers many advantages for healthcare organizations. Training time is condensed, resulting in significant cost savings. Marc Rosenberg reports in his book, e-Learning: Strategies for Delivering Knowledge in the Digital Age, “that e-technology takes 25 to 60 percent less time to convey the same amount of instruction or information as in a classroom.” “In an environment where patients come first, the need for nurse administrators and staff to be available at all times, compounded by a national nursing shortage, makes e-technology particularly efficient.” (HealthStream, Inc., BW HealthWire, Aug. 2001).


CLINICAL STAFF AND E-TECHNOLOGY The largest group in any healthcare setting is the front-line staff. Accordingly, it is here, to the benefit of the patient, that e-technology has had the most impact. As with most new processes and operating techniques, resistance is the first barrier. A major complaint, as mentioned earlier, is documentation, and e-technology has been an excellent tool for alleviating the pain of documentation. In the not-so-distant past, staff would spend close to 35-40% of their shift documenting, while patients complained of waiting extended periods of time for assistance. Now many hospitals have adopted the evidence-based documentation practice of charting by exception. The clinician navigates this software, in which only abnormal results are documented electronically; this task is performed at the bedside on computers attached to mobile carts. All patient care assessment information is in the system, allowing access with the touch of a finger and eliminating the need to leave the patient to look up information in a chart or reference guide that was never where it was supposed to be. E-technology and IT took bedside documentation one step further by linking the procedural information documented by the clinician to the billing department, allowing the system to charge the patient for billable procedures automatically. This produced two very favorable outcomes: it greatly reduced billing errors, and it eliminated documentation redundancy for the staff, who in the past had to manually place charges in the system after they finished their routine patient documentation.

Data collection is another area of the healthcare environment that became more efficient with e-technology. By way of background, the Centers for Medicare & Medicaid Services (CMS) has mandated that all hospitals meet specific patient care goals on various indicators, such as fall rate and infection rate. This indicator data had to be collected on a daily basis and numerically calculated to assess whether the hospital was in compliance, determining a higher or lower Medicare reimbursement rate. This was termed pay-for-performance (P4P). While the staff dreaded this additional documentation task, it was vital from a financial standpoint. Fortunately, through e-technology the clinician can access indicator information concurrently and focus on areas for improvement prior to the monthly data submission to the state.
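For illustration, quality indicators of this kind are commonly normalized as events per 1,000 patient-days before reporting; a minimal sketch with made-up unit data (the normalization convention is a general one, not a detail from this paper):

    # Quality indicator normalized per 1,000 patient-days.
    def rate_per_1000_patient_days(events, patient_days):
        return events / patient_days * 1000

    falls, patient_days = 7, 4250   # hypothetical month of unit data
    print(f"fall rate: {rate_per_1000_patient_days(falls, patient_days):.2f}")
    # -> fall rate: 1.65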

Pay-for-performance programs are now firmly ensconced in the payment systems of US public and private insurers across the spectrum. More than half of commercial health maintenance organizations are using pay-for-performance, and recent legislation requires the Centers for Medicare & Medicaid Services (CMS) to adopt this approach for Medicare. As commercial programs have evolved during the last 5 years, the categories of providers (clinicians, hospitals, and other health care facilities), the numbers of measures, and the dollar amounts at risk have all increased. In addition, acceptance of performance measurement among physicians and organized medicine has broadened, with the American Medical Association committing to the US Congress in February 2006 that it would develop more than 100 performance measures by the end of 2006 (Meredith B. Rosenthal, PhD, and R. Adams Dudley, MD, MBA, JAMA, 2007;297:740-744).

The technology assessment program at the Agency for Healthcare Research and Quality (AHRQ) provides technology assessments for the Centers for Medicare & Medicaid Services (CMS). These technology assessments are used by CMS to inform its national coverage decisions for the Medicare program and to provide information to Medicare carriers. AHRQ's technology assessment program uses state-of-the-art methodologies for assessing the clinical utility of medical interventions. Technology assessments are based on a systematic review of the literature, along with appropriate qualitative and quantitative methods of synthesizing data from multiple studies. Technology assessments may be done in-house by AHRQ staff, or in collaboration with one of AHRQ's Evidence-based Practice Centers.

When available, technology assessment topics are linked to corresponding information on the CMS Web site (Agency for Healthcare Research and Quality, November 2008). Monitoring performance: “E-technology greatly enhances the ability of the organization to monitor and track the learning that is actually being accomplished. Reports that document completion of a course, as well as the proficiency level that was attained by learners, can be used to ensure that important learning has taken place. Certification of learners in critical areas is generally much simpler and more accurate using an e-technology approach, compared with administering written tests or passing around attendance sheets in a classroom.” (Kevin H. Nalty and David Osborn, MX, March 2001). One advantage of e-technology in any environment is data maintenance: when a large amount of data is manipulated manually, the chance of loss is very high, leading to hundreds of wasted man-hours and adding to the cost of sometimes-mandated data collection. In the e-technology environment, loss of data is almost nonexistent. All of these advantages of e-technology help to address the overriding perception among staff that educational requirements are busy work that merely keeps them away from their patients.

Too often, healthcare organizations respond to budget woes with budget cuts, and training is one arena frequently trimmed. While such cuts may provide short-term relief, reducing the availability of training opportunities can have severe long-term consequences. E-technology, like any type of advanced training, should lead to gains in customer satisfaction and productivity and, eventually, to increased revenues. Healthcare organizations that employ e-technology can reap almost immediate results.



Online learning provides easy access to required education through self-paced courses that are available anytime, anywhere. E-technology guarantees consistency of the message: everyone in your organization gets the same information in the same way at the same time, which is crucial for compliance courses. (Pete Goettner, Health Management Technology, December 1, 2000)

HOMELAND SECURITY DISASTER PREPAREDNESS
Disaster management utilizes diverse technologies to accomplish a complex set of tasks. Despite a decade of experience, few published reports have reviewed the application of telemedicine (clinical care at a distance enabled by telecommunication) in disaster situations. Appropriate new telemedicine applications can improve future disaster medicine outcomes based on lessons learned from a decade of civilian and military disaster (wide-area) telemedicine deployments. Emergency care providers must begin to plan effectively to utilize disaster-specific telemedicine applications to improve future outcomes. Telemedicine encompasses the diagnosis, treatment, monitoring, and education of patients, and provides convenient, site-independent access to expert advice and patient information. "Transmission modalities include direct wired connections over standard phone lines and specialized data lines, and wireless communications using infrared, radio, television, microwave, and satellite-based linkages." (D. Ziadlou, A. Eslami, H. R. Hassani, 2008)

Many forms of e-technology, particularly Web-based instruction, can be customized to meet the needs of specific learning audiences and individuals. Companies can deliver tailored training materials to the desktops of people in different business units or product groups, or to different customer and stakeholder segments (e.g., primary care physicians, specialists, health consumers). E-technology can be designed to support flexibility in moving through the information, so that learners can focus on areas of greatest need or interest. And unlike classroom training, e-technology can provide learners with a personalized experience precisely where and when they want or need it. (Kevin H. Nalty and David Osborn, MX, March 2001)

Technology transfer is a central agency function that is directly in line with CDC's Futures Initiative. The CDC Technology Transfer Office (TTO) is a primary window to the business community regionally, nationally, and internationally, and facilitates productive interactions with the public health, life sciences, and occupational safety and health industries. Technology transfer is wholly focused on translating CDC's research findings into practical application for the benefit of the health and safety of the American public and the world. Each week, CDC "analyzes information about influenza disease activity in the United States and publishes findings of key flu indicators in a report called Flu View. During the week of September 6-12, 2009, a review of the key indicators found that influenza activity continued to increase in the United States compared to the prior weeks." (Centers for Disease Control, September 18, 2009)

THE PATIENT AND E-TECHNOLOGY
We now need to address the most important entity of the healthcare environment: the patient. For patients, e-technology has had a significant impact and will continue to have one for years to come. Changes in the Web patterns of U.S. consumers are causing many med-tech firms, which have historically invested little to no funding in patient education, to investigate the return on investment of consumer-education initiatives. Studies indicate that "70-90% of on-line adults currently use the Internet to find health information. Many of these health-med retrievers visit the Internet before seeing a doctor, then print out Web pages to guide their discussions with physicians." (Kevin H. Nalty and David Osborn, MX, March 2001) Results of patient surveys generally cite access to care and delays in care as sources of dissatisfaction. Delays are expensive, not only in terms of the direct costs they incur, but also in terms of the potential costs of decreased patient satisfaction and adverse clinical outcomes.

Picker research shows that access to care is a significant business concern for specialty groups and health systems; patients dissatisfied with access are less likely to recommend physicians and more likely to disenroll from managed care plans. Also, research shows that the longer the wait time for an appointment, the greater the likelihood that patients will fail to keep the appointment or will cancel, creating unused or wasted capacity. Delays in diagnosis also affect patients' clinical outcomes. Several lines of research have found that prompt interactions with patients and their families have remarkably strong effects on clinical outcomes, functional status, and even physiologic measures of health. "Improving access requires a balancing of capacity and demand. 'Capacity' is the ability to provide the services patients need when they need them, and 'demand' is the patient's need for services. Health care providers that succeed in achieving this balance have taken measures to evaluate their current practice strategies and implemented services such as scheduling group appointments, conducting follow-up visits by telephone and reducing backlog to prepare for future demand." (Picker Institute, Issue 15, 2000) The Women's Pavilion at Milford-Whitinsville Regional Hospital created a system that uses computer technology, clinic design, and staff coordination to reduce waiting times for women who receive a positive mammogram result. Sharp Mission Park Medical Group imported a customer service system called Making Your Mark, a program that analyzes systems and drops unneeded steps, to determine how to reduce wait times for referrals. (Picker Institute, Issue 15, March 2000)
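The capacity-and-demand balance the Picker quotation describes reduces to simple arithmetic. The sketch below, with invented clinic numbers (the cited articles report no specific figures), shows how a daily capacity gap translates into backlog growth or a time to clear the backlog.

    # Back-of-the-envelope capacity/demand balance; all numbers are invented.
    slots_per_day = 48      # appointment capacity
    requests_per_day = 55   # appointment demand
    backlog = 120           # appointments already waiting

    daily_gap = requests_per_day - slots_per_day
    if daily_gap > 0:
        print(f"Demand exceeds capacity by {daily_gap}/day; the backlog grows.")
    elif daily_gap < 0:
        print(f"Backlog clears in about {backlog / -daily_gap:.0f} working days.")
    else:
        print("Capacity exactly matches demand; the backlog never shrinks.")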

CONCLUSION
The overwhelmingly positive impact that e-technology has had on the healthcare environment is clear. While the efficient use of time is a common thread among all four groups, the outcomes vary and affect each group in a different way. Once the obstacles of cost, resistance, training, and integration have been overcome, monitoring the resulting improvement and added efficiency is vital. Measurement of process improvement will indicate the end value of the e-technology that has been implemented, demonstrating a positive outcome from an operations standpoint. Moreover, the positive impact on patient care will elevate the healthcare environment's patient satisfaction, resulting in an overall perception of quality. When the healthcare of our community improves and the work environment becomes more efficient, the result is greater retention of excellent staff, which leads to optimal patient care.

Overall, e-technology has vast potential in the healthcare environment, as well as in many others. In this day of economic instability, the opportunity to redirect the limited resources freed by the utilization of e-technology will, without a doubt, continue to grow in our society as a whole.

REFERENCES
Afsari, Mary F. (2000). New Visions for Healthcare: Ideas Worth Sharing, Research & Development. The Picker Institute, Camden, ME.
Centers for Disease Control and Prevention (2009). 2009 H1N1 Situation Update. http://www.cdc.org
Deber, Raisa B. (1994). Physicians in health care management: The patient-physician partnership: decision making, problem solving and the desire to participate. Can Med Assoc J, August 15, 1994; 151(4): 427.
E-learning replaces classroom: Wisconsin healthcare network uses LMS to educate staff and to track federally mandated training (learning management systems: case history). Health Management Technology, April 2003.
Franzini, Luisa, Thomas, Eric J., Sail, Kavita, and Wueste, Laura. Cost-effectiveness of ELCU in reducing morbidity and mortality in ICUs. Management, Policy and Community Health, University of Texas School of Public Health, 1200 Pressler Drive, Houston, TX 77401.
Goettner, Pete (2000). Effective E-learning for Healthcare. Health Management Technology, December 1, 2000.
Health Stream Selected by Development Dimensions International to Deliver Online Management. Business Wire, August 8, 2001.
Hood, Maureen N. and Scott, Hugh (2006). Introduction to Picture Archive and Communication Systems. Journal of Radiology Nursing.
Martin, Joseph B. (2007). Digital Doctoring. The Boston Globe, March 29, 2007.
Nalty, Kevin H. and Osborn, David (2001). Leveraging E-Learning in the Medtech Industry. MX, March 2001.
Price, Christopher P. and Kricka, Larry J. Improving Healthcare Accessibility through Point-of-Care Technologies. Edited on behalf of the National Institute of Biomedical Imaging and Bioengineering/National Heart, Lung, and Blood Institute/National Science Foundation Workshop Faculty.
Rosenthal, Meredith B. and Dudley, R. Adams (2007). Pay-for-Performance: Will the Latest Payment Trend Improve Care? Journal of the American Medical Association, 297: 740-744.
Sharp, Beth Collins (2007). AHRQ's Center for Outcomes and Evidence (COE) oversees the Evidence-based Practice Centers Program. Agency for Healthcare Research and Quality, Rockville, MD.
Smith, Linda S. (2002). How to Chart by Exception. Nursing, September 2002. Retrieved September 21, 2009, from http://findarticles.com/p/articles/mi_qa3689/is_200209/ai_n9125527/
St. Joseph Hospital's Automated Lab Solution Helps Speed Turnaround Time and Increase Overall Efficiency: General Chemistry Lab Automation. http://www.beckmancoulter.com
Ziadlou, D., Eslami, A., and Hassani, H.R. (2008). Telecommunication Methods for Implementation of Telemedicine Systems in Crisis. Third International Conference on Broadband Communications, Information Technology & Biomedical Applications, 2008, pp. 268-273.



AUTHORS’ BIOGRAPHY
Wayne Brown
Wayne Brown was Director of Cardiopulmonary Services at an acute care hospital and led the operations of 20 clinical team members in the Respiratory Therapy, Cardiology, and Neurodiagnostic cost centers. His position consisted of setting the growth and quality performance standards for all clinical areas. Wayne managed the budget for the department in each cost center and elevated the educational standards for optimal patient care. Wayne participated with nursing care partners in the implementation of the Clinical Assessment Team (CAT) and was the hospitalists' liaison, coordinating the clinical activities resulting in quality patient care, improved clinical outcomes, and expedited patient discharge. Wayne is an adjunct faculty member at Florida Tech, teaching health care courses for the BA in Health Care, and is a PhD candidate at the Florida Institute of Technology.

Ralph L. Harper Jr., CISM
Dr. Harper is Program Chair for Management at the Florida Institute of Technology, College of Business. Dr. Harper has over 42 years of experience in acquisition and logistics management focusing on product application and development, including thirty-seven years in the aerospace industry developing and delivering logistics products and services to government and commercial customers in the U.S. and overseas. He defined and developed a global logistics network management system, a configuration management system, and a production control management system used in support of multiple defense systems. Dr. Harper was Logistics Configuration Manager for Air Defense, Raytheon Technical Services Company (1967-2004); President of HarCon Inc. (1988-2007); Adjunct Professor of Management, Florida Institute of Technology (2001-2007); Adjunct Professor, Franklin Pierce College, School of Graduate & Professional Studies (2001-2007); and Adjunct Professor, Southern New Hampshire University, School of Graduate & Professional Studies (2001-2007). Dr. Harper is a member of the Sigma Beta Delta National Honor Society for Business, Management and Administration and has 22 published papers and presentations.



SOFTWARE DEVELOPMENT STANDARDS FOR MEDICAL DEVICES: EVOLUTION AND IMPROVEMENT

Hisham M. Haddad and David Battista Kennesaw State University, USA

ABSTRACT
Software in the medical device industry has seen exponential growth in the last 30 years, enabling new applications and a broader range of use. As complexity and reliance on software increased, the industry had to find ways to assure safety in the development of software for medical devices, to protect the patient and the end user. It started with establishing new regulations. The need to adopt existing software development standards was quickly recognized, and the medical industry adapted them to its own benefit. From analysis of the specific needs of the medical field, medical device software standards have been established. However, these standards are still evolving and do not provide sufficient oversight of several areas in this field. This work investigates these standards and highlights areas of needed improvement. It draws from work experience in the field and addresses issues of software reuse and network-attached medical devices.

Keywords: Software Standards, Software for Medical Devices, Critical Software, Standards Evolution, Software Reuse, IEC 80001, SW68.

INTRODUCTION
We rely on critical systems in many aspects of our modern life. From simple circuit breakers to nuclear power plants, critical systems range widely in complexity. The more complex the system, the more software it usually contains: there are more lines of code on an airplane than there are mechanical parts. As the digital age advances, more systems are replaced by digital equivalents, often to reduce size and to increase capability, effectiveness, efficiency, and user-friendliness.

A system can be called performance critical, mission critical, or life (safety) critical. In a performance critical system, the criticality lies in the system's ability to maintain throughput consistently over time; this is often true of systems developed to detect problems with other systems. When other systems depend on the good function of a parent system, that parent is called a mission critical system. For example, the operating system (OS) is mission critical for the rest of the applications on a computer: lose the OS and the applications won't function at all. Finally, when people's lives or safety are at stake, or when potential harm to equipment or the environment is possible, we have life critical, or safety critical, systems. A good example of a safety critical system is a radiation therapy treatment machine that uses high intensity radiation to destroy cancer cells while preserving healthy tissue. Another example is the avionics on board an aircraft.

This work focuses on safety critical software in medical devices. The goal is to highlight current and future software development standards for medical devices. The paper is motivated by work experience in the field and describes the issues of software reuse and network-attached medical devices. It outlines areas that would benefit from new standards.

SOFTWARE ENGINEERING VS. HARDWARE ENGINEERING
To better understand the need for a software-specific improvement process, we can start by comparing software engineering to hardware engineering for critical systems. The first difference is that software errors arise mostly during the development process. Software does not wear out and fail like a mechanical part would. Random production errors are usually not an issue, and manufacturing can be as simple as copying the software onto a CD. So software safety assurance happens during design.

Second, software can be very complex, with multiple execution paths. Comprehensive testing of even simple programs is not feasible, so software testing alone cannot guarantee quality. The third difference is that software is more flexible to change than hardware during the development cycle. This can lead to the assumption that errors can easily be corrected later in development, and that changes are easily implemented without unexpected consequences.



So it is evident that safety critical software requires a methodical, disciplined approach. "The FDA foresaw a need for better regulation of computer-controlled devices as early as 1981, but none specific to medical device software existed prior to the Therac-25 disaster" [1], in which a radiation device killed several people due to poor control of the software development processes.

The criticality of these systems triggered early on the need for regulatory agencies, like the FDA for medical devices or the FAA for avionics, to come up with software-specific regulations to ensure the safety of the public. Recognizing the risks, the FDA put out regulations and an approval process for the software driving medical devices before they can be released for marketing. "The FDA found that approximately 44% of the quality problems that led to voluntary recalls of medical devices were attributed to errors or deficiencies designed into particular medical devices rather than having been inserted during the manufacturing phase" [2]. "According to the Institute of Medicine report 'To Err is Human', between 44,000 and 98,000 people die in hospitals from preventable medical errors. The report also says that more people die every year as a result of medical errors than from motor vehicle accidents, breast cancer or AIDS" [2].

The complexity and risk profiles of medical devices range widely, from a simple digital thermometer to complex MRI diagnostic machines and radiation treatment delivery devices. Some operate in real time, monitoring patient position or health as a treatment is delivered. The software is often also tasked with ensuring the safety of the user and of the patient.

At a glance, given the differences between software and hardware, one can recognize the need for a risk management approach (from the first difference, because safety assurance happens in design), a quality assurance system (from the second, because we cannot rely on comprehensive testing alone), and good software lifecycle practices (from the third), including change control mechanisms.

EVOLUTION OF REGULATIONS AND EARLY STANDARDS
In the first implementations of the regulation, a device manufacturer was required to submit documentation for the FDA to conduct a traditional pre-market review of the software. The four areas of FDA requirements are Process Management, Requirements Specifications, Design Control, and Change Control.

“Software validation is accomplished through a series of activities and tasks that are planned and executed at various stages of the software development” [3]. These activities and tasks include Level of concern, Software description, Device hazard and risk analysis, Software requirements specification, Architecture design, Design specifications, Requirement traceability analysis, Development, Validation, Verification and testing, and Revision level history.

Each of these activities receives verification, testing, and other tasks that support software validation. Software verification determines that the inputs and outputs of a software lifecycle phase correspond and are accurate, and that the phase meets its requirements. Verification feeds validation, but validation goes beyond it by providing a means to prove that the software is complete and accurate for its intended use. The third important concept in the regulation is traceability: following the life of a requirement through the development process and facilitating regression analysis in case of a change. The regulatory body uses material called guidance to explain the different steps and to provide examples of tasks that support each activity. Figure-1 [4] shows an example of verification and traceability flow according to FDA 510(k).
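Traceability in this sense is essentially a mapping from requirements to the design elements and tests that cover them. The minimal sketch below, with invented requirement and test-case IDs, shows the regression-analysis use just described: a change to a requirement immediately identifies the tests to re-run. It illustrates the concept only and is not a prescribed FDA format.

    # Requirement-to-test traceability; IDs and test names are invented.
    trace = {
        "REQ-012": ["TC-040", "TC-041"],  # e.g., a dose-limit check
        "REQ-013": ["TC-042"],            # e.g., an interlock on door open
    }

    def tests_to_rerun(changed_requirements):
        """Collect every test covering any changed requirement."""
        return sorted({t for r in changed_requirements for t in trace.get(r, [])})

    print(tests_to_rerun(["REQ-012"]))  # -> ['TC-040', 'TC-041']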

Because of the delays manufacturers incurred in getting their software validated by the FDA, the need for recognized standardization quickly arose. Analysis of existing software development standards, like ISO 9000-3 and the Software Engineering Institute's Capability Maturity Model (CMM), determined that these standards by themselves do not fully meet FDA requirements, though they are useful. The need to create standards designed specifically for the medical device industry was becoming apparent.

The first effort towards standardization was the Software Quality Audits (SQA), resulting from a workshop held at the National Institutes of Health (NIH) in September 1996. A manufacturer who followed the audit process and had no negative findings would be allowed to market its device without further review. While still being responsible for inspecting and ensuring that their processes met FDA requirements, the manufacturer would then bypass long review delays. The SQA was designed only for stand-alone software and did not address software embedded in medical devices.

Then, also in 1996, from the International Electrotechnical Commission came "IEC 60601-1-4, Collateral Standard: Programmable Electrical Medical Systems." The "1-4 standard," as it is commonly referred to, required the manufacturer to establish and document a process that included risk management and development activities. But the FDA did not recognize the standard, mainly questioning whether it was a software standard or only a risk management standard.



Figure-1: FDA 510(k) Verification and Traceability Flow [4].

In response to the IEC 1-4 standard, the Association for the Advancement of Medical Instrumentation (AAMI) attempted to produce a software standard for medical devices that addressed all the FDA requirements for safe and effective software. But at that time, it was still too early for regulatory agencies to recognize standards; also, there was "no demonstrated traceability to IEEE software standards or IEC/ISO JTC1 software engineering standards," which were already recognized in the general software engineering community. "There was a definite need to show that the AAMI proposal was actually based on established software engineering standards being applied to the medical device industry. Unfortunately this was not happening" [5].

Then an important milestone was achieved: the FDA Modernization Act of November 1997 enabled the FDA to use standards in its regulatory process. The Center for Devices and Radiological Health (CDRH) created 13 task groups to tackle the problem of medical device standardization. The CDRH-STG (Software Task Group) did extensive research on existing software engineering practices and carefully linked those practices to the pre-market submission guidance. The result was a table with two kinds of entries for the documentation required for submission, depending on the standard used: for example, if a manufacturer used ISO/IEC 12207, "Standard for Information Technology - Software Life Cycle Processes," for developing software, the table would show which items did not need further review ("Don't Submit") and which documentation is required per the pre-market submission guidance ("See Guidance"). Here is a fictive example:

                          Level of Concern
Software Documentation    Minor          Moderate       Major
Architectural Design      Don't submit   Don't submit   See Guidance
Traceability              Don't submit   See Guidance   See Guidance
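Encoded as data, the fictive table above is just a lookup keyed by documentation item and level of concern. The sketch below mirrors those illustrative entries; it is not the actual CDRH-STG table.

    # The fictive submission table as a lookup; entries are illustrative.
    submission = {
        ("Architectural Design", "Minor"):    "Don't submit",
        ("Architectural Design", "Moderate"): "Don't submit",
        ("Architectural Design", "Major"):    "See Guidance",
        ("Traceability", "Minor"):            "Don't submit",
        ("Traceability", "Moderate"):         "See Guidance",
        ("Traceability", "Major"):            "See Guidance",
    }

    print(submission[("Traceability", "Moderate")])  # -> See Guidance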

After this first effort, the AAMI focused its attention on designing a software standard specific to medical devices that would meet the following criteria [5]: "Be sufficient for 50-80% of medical device software; use what was already done; use engineering terminology, and avoid regulatory or quality language; have the potential for international acceptance; provide a clear means for assessment."

Building on ISO/IEC 12207, the STG modified it and came out in March 2001 with AAMI SW68, Medical Device Software – Software Lifecycle Processes, the first standard developed specifically for medical device software. The relationship between risk management (ISO/IEC 14971) and software engineering was underlined: software engineering practices alone cannot guarantee safety, while software engineering information is necessary to make good risk management decisions. (ISO 14971 was subsequently incorporated into the SW68 standard.) In the same way, the standard requires a quality system along the lines of ISO 13485 or the FDA's QSR (Quality System Regulation). So with SW68, the three ingredients we outlined at the beginning are delivered: risk management, quality management, and software lifecycle practices, which together promote safety for medical device software.

SW68 uses primary and supporting processes consisting of task-based activities. The two primary processes are development and maintenance; the five supporting processes are software hazard management, documentation, configuration management, verification, and problem resolution. The software development team can choose the software life cycle model it wants to use, but must identify the tasks and processes set forth by SW68 and when and where they are applied in the software life cycle. The original SQA task force's role changed from performing audits (SQA) to conducting assessments of manufacturer compliance with the recognized standards.

In 2006, the new American National Standard ANSI/AAMI/IEC 62304:2006, "Medical Device Software – Software Life Cycle Processes," was introduced to update SW68. It provides "international harmonization that provides manufacturers with a single set of guidelines for stand-alone medical device software as well as software that is an embedded part of a medical device" [7]. The new standard improves on SW68, which only addressed software of minor to moderate concern, by covering software of major levels of concern and software that is itself a medical device. The three key ingredients for safety in medical device software are underlined: risk management, quality management, and software engineering. It also defines three risk classes for software components and sets minimum life cycle requirements based on the risk level (a small illustrative sketch follows the list below):

• Class A: No injury or damage to health is possible
• Class B: Non-serious injury is possible
• Class C: Death or serious injury is possible
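One way to picture the classification is as a function from worst credible harm to class, with the class then gating how much lifecycle rigor applies. The mapping of class to required activities below is only an illustration of that idea; the standard itself spells out exactly which clauses apply to each class.

    # Sketch of IEC 62304-style classification; the activity lists are
    # illustrative, not the standard's actual clause-by-clause requirements.
    def safety_class(worst_harm: str) -> str:
        """A: no injury, B: non-serious injury, C: death or serious injury."""
        return {"none": "A", "non-serious": "B", "serious": "C"}[worst_harm]

    extra_rigor = {  # hypothetical class-dependent documentation
        "A": [],
        "B": ["architectural design", "unit acceptance criteria"],
        "C": ["architectural design", "unit acceptance criteria", "detailed design"],
    }

    cls = safety_class("serious")
    print(cls, extra_rigor[cls])  # -> C [...]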

ANSI/AAMI/IEC 62304:2006 provides life cycle (core) processes in the areas of Software Development, Software Maintenance, Software Risk Management, Software Configuration Management, and Software Problem Resolution. The standard then defines a set of activities and tasks which together with the core processes establish a common framework for medical device software lifecycles. Figure­2 [7] illustrates these life cycle processes.



Figure-2: IEC 62304:2006 core processes [7].

Here, software risk management builds on ISO 14971 risk management, introducing risk control techniques using FMEA tools (Failure Mode and Effects Analysis, as used in hardware engineering). The system (hardware and software) is analyzed for risk (likelihood and severity), and a risk management table is drawn with columns for Failure Mode, Effect on System, Software Cause, and Methods of Control. Figure-3 [8] is an example of a risk management table.

The new standard was driven by the FDA as a continuation of the effort on SW68 and as such is recognized by the FDA. Japan already requires it, and Europe is expected to adopt it soon as well [7].

Ref.#  Item/Function  Failure Mode             Effect on System     Cause                           Methods of Control
1      Surgery Case   Does not finish in time  OR time exceeded     Unreliable interrupt source;    Check for loop overrun;
                                                                    too much computation;           system performance
                                                                    interference from other tasks   measurements; choice of
                                                                                                    operating system
2      Image import   Wrong orientation        Wrong location       Image information               Exception if information
                                               penetrated           not readable                    is not available

Figure-3: A sample Risk Management Table [8].
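Each row of such a table is naturally a small record. The sketch below models a risk-table row as a data structure, using the first entry of Figure-3 as the example; the dataclass itself is an illustration, not part of the standard.

    # A risk-table row as a data structure, mirroring Figure-3.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class RiskEntry:
        ref: int
        item: str
        failure_mode: str
        effect: str
        causes: List[str]
        controls: List[str] = field(default_factory=list)

    row = RiskEntry(
        ref=1,
        item="Surgery Case",
        failure_mode="Does not finish in time",
        effect="OR time exceeded",
        causes=["Unreliable interrupt source", "Too much computation"],
        controls=["Check for loop overrun", "System performance measurements"],
    )
    print(row.item, "->", "; ".join(row.controls))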

Standards like ANSI/AAMI/IEC 62304:2006 are important to the industry, as globalization makes complying with international regulatory bodies a difficult task. It also benefits the industry to have tried-and-true solutions to recurring problems. The standard can also be used as a model for the regulatory body to evaluate the software processes a company chooses to use, even when the company is not trying to be directly compliant with the standard. It also helps hospitals choose which devices to purchase based on widely recognized industry standards.



THE UPCOMING STANDARD
It is good news for the industry to be able to follow a standard that the regulatory body will recognize. Standards "help insure compatibility, interchangeability, or basic safety, capture tried and true solutions to reoccurring problems and harmonize technical regulation" [9]. They benefit the industry by sharing ways that work and by providing guidance for meeting regulations.

Recognizing the threat posed by medical devices running on hospital networks, the industry is also working on AAMI/IEC 80001, Application of Risk Management to Information Technology Networks Incorporating Medical Devices. Specifically, the new standard addresses the following: a process to define responsibilities between the device manufacturer and hospital IT; risk management in IT networks where medical devices are attached; loss of data; data corruption; unauthorized access to data; inappropriate data interchange; medical device systems that use software running on general purpose computers; and the threat that worms and viruses pose to safety in interconnected devices.

“The current regulatory model regulates medical device vendors up to, and not beyond, the sale of the device to a customer. As the above risks explain, arbitrarily limiting regulatory oversight in this way is increasingly inadequate” [9]. All too often, local IT departments have very limited knowledge of the issues raised by medical devices on a shared network, and support is often even subcontracted to general IT shops.

Manufacturers have been trying to avert the issues of medical devices sitting on open hospital networks by creating local private networks. This practice has proved ineffective for a variety of reasons, including the following:

• It causes problems for timely maintenance (remote access by the software technical support agent).
• It makes it more difficult to pass information between devices (data interchange).
• Applications that run on these devices are becoming enterprise applications that force the private network to have many open ports, and managing open ports between many devices becomes an issue.
• It segments user authentication and creates the need for multiple local users who are very hard for the hospital IT department to track, also posing risks of password requirements not being met.
• It segments those devices from hospital policies, which can be detrimental to the good operation of the OS (such as updates and anti-virus remote control applications).

AAMI/IEC 80001 proposes a process standard for integrating medical devices onto hospital networks. Activities revolve around formal risk management processes, starting with a "responsibility agreement" where vendors "contractually agree to make available to providers all the information they need to apply an effective risk management process to a system that includes their products" [9]. The latest committee drafts of this publication appeared in March and June 2009.

Risk management techniques specific to the software in medical devices exist in IEC 62304, but they only address risks encountered during development of, and changes to, the software. The standard does not address the risks posed after installation of a system in a hospital IT network infrastructure, other than addressing defects, mostly through problem resolution.

Since “even programs with resources as prodigious as NASA’s space shuttle program cannot completely eliminate defects, because theoretical limits to the minimum number of defects that will be introduced during the software coding, system integration and installation exist” [1], the industry has to address those remaining risks as well, for the benefit of patient safety.

The software in a safety critical medical device has to protect the user and the patient. To manage risk in an efficient (with respect to cost) and effective (with respect to actual benefit) manner, identifying potential risks early in the software development process is key [10]. This also applies to risks that can arise after software delivery, all the way until the software is decommissioned. The AAMI/IEC 80001 drafts show that this is the main reason the standard is needed: to address risk management from inception to decommissioning, as found in the preliminary documents.

In the event of a virus attack, where rapid response is needed, a poor recovery plan affects patient data security and may even disrupt operation (IT may hastily launch a group policy to turn on the software firewall on all the computers in a department). In the case of data loss, the manufacturer is usually involved at some point in the recovery process, but it is usually not responsible for backing up the data. So the manufacturer ends up piecing the system back together with an IT department that may not be trained in the issues raised by software in medical devices. Gray areas are left in the responsibility for, and management of, the risks posed by the IT network. The standard, for example, would address recovery techniques that are both efficient and safe while protecting patient data; address areas of responsibility for the manufacturer and the users, as proposed in the current draft of AAMI/IEC 80001; and propose risk management techniques that are started by the manufacturer and implemented by the IT managers.

The risk management life cycle documentation is kept up to date, probably by the IT network administrators where the system is installed, with updates coming from the manufacturer as well. The manufacturer would actually start the risk management process by providing local IT with proper documentation for safe integration of a device into their network infrastructure (such as network guidelines), so that the IT operators have well-defined risk management techniques with well-defined areas of responsibility. The new IEC standard makes a great point about the integration of a medical device into an IT network being "a life cycle activity spanning design, implementation, use, reconfiguration, maintenance, and decommissioning."

The upcoming standard not only describes concepts in responsibility management and risk management, but also proposes risk management templates and descriptions of specific hazards found in today's networks, while providing the regulation and reference material used to address each issue.

AREAS OF NEED FOR STANDARDS
My experience in the field dealing with issues in the software driving medical equipment for Varian Medical Systems, a worldwide leading manufacturer of radiation oncology equipment, has underlined a few areas that I think deserve to be addressed. As more and more hospitals and clinics go paperless, now is the time to address these issues. IEC 80001 is not final yet and may benefit from some of these suggestions; others fit better elsewhere (via new or updated standards).

Software PMIs (planned maintenance inspections), as opposed to hardware PMIs, should be addressed in a contractual manner with defined roles and responsibilities. For example, an S-PMI could include checking and installing mandatory fixes and releases (OS, hardware BIOS, and drivers) from the manufacturer of the servers and workstations running the software. Vendors as well as hospital IT would have to work together to decide which updates to apply, following testing. For example, a hospital may install a server that includes a RAID controller; three months later, the manufacturer of the server releases a new set of drivers and card BIOS to address a potential risk such as loss of data.

Currently, the only way to control "old" software is for a manufacturer to issue a statement assigning a deadline for continued support of a software version. Once that deadline is reached, the software is still usable and still grandfathered into compliance, even though the manufacturer no longer supports it. It is up to the users to decide when they will invest in a newer revision. This leaves huge gaps in safety that are not acceptable in the higher classes of safety critical devices. For example, old software running on aging hardware and an OS that is also no longer supported, left on a hospital network with Internet access, poses a security risk and opens the door to issues that compromise safety. It also leaves open the issue of responsibility in the event of an accident. The upcoming IEC 80001 proposes that the risk management lifecycle cover use of the device all the way to decommissioning, so good risk management would unearth the issue. However, regulatory oversight is probably needed in this area.
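To make the gap concrete, the fragment below sketches the kind of end-of-support check a hospital risk file could automate. The product names and support deadlines are entirely invented.

    # Flag installed software past its vendor support deadline; all names
    # and dates here are hypothetical examples.
    from datetime import date

    support_ends = {
        "TreatmentConsole 7.1": date(2008, 6, 30),
        "TreatmentConsole 8.0": date(2011, 12, 31),
    }

    installed = "TreatmentConsole 7.1"
    if date.today() > support_ends[installed]:
        print(f"{installed} is past end of support; escalate in the risk file.")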

The Internet has enabled a much faster way to provide service, especially for software issues, with remote support to deploy, patch, and train. HIPAA has laid out rules to protect patient information, and they apply to remote support. Apart from HIPAA rules, consideration of safety is left to hospitals and device manufacturers. This often creates a conflict where hospital policy dictates the use of one particular remote access application, or forbids its use altogether, while the device manufacturer requires a different one. Standardized guidelines for remote access software methods and processes are vital for the safe use of this technology; addressing remote software only as a risk management item may not be best for the industry.

Some have proposed that the regulation should also encourage software process improvement (SPI) [2], highlighting what a device manufacturer is doing to improve upon existing software processes. The aviation and automobile industries already have such programs in place, but the medical industry only emphasizes compliance with existing regulation [2]. By driving SPI through standards, the hospital and the vendor may even work together to achieve quality, through standard processes established between the two parties that focus on SPI.

Software is required to comply with FDA 510(k) standards only if it is attached to a medical device. Other software (patient management, for example) has no compliance requirements (apart from HIPAA rules, due to the nature of the information stored): the guidance applies to any "…software used as components in medical devices, to software that is itself a medical device, and to software used in production of the device or in implementation of the device manufacturer's quality system" [6]. If hospitals choose to comply with IEC 80001, they may require that vendors also comply. I think this would improve the quality of EMR (Electronic Medical Record) systems, for example, and add a level of standardization across the many systems a busy hospital IT department manages.

At the same time, the new standard should encourage storing patient data with data interchange and interoperability in mind. For example, bioinformatics has made data mining an integral part of the development of cures by doctors, researchers, and scientists. Requiring, for example, that patient information be stored in long-term archives in a secure, DICOM-compliant format could be part of the standard: from the point of view of integration between the different areas of a hospital, and from the point of view of patient quality of care, making the standard emphasize data interchange characteristics could help drive acceptance of the new standard.
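As a sketch of what a machine-readable long-term archive could mean in practice, the fragment below checks that archived files still parse as DICOM and carry a few expected tags. It assumes the third-party pydicom library; the file path and the chosen tag subset are illustrative.

    # Archive-integrity check using pydicom; path and tag list are examples.
    import pydicom

    REQUIRED_TAGS = ["PatientID", "Modality", "SOPClassUID"]

    def archive_entry_ok(path: str) -> bool:
        try:
            ds = pydicom.dcmread(path)  # fails if the file is not DICOM
        except Exception:
            return False
        return all(hasattr(ds, tag) for tag in REQUIRED_TAGS)

    print(archive_entry_ok("archive/plan_0001.dcm"))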

Finally, the industry would benefit from guidance for critical software component reuse, to reduce costs and speed up development in the long run. This has been addressed by the FAA in aviation for certain types of software (for example, AC 20-148 addresses hardware-independent components of an operating system that are reusable). Instead of validating an entire system, software reuse would focus on validating individual pieces of software that can later be integrated into a bigger safety critical system.

I imagine that the concept can be built on the existing classification system (Classes A, B, and C) that IEC 62304 introduced (ref. page 11). A software system can be decomposed into software items, each inheriting the safety classification of the system. This can be seen as the first implementation of the idea of software components in standards. Adding requirements for the reuse and continuous integration of software components based on the classification makes sense.

This is like developing safety critical objects in object-oriented software engineering. For example, generating a DO-178B Level A (the most critical level in aviation) kernel and surrounding libraries requires a huge effort, but because the result can then be reused in many operating systems at any level of concern, it provides long-term savings in effort, cost, and time [11]. Leanna Rierson, in a thesis titled "Software Reuse in Safety-Critical Systems" [12], makes a great case to the FAA (Federal Aviation Administration) for reusing software in critical systems. She is the leader of the FAA's Technical Reusable Software Team (TRUST), whose purpose is to identify issues of software component reuse and to develop guidelines to address those issues.

She first notes that reusing software is potentially dangerous. It has been done many times in the past: the Ariane 5 rocket reused a software component from Ariane 4, which proved disastrous because the launch characteristics were different and the software component could not handle the differences. Another example is the Therac-25 accidents, where software from the Therac-20 was reused. On the T-20, the error in the software only caused blown fuses once in a while; on the T-25, it caused the deaths of multiple patients.

As software becomes more complex and more widely used, reuse is inevitable and already happens de facto. Instead of being treated as a problem, software reuse in critical systems needs to be addressed, and it can potentially save effort, cost, and time both during development and during regulatory review.

The first important distinction to make is that reusable software is intended from the start to be a component that will be reused, and it needs to be treated as such. It is not software salvaging, or the mere parsing of pieces of code from a previous software implementation. Rierson discusses seven concepts of software reuse in her thesis: planning for reuse, domain engineering (architectures that contribute to reuse), software components, object-oriented technology, portability, COTS (commercial off-the-shelf) software, and product service history.

Software reuse with safety requirements raises questions such as how much re-verification is needed, what interface documentation is required, what needs to be planned, and how to handle object-oriented languages and components like Java; the list goes on.

CONCLUSION
Rigorous software development and verification practices bring the benefits of high quality, performance, safety, and reliability [4]. In software development designed for safety, the first step is to determine the level of criticality of a piece of software by determining the impact of its failure conditions. The subsequent activities are meant to verify that the failure conditions have been addressed properly. "The heart of safety-critical software development lies in processes and techniques for verification" [4]. Validation then demonstrates that the software meets the requirements for its intended use. Traceability is an important part of the process, as it traces the implementation of the requirements throughout the development activities. These concepts (defining requirements and developing tools for verification, validation, and traceability) would benefit any software development company, even one not under a direct requirement to comply with regulation. And because risk cannot be avoided, risk management techniques are employed from conception to decommissioning of a device. The software industry would reap the benefits of quality, robustness, performance, and good documentation from adopting techniques used for safety-critical software.

Where safety is a requirement, regulation was the first step in improving the software in medical devices, following the achievements of other industries (such as automotive and aviation). To support regulation and its application, the recognition of standards in software development has shown the need for software standards tailored to the medical device industry. This is still an ongoing effort, as some areas of software engineering practice in the medical industry still need to be addressed, and the release in the next few years of AAMI/IEC 80001 will be a welcome addition to answer current issues, especially risk management from inception to decommissioning of the software, including the use of medical devices on IT networks. As more hospitals and outpatient clinics go paperless, the industry needs to address the issue of protecting patient data on network-attached medical devices.

As outlined in section 5, other areas, such as safety critical software reuse in medical devices, still need to be addressed. In addition, the relationship between hospitals adopting the new standard and their vendors is essential to improvement, and both would benefit from adopting it. The relationships and procedures of in-house IT departments and vendors can be harmonized to contractually address today's challenges of integrating medical devices onto hospital networks; the hospital would get better service throughout the entire lifecycle of the products it chooses, and the vendors would set a higher mark for quality software engineering.

7. REFERENCES
[1] Bowee, Matthew W., Paul, David L., and Nelson, Kay M. A Framework for Assessing the Use of Third-Party Software Quality Assurance Standards to Meet FDA Medical Device Software Process Control Guidelines. IEEE, November 2001.
[2] Mc Caffery, F. and Coleman, G. The Need for a Software Process Improvement Model for the Medical Device Industry. International Review on Computers and Software, Vol. 2, No. 1, January 2007. Copyright 2007 Praise Worthy Prize S.R.L.
[3] Guidance for the Industry and Staff: General Principles of Software Validation. CDRH, US FDA, January 11, 2002.
[4] Budden, Timothy. Why Safety-Critical Software Development Processes Make Sense Even If Not Required. COTS Journal, September 2003.
[5] Eagles, Sherman and Murray, John. Medical Device Software Standards: Vision and Status. MDDI, May 2001.
[6] Guidance for the Content of Pre-market Submissions for Software Contained in Medical Devices. CDRH, US FDA, May 11, 2005.
[7] Larrick, Kurt. Time to Reboot: New Software Standard to Replace SW68. Biomedical Instrumentation and Technology, September/October 2006.
[8] Gerber, Christopher. Introduction into IEC 62304 Software Life Cycle for Medical Devices. September 4, 2008.
[9] Gee, Tim. IEC 80001 – an Introduction. May 26, 2008. http://medicalconnectivity.com/2008/05/26/iec-80001-an-introduction/
[10] Cuff, Timothy and Nelson, Steven. Developing Safety-Conscious Software for Medical Devices. Medical Device Link Archive, January 2003.
[11] Matharu, Jasvinder. Reusing Safety-Critical Software in Aviation. IEE Electronics Systems and Software, February/March 2006.
[12] Rierson, Leanna. Software Reuse in Safety-Critical Systems. Master's Thesis, Rochester Institute of Technology, May 1, 2000.



SHOULD PHYSICAL THERAPISTS CONSIDER PULMONARY FUNCTION IN ASTHMATIC CHILDREN WHEN IMPLEMENTING AN AEROBIC EXERCISE PROGRAM?

Alan Bunger29, Dana Davis29, Sunaina Dhawan29, LaNedra Lee29 and Edilberto A. Raynes30
Tennessee State University, USA

ABSTRACT
When implementing an endurance physical therapy program for children with asthma, the physical therapist should use pulmonary function testing to monitor the condition of the child during activity. Pulmonary function testing is a tool that allows the therapist to rely on scientific data rather than child-reported symptoms during exercise. The testing can determine the amount of exertion, the level of endurance, and the safety of the activity throughout the therapy program. A synthesis of twenty peer-reviewed articles explains the importance of physical activity for asthmatic conditions and the demands placed on the body, which require constant monitoring for performance levels and safety. All of the literature is at level 2a or above on the Oxford Levels of Evidence. The significance of this evidence-based research applies to all pediatric physical therapists, explaining the importance of pulmonary function testing for safe and effective therapeutic prescription.

Keywords: Pulmonary Function Test, Exercise, Asthma, Swim Training and Asthma, Children with Asthma, Lung Function, Asthma Exacerbation, Aerobic Training.

BACKGROUND OF THE STUDY
Asthma is a chronic disease that involves recurring attacks of breathlessness and/or wheezing of varying frequency and severity (World Health Organization [WHO], 2008). If the passageway into the lungs becomes hypersensitive, the muscles surrounding the bronchioles constrict, thereby limiting air transport/airflow. Concurrently, the bronchioles become inflamed and secrete an increased amount of mucus, which creates an exacerbation of symptoms (National Heart Lung Blood Institute [NHLBI], 2008b). Certain triggering factors make children with asthma susceptible to symptomatic episodes: respiratory infections, allergic reactions, sudden temperature change, smoke, stress, and exercise (American Lung Association [ALA], 2007). Exercise, although a recognized asthmatic trigger, also serves as a treatment. For example, children attending a supervised aerobic regimen have found exercise beneficial (Neder, Nery, Silva, Cabral, & Fernandes, 1999). Along with pharmacological intervention, exercise has become more widely accepted as a treatment protocol. A proper understanding of the risks involved in exercise is paramount to successfully prescribing the dose of treatment (Cooper, Radom-Aizik, Schwindt, & Zaldivar, 2007). The study recognizes exercise-induced asthma (EIA) during an aerobic exercise treatment as a compliance threat because it can elicit exacerbations of asthmatic symptoms, hindering treatment. It also examines the causes and effects of EIA in order to set parameters for exercise monitoring and to ensure patient safety.

Epidemiology of Asthma
In the United States, the number of children who suffer from asthma has grown to 6.7 million, which makes up 9.1% of the total pediatric population (Center for Disease Control and Prevention [CDC], 2009). Asthma was the third leading cause of pediatric hospitalizations in 2007, and the annual healthcare expenditure for it accounts for 14.7 billion dollars in the United States (ALA, 2007). Globally, it is the leading chronic disease in children regardless of a country's level of development (WHO, 2008).

Classification of Asthma
The National Asthma Education and Prevention Program (NAEPP) classified asthma by symptomatic level based on three categories: severity of symptoms, nighttime symptoms, and forced expiratory volume in one second (FEV1). Mild intermittent classification involves fewer than two symptomatic episodes per week, two or fewer nighttime symptoms

29 Senior students of the Doctorate Program of Physical Therapy, Nashville, Tennessee.
30 Tenure-track Assistant Professor, Department of Physical Therapy, Tennessee State University. He is currently in a PhD program in Public Health specializing in Epidemiology at Walden University.



per month, and FEV1 of at least 80%. Mild persistent classification involves symptoms occurring more than two times per week but less than once per day, nighttime symptoms occurring more than two times per month, and FEV1 greater than 80%. Moderate persistent classification involves daily use of beta2 agonists, nighttime symptoms more than once per week, and FEV1 of 60 to 80%. Severe persistent classification involves continual symptoms, frequent nighttime symptoms, and FEV1 of less than 60% (NHLBI, 2008a).
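Because the severity bands are expressed partly as FEV1 cutoffs, that dimension of the classification can be stated precisely. The sketch below encodes only the FEV1 (% predicted) bands described above; the full NAEPP classification also weighs symptom and nighttime frequency, so this one-variable version is a simplification.

    # NAEPP-style severity by FEV1 (% predicted) only; a simplification,
    # since the full classification also uses symptom frequencies.
    def severity_by_fev1(fev1_pct: float) -> str:
        if fev1_pct < 60:
            return "severe persistent"
        if fev1_pct <= 80:
            return "moderate persistent"
        return "mild (intermittent or persistent)"  # symptoms disambiguate

    print(severity_by_fev1(72))  # -> moderate persistent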

Asthma as a Challenge
Currently, there is no cure for asthma (NHLBI, 2008b). Asthma is a continual disease that demands lifetime treatment. Although some symptoms alleviate over time with treatment, it is important to intervene when symptoms first begin (NHLBI, 2008b). If untreated, asthma can restrict the child's level of activity over a lifetime (WHO, 2008). The ALA (2007) reported that it was the leading cause of school absenteeism in 2003, accounting for 12.8 million missed school days. Although mortality is rare, attacks can be debilitating, and hospitalization is crucial (CDC, 2009). Asthma was also the third leading cause of pediatric hospitalizations in 2007, with an annual healthcare expenditure of 14.7 billion dollars in the United States (ALA, 2007). Therefore, asthma is a very serious disease for America's youth, requiring pulmonary function monitoring in order to maintain safe parameters for exercise.

Pulmonary Function Test: An Overview
Pulmonary function testing conducted concurrently with exercise treatment can offer feedback on the severity of the asthmatic symptoms, which can aid the clinician in regulating the exercise regimen (Alain et al., 2003). Multiple testing media, such as spirometry, peak expiratory flow rate (PEFR), and forced expiratory volume in one second (FEV1), are used to record the patient's specific pulmonary condition. Spirometry tests evaluate both the amount and the rate of air that the patient breathes in and out. The measures documented by PEFR and FEV1 determine the patient's expiratory ability. A peak flow meter is a common, easily portable device used by patients with asthma that measures the amount of expired air (NHLBI, 2008b). All of these pulmonary function tests help the clinician monitor the patient's condition and prevent any exacerbation of asthmatic symptoms. Standard values and parameters for each test allow the clinician to determine whether the patient's test performance indicates possible asthmatic symptoms.
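As an illustration of how standard values turn a reading into an actionable signal, the sketch below applies the widely used traffic-light peak-flow zones (green at or above 80% of personal best, yellow between 50% and 80%, red below 50%). The article names the tests but not these cutoffs, and the personal-best value here is invented.

    # Peak-flow zones; the cutoffs follow the common traffic-light scheme
    # and the personal-best value is a made-up example.
    def peak_flow_zone(reading: float, personal_best: float) -> str:
        pct = 100 * reading / personal_best
        if pct >= 80:
            return "green: continue activity"
        if pct >= 50:
            return "yellow: caution, reassess"
        return "red: stop exercise, intervene"

    print(peak_flow_zone(310, 420))  # -> yellow: caution, reassess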

Physical therapy intervention for children with asthma involves light to moderate cardiovascular exercise to allow the body to improve overall aerobic fitness and to decrease the level of corticosteroid use (Neder et al., 1999). A proactive approach of monitoring pulmonary function during exercise may decrease the amount of exacerbated symptoms and increase aerobic capacity. Decreasing the dangers of exercise will increase patient compliance: children who fear the onset of bronchiole constriction do not want to engage in activity that may exacerbate asthmatic symptoms. Children were chosen as the study population because they typically have an active lifestyle and are at the age of earliest detection of asthma onset. This study assesses the need for pulmonary function tests in regulating exercise intensity for children with asthma to prevent exercise-induced bronchiole constriction.

Sample Case Scenario
The following sample case scenario illustrates the relation of physical therapy, asthma, and pulmonary function tests.

J.B., an eight-year-old boy diagnosed with asthma, desires to play soccer but has episodes of shortness of breath and seeks consultation for an aerobic exercise program. Physical therapy intervention for this boy is structured light aerobic exercise. The boy is reluctant to participate in treatment because he knows exercise triggers his asthma attacks.

Research Question
With the foregoing background of the study and the sample case scenario, the researchers explore whether pulmonary function tests should be considered when implementing an aerobic exercise program in asthmatic children.

REVIEW OF LITERATURE
Physical therapy programs for children with asthma include a variety of low-level endurance activities to regain pulmonary function. When implementing the program, the therapist must know the dangers of exercise for this population to prepare for any symptom exacerbations. Pulmonary function testing can help regulate these symptoms by recording the level of exertion and endurance during an activity to improve the child's safety. The articles reviewed in this study were found in the PubMed, CINAHL, EBSCO, MEDLINE, and ProQuest databases. Key words used included: pulmonary function test, exercise,


asthma, swim training and asthma, children with asthma, lung function, asthma exacerbation, and aerobic training. Pulmonary function testing during physical therapy intervention can help monitor the patient's condition to improve the overall experience and quality of care.

Level of Exertion
The exertive demands placed on the body during exercise require a multitude of physiologic changes to carry out the activity. These physiologic changes from rest to an active state require that all body systems adapt and function properly. Asthma constricts the airways, making activity difficult or restrictive by not allowing adequate oxygen exchange. This can build fear and reluctance to participate in any form of exertive action or play. Hoskins et al. (2008) explain that children with asthma who perceive their illness as a limitation to physical activity tend to avoid activity because of the perceived exertion effects and the stigma that they may face from their peers after having such attacks. As physical therapists, it is important to ease anxiety about symptom exacerbations by providing proper administration and regulation. Ambrosius et al. (2001) found that healthy preschool children were able to complete spirometric pulmonary function tests with adequate coaching and instruction. This allows the therapist to administer the test without concern that the child cannot successfully handle it (Ambrosius et al., 2001). Cooper et al. (2007) conducted a meta-analysis drawing on a variety of studies suggesting that an exercise program could become harmful if not regulated. They explain that there is an inflammatory response to exercise that occurs due to the movement of leukocytes in the blood stream; this reaction to exercise occurs in normal human function. The chemical mediators are usually only responsive during the active stage of exercise, but some, e.g., neutrophils, last in the blood stream for hours after an exercise program. This means that the inflammatory danger can be expected during the bout of exercise, with an additional need for regulated post-exercise care (Cooper et al., 2007). A properly regulated exercise training regimen for children with asthma showed decreased exertion on the body, leading to adaptability (Cooper et al., 2007). For example, Rosimini (2003) reviewed studies of children 6 to 17 years of age in which a graduated swim training regimen was performed 30 to 60 minutes per day, two to five times a week. The studies revealed reduced exercise-induced bronchospasm.

Environment plays a pivotal role in triggering asthma symptoms. Sport involvement and active environments for children can occur in several settings, from a night of indoor ice skating to an early morning summer track meet. Holt, Moss, and Pelham (1999) argued that certain environments may induce bronchospasms in children with asthma. The researchers studied 21 children (13 children with asthma and 8 children without asthma). The children with asthma had a decrease in both FEV1 and FEVT% 5 minutes following hockey activity. However, there was not a similar decrease following gymnasium and pool activity of the same intensity. The study implied that the cold did not inhibit the quality of activity and performance (Holt, Moss, & Pelham, 1999). Similarly, Nielsen and Bisgaard (2005) studied 40 preschool children ages 2 to 5 years old, fitting the subjects with a facemask and a mouthpiece. The subjects were randomly assigned to either a cold air or a dry air test. The children achieved hyperventilation by competing against a computer-animated screen showing a rising balloon. The dry air test lasted six minutes with the same computer-animated interaction while the child received dry, room-temperature air from a tank. If the subjects presented with hyperventilation rates of three times the baseline, then they were retested an hour later. The tests revealed that cold air caused significantly greater bronchial hyperresponsiveness than dry, room-temperature air. The hyperventilation rates spiked more intensely and over a shorter duration in the cold air test compared to the dry air test. The testing showed that the cold air was an added bronchial stimulus constricting the airways, adding to the hyperventilation rates and intensity. Because these environmental variables exist, it is imperative that pulmonary function testing monitor symptomatic exacerbations. It provides the therapist with information on how intense the program can become before symptoms arise, prolonging therapeutic intervention. These functional tests can also help determine a safe and appropriate endurance training level according to the child's physical level.

Endurance
Endurance activities require the lungs and accompanying systems to produce adequate oxygen exchange for a prolonged period of time. Asthma can restrict the airways during endurance activities, decreasing the level of oxygen exchange due to the constricted bronchioles. This decrease in oxygen supply must be regulated through pulmonary function tests to provide safety for children during this type of activity. Pulmonary function testing has been shown to accurately depict the level of pulmonary performance in endurance activities. Atri et al. (2005) evaluated 36 patients ages 16 to 20 in an 8-week aerobic exercise program that included three 20-minute sessions of aerobic exercise per week with 15 minutes of warm-up and stretching exercise. The study showed that engaging in aerobic exercise can lead to an increase in FEV1 and forced vital capacity (FVC) in asthmatic patients (Atri et al., 2005). These increased pulmonary function test values directly correlate with the increased demand during such activities. Spirometry and flow volume analysis are pulmonary function tests that are easily transported and quick to use for evaluating the patient's pulmonary status in the activity vicinity. Endurance is



best evaluated by VO2 Max recordings following an exercise bout. VO2 Max records how well the body uses oxygen to function. By knowing the child's endurance level, the therapist can understand how long the child can safely proceed with an exercise protocol. Bates, Hallstrand, and Schoene (2000) contend that exercise rehabilitation could improve ventilation and aerobic fitness among patients with asthma. The authors reported that after the program, the patients with asthma had increased ventilatory capacity and decreased hyperpnea. A related study (Neder et al., 1999) was conducted to explore whether there was any change in aerobic capacity in children with moderate to severe asthma following an aerobic training regimen. The study consisted of forty-two children in Sao Paulo, Brazil, ranging in age from 8 to 16 years. The training consisted of a two-month program of indoor activities. Three times per week the children performed activities such as 10-15 minutes of calisthenics and warm-up, 30 minutes of continuous cycle ergometer training, and then 5 minutes of cool-down. They found that spirometric variables, VO2 Max, VO2AT, and O2 Pulse Max in group 1 were much lower than in the control group (Neder et al., 1999). This provides the therapist with the knowledge that pulmonary function testing not only tests for exacerbation of asthma symptoms but also serves as an endurance performance measure.

Clark and Cochrane (1990) studied the effects of submaximal physical exercise on cardiorespiratory performance in patients with asthma. They enrolled 14 males and 22 females, ranging in age from 16 to 40 years, in a three-month training program involving 30-minute sessions three times a week. Spirometry and flow volume analysis were performed to record progressive incremental exercise on an electronically braked bicycle ergometer for ten minutes. Other measurements included the dyspnea index, heart rate, tidal volume, respiratory frequency, minute ventilation, sense of breathlessness, and mixed expiratory concentrations of carbon dioxide and oxygen. After training there was a significant increase in mean maximal oxygen uptake, oxygen pulse, and anaerobic threshold. There was also a decrease in breathlessness scores, blood lactate, CO2 output, and minute ventilation during submaximal exercise in the training group, and no difference in the control group. The physiological changes monitored by pulmonary function testing provide the child with sound evidence that they are being taken care of by a professional, which allows them to complete their activities. The physical training regimen allows children to better cope with their condition and possibly change their attitude toward asthma (Bogaard et al., 2000). This decreased level of anxiety will improve the safety of physical therapy intervention and allow the child to produce adequate performance data without worry of skewed results due to psychological distortion.

Safety of Performance
Once the child is comfortable with the intervention and the surrounding environment, they will perform at a higher level. Baraldi et al. (1997) studied eighty children with mild to moderate asthma, ages seven to fifteen years, and found no difference in ventilation and metabolic values between the asthmatic and healthy groups. The children completed a progressive exercise test performed on a treadmill, during which minute ventilation, respiratory frequency, oxygen uptake, carbon dioxide output, and heart rate were measured. A dyspnea index was calculated from predicted maximal voluntary ventilation. The children also completed a questionnaire on the type and duration of their physical activity (Baraldi et al., 1997). While we know that asthma can alter pulmonary function, the study revealed that exercise at such a high performance level can be performed safely with similar results.

Basaran et al. (2006) included the Pediatric Asthma Quality of Life Questionnaire (PAQLQ) to assess limitation of activity and quality of life. The study was conducted on children 7-15 years of age. A six-minute walk test (6MWT), physical work capacity at a heart rate of 170 beats per minute (PWC 170 test), and the PAQLQ were administered in both groups. For the exercise group, a moderate-intensity basketball training program was designed for 8 weeks, 3 times a week, for sixty minutes: fifteen of the sixty minutes were spent on warm-up exercises, thirty to thirty-five minutes on basketball, and ten minutes on flexibility exercises. There was no exercise plan for the control group, although both groups were encouraged to do respiratory exercises at home. PAQLQ scores, the PWC 170 test, the 6MWT, spirometer tests, medication scores, symptom scores, and cycle ergometry were used to assess the results of the study. PAQLQ scores showed improvement in both groups, but there was significant improvement in the exercise group, and PFT values improved only in the exercise group. The moderate-intensity basketball training program exhibited beneficial results for asthmatic children, especially in their quality of life and exercise capacity (Basaran et al., 2006). The more positive psychological product of a regulated exercise regimen allows children to flourish in other aspects of their daily lives. It can also establish a higher level of trust in the caregiver-patient relationship. If the child begins to understand that the therapist cares for their safety, they will be more receptive to other treatment as therapy continues. This will reveal the patient's true performance level, unmasked from the fear that once restrained it.

Using spirometry to assess forced vital capacity (FVC), forced expiratory volume in one second (FEV1), and forced expiratory flow, Pianosi and Davis (2004) studied 64 children 8-12 years of age to evaluate physical activity and severity of asthma. A histamine inhalation challenge test was used to assess airway responsiveness. A bicycle exercise test was used to assess maximal aerobic power. Ventilation and gas exchange were measured with an exercise testing module, heart rate was monitored with a tripolar lead system, and oxygen saturation was measured by pulse oximetry. Questionnaires were used to assess habitual activity level,


perceived activity limitations, perceived competence in physical activity, and attitudes toward physical activity. Asthma severity was assessed from spirometric values, degree of airway hyperresponsiveness, and amount of medication. All 64 children were assigned to the exercise group, and their results were compared with those of 77 healthy children, ages 7-12 years, who had completed the same exercise tests as part of other studies. Although higher asthma severity indicated greater medication needs in obese children with asthma, there was no correlation between asthma severity and aerobic fitness. The maximal aerobic power of asthmatic children was related to perceived competence at physical activity rather than severity of asthma. The higher scores on the asthma severity scale in obese children were related to higher doses of medication intake, as these children did not perceive themselves as capable of exercise (Pianosi & Davis, 2004). Spirometry in this study was trusted to safely record pulmonary function, which in turn allowed the children to feel safe when performing the activities regardless of asthma severity.

METHODOLOGY
This literature review on the use of pulmonary function testing in physical therapy prescription attempted to find the most credible research available in order to provide the most reliable collation of information possible. All articles in this study were classified as level 1a, 1b, or 2a according to the Oxford Levels of Evidence (Oxford Centre for Evidence-Based Medicine). According to the Centre for Evidence-Based Medicine, the Oxford Levels of Evidence scale systematically orders the different types of research for each question type. Level 1 evidence is the highest regarded quality of research, corresponding to systematic reviews of randomized control trials. Level 2 evidence is the systematic review of a cohort group with little to no randomization of the population. Level 3 evidence reviews a case study to gather research. Level 4 uses case series with poor quality control in the population study. Level 5 refers to expert opinion without any literary evidence to support claims. This provides the reader with the information needed to understand what types of research were gathered and synthesized in the analysis. All of the articles in this study were arranged by how pulmonary function testing revealed the different themes of exercise in order to adequately answer our research question. These themes include how pulmonary function testing records the intensity of exercise, exercise endurance capacity, and the safety involved with exercise intervention. The authors found 25 articles bearing on the research question; of the 25 articles, 18 were used in the synthesis of this literature review. Articles were excluded if they focused only on physical activity and asthma without mention of pulmonary function. Two articles were excluded because they focused on racial differences in children with asthma. Other articles were excluded because they focused only on the intervention of a certain sport and its interaction with asthma. The authors wanted to focus primarily on pulmonary function levels when children with asthma participated in activity; if an article added other variables as a main theme, it was discarded out of concern for outside variables affecting the research question. The articles in the literature review were drawn from the PubMed, CINAHL, EBSCO, MEDLINE, and ProQuest databases.

RESULTS
The chart below presents the most significant literature on the importance of pulmonary function test implementation in physical therapy prescription for children who have asthma. All the research was gathered from peer-reviewed sources, with no article below an Oxford 2a level of evidence. Eighteen peer-reviewed articles were gathered in order to make a viable argument that pulmonary function testing is important to physical therapists. These articles explain the importance of monitoring level of exertion, endurance duration, and safety of exercise.



Title: Pulmonary function of children with asthma in selected indoors sports environments (Holt, L., Moss, M., & Pelham, T.)
Rank: 1a
Objective: Examines the pulmonary function of young males before and after exercise in three indoor sport environments: ice rink, gymnasium, and swimming pool.
Results: Overall, the children with asthma had a decrease in both FEV1 and FEVT% 5 minutes following hockey activity. There was not a similar decrease following gymnasium and pool activity of the same intensity. Children without asthma maintained normal pulmonary function in all three environments.
Discussion: Certain environments may induce bronchospasms in children with asthma. The study identifies certain indoor sport environmental conditions that may play a role as a factor. Swimming at the prescribed exercise level is least likely to interfere with pulmonary function.

Title: Dangerous exercise: lessons learned from dysregulated inflammatory responses to physical activity (Cooper, D.M., Radom-Aizik, S., Schwindt, C., & Zaldivar, F.)
Rank: 1a
Objective: Examines the physiological effects of exercise on the body; there are further changes that occur that could have a detrimental effect, and an exercise program could become harmful if not regulated.
Results: Possibility of exercise-induced anaphylaxis. Exercise initiates a sensitization of allergens from previously ingested food into the blood stream, causing an allergic reaction. Induced bronchoconstriction occurred after vigorous exercise in 60-80% of asthmatic children.
Discussion: Leukocyte levels may be abnormal; obstruction is due to inflammation from the leukocyte elevation causing the bronchial tubes to become constricted. This preventative measure allows a safe recovery and promotes continuance of further exercise.

Title: Effects of physical exercise on quality of life, exercise capacity and pulmonary function in children with asthma (Basaran, S., Guler-Uysal, F., Ergen, N., Seydaoglu, G., Bingol-Karakoc, G., & Altintas, D. U.)
Rank: 1a
Objective: Investigates the effects of regular submaximal exercise on quality of life, exercise capacity and pulmonary function in asthmatic children.
Results: PAQLQ scores showed improvement in both groups, but there was significant improvement in the exercise group. Medication scores showed improvement in both groups. Peak expiratory flow values improved only in the exercise group.
Discussion: A moderate-intensity basketball training program for 8 weeks, 3 times a week, for sixty minutes exhibited beneficial results for asthmatic children, especially their quality of life and exercise capacity.

Title: Pulmonary Function Test in Preschool Children with Asthma (Alain, B., Alberti, C., Amsallem, F., et al.)
Rank: 1a
Objective: Compares PFT data between healthy and asthmatic preschoolers, giving information on how pulmonary function differs between healthy and asthmatic children in order to address adopting an exercise protocol specific to children with asthma.
Results: There was not a difference in the FRC values of the control subjects and the children with asthma. The children with asthma had increased Rint measurements and an increased difference in predicted resistance values pre- and post-bronchodilator administration.
Discussion: The differences in PFT values noted are in line with what we would expect in a child with an obstructive lung disease. This informs how pulmonary function differs between healthy and asthmatic children and supports an exercise protocol specific to children with asthma.

Title: Short term effects of aerobic training in the clinical management of moderate to severe asthma in children (Neder, J. A., Nery, L.E., Silva, A.C., Cabral, A.L., & Fernandes, A.L.)
Rank: 2a
Objective: Explores whether there was any change in aerobic capacity in children with moderate to severe cases of asthma following an aerobic training regimen.
Results: Spirometric variables, VO2 Max, VO2AT and O2 Pulse Max in group 1 were much lower than in group 2. An increase in maximal exercise occurred in the trained group. The maximal exercise recordings do not associate well with daily activities for children, because children usually perform not at such sustained high levels but in short bursts of activity. The less fit a child was at the beginning of the study, the higher the gains in post-program testing.
Discussion: These results show that an exercise therapy program can be most beneficial in the most unfit children with moderate to severe asthma. It is safe to conduct a program, and the short-term respiratory function effects can also lead to a decrease in the amount of corticosteroid ingestion.

DISCUSSION
The case example of the eight-year-old boy with asthma exemplifies how this pulmonary function testing research provides confidence in the physical therapy profession. The boy desires to play soccer yet he is fearful of intensifying asthmatic


symptoms. After visiting his pediatrician he is cleared for exercise and is sent to a local physical therapist. The physical therapist explains to J.B. that exercise is good for his body and that he can do all the activities that other children do in soccer. The physical therapist provides J.B. with a series of variable intensity and endurance tests using a spirometer to gauge pulmonary function. The therapist then demonstrates how the spirometer operates and what it is used to test. J.B. better understands and tries the tests using the spirometer. J.B. begins to feel more comfortable with the active testing and trusts both the therapist and the spirometer. J.B. then explains that he knows his body can handle the physical demands of soccer but he is worried that he won't be able to predict an asthma attack. The therapist assures J.B. that he will receive a spirometer for his parents to take to games to monitor his performance during breaks.

The example of the spirometer intervention has allowed the therapist to adequately assess the physical capability of the child while providing the child the confidence needed to continue therapy. Through extensive research, the therapist understood that spirometry measurements adequately recorded exercise performance and showed improved performance through a suitable program (Neder et al., 1999). The physical therapy program differs for children with asthma compared to those without the disease: it must include a less intense regimen to avoid any exacerbation of asthmatic symptoms (Alain et al., 2003). An uneducated therapist might prescribe a standard endurance protocol for all children regardless of condition, causing more harm than good; this therapist, however, understood the physiological risks that exercise introduces through anaphylaxis or bronchoconstriction (Cooper et al., 2007). The therapist understood J.B.'s goal of returning to soccer and therefore tailored the exercise program to be conducive to an outdoor activity environment. Through the research literature, the therapist understood that the environment plays a large factor in asthmatic conditions in children (Holt, Moss, & Pelham, 1999). The therapist could prescribe a perfect exercise regimen for the child, but without considering environmental influence he would not have properly helped J.B. Now that J.B. has an appropriate exercise program, he will begin to experience both an increased quality of performance and overall quality of life. The therapist knew that not only would J.B. benefit on the field, but he would not need as much asthma medication (Basaran et al., 2006). With detailed knowledge of current research, the therapist simultaneously met J.B.'s needs and strengthened the physical therapy profession. The use of scientific literature to prescribe physical therapy programs lends validity to treatment and integrity to the profession.

CONCLUSION
The researchers gathered 18 peer-reviewed articles of Oxford level 1 and 2 evidence suggesting that physical therapists should consider pulmonary function tests in asthmatic children when implementing an aerobic exercise program. This provides the clinician with the tools necessary to adequately assess asthmatic symptoms when exercise intervention is introduced to the patient. It can help in predicting the onset of an asthma attack in the clinic, which can reduce anxiety in children, allowing for greater receptiveness to exercise. Pulmonary function testing also provides patients with the assurance that they know their physical limitations and can engage in activity within those limits. Patients will have a visual cue for the onset of symptoms through pulmonary function testing, which will help alleviate the fear of the unknown. This will aid in developing an exercise program by allowing the therapist to rely on concrete asthma signs rather than the child's description of symptoms and feelings. The exercise program will then have reliable data, which can increase the reproducibility of testing and ultimately help the child increase function.

REFERENCES
Alain, B., Alberti, C., Amsallem, F., Bellet, M., Beydon, N., Boule, M., Chaussain, M., Denjean, A., Gaultier, C., Matran, R., Pin, I., and the French Paediatric Programme Hospitalier de Recherche Clinique Group (2003). Pulmonary Function Test in Preschool Children with Asthma. American Journal of Respiratory and Critical Care Medicine, 168, 640-644.
Ambrosius, W., Bieler, H., Christoph, K., Eigen, H., Grant, D., Heilman, D., Tepper, R., & Terrill, D. (2001). Spirometric Pulmonary Function in Healthy Preschool Children. American Journal of Respiratory and Critical Care Medicine, 163, 619-623.
American Lung Association. (2007). Asthma and Children Fact Sheet. Retrieved from http://www.lungusa.org/site/apps/nlnet/content3.aspx?c=dvLUK9O0E&b=4294229&ct=3227479.
Atri, A., Azad, F., Farid, R., Ghafari, J., Ghasemi, R., Khaledan, A., Khoei, T., & Rahimi, M. (2005). Effect of aerobic exercise training on pulmonary function and tolerance of activity in asthmatic patients. Iranian Journal of Allergy, Asthma, and Immunology, 4(3), 133-138.
Baraldi, E., Filippone, M., Santuz, P., & Zacchello, F. (1997). Exercise performance in children with asthma: is it different from that of healthy controls? European Respiratory Journal, 10, 1254-1260.
Basaran, S., Guler-Uysal, F., Ergen, N., Seydaoglu, G., Bingol-Karakoc, G., & Altintas, D. U. (2006). Effects of physical exercise on quality of life, exercise capacity and pulmonary function in children with asthma. Journal of Rehabilitation Medicine, 38, 130-135.



Bates, P., Hallstrand, T., & Schoene, R. (2000). Aerobic conditioning in mild asthma decreases the hyperpnea of exercise and improves exercise and ventilatory capacity. Chest, 118, 1460-1469.
Bogaard, J., Hessels, M., Veldhoven, N., Vermeer, A., & Wijnroks, L. (2000). Children with asthma and physical exercise: effects of an exercise programme. Clinical Rehabilitation, 15, 360-370.
Centers for Disease Control and Prevention. (2009). Asthma. Retrieved from http://www.cdc.gov/asthma/faqs.htm#what.
Clark, C., & Cochrane, L. (1990). Benefits and problems of a physical training programme for asthmatic patients. Thorax, 45(5), 345-351.
Cooper, D.M., Radom-Aizik, S., Schwindt, C., & Zaldivar, F. (2007). Dangerous exercise: lessons learned from dysregulated inflammatory responses to physical activity. Journal of Applied Physiology, 103, 700-709.
Holt, L., Moss, M., & Pelham, T. (1999). Pulmonary function of children with asthma in selected indoors sports environments. Pediatric Exercise Science, 11, 406-412.
National Heart, Lung, and Blood Institute. (2008a). Key Clinical Activities for Quality Asthma Care: Recommendations of the National Asthma Education and Prevention Program. Morbidity and Mortality Weekly Report. Retrieved from http://www.cdc.gov/asthma/pdfs/RRAsthmaCare.pdf.
National Heart, Lung, and Blood Institute. (2008b). The Diseases and Conditions Index: Asthma. Retrieved from http://www.nhlbi.nih.gov/health/dci/Diseases/Asthma/Asthma_WhatIs.html.
Neder, J. A., Nery, L.E., Silva, A.C., Cabral, A.L., & Fernandes, A.L. (1999). Short term effects of aerobic training in the clinical management of moderate to severe asthma in children. Thorax, 54, 202-206.
Nielsen, K.G., & Bisgaard, H. (2005). Hyperventilation with cold versus dry air in 2- to 5-year-old children with asthma. American Journal of Respiratory and Critical Care Medicine, 171, 238-241.
Oxford Centre for Evidence-Based Medicine. (2009). Levels of Evidence. Retrieved from http://www.cebm.net/index.aspx?o=1025.
Pianosi, P.T., & Davis, H.S. (2004). Determinants of physical fitness in children with asthma. Pediatrics, 113, 225-229.
Rosimini, C. (2003). Benefits of swim training for children and adolescents with asthma. Journal of the American Academy of Nurse Practitioners, 15(6), 247-252.
World Health Organization. (2008). Asthma Fact Sheet. Retrieved from http://www.who.int/mediacentre/factsheets/fs307/en/index.html.



GEM AND THE Ψ(2S)

D. White Roosevelt University, USA

ABSTRACT
The general form for the absorption cross-section governing all quantum systems is utilized in conjunction with the basic precepts of the Gluon Emission Model (GEM) in order to investigate the nature of the Ψ(2S) → something and Ψ(2S) → (J + something) transitions. Specifically, as GEM has shown to be the case regarding the J, we assume that the Ψ(2S) is initially created as a c (charm quark) c* (anti-charm quark) structure, which quickly transitions to an excited s (strange quark) s* (anti-strange quark) system, the form factor for so doing being slightly less than one, indicative of a small fraction of the original cc* states "staying behind", so to speak, to decay directly into various end products. Unlike the J, however, whose cc* branch decays exclusively into leptons, we will encounter evidence that the cc* branch of the Ψ(2S) must decay into both hadrons and leptons in order for GEM to correctly yield the hadronic and leptonic partial widths of the Ψ(2S) → something transitions. Similar to the J, where the fraction of cc* states which "lag behind" to decay into leptons is ~ (1/9), the "lagging" cc* states associated with the Ψ(2S) constitute ~ (1/4π) of the original complement of same. Finally, in terms of the directly calculable square of the interaction potential matrix element associated with the Ψ(2S) → something transitions, the relative strength of same as associated with the Ψ(2S) → (J + something) transitions is estimated.

Keywords: Ψ(2S); Gluon Emission Model; Leptonic Partial Width; Hadronic Partial Width

INTRODUCTION
In all quantum systems in which natural decay occurs between an excited level (k) and another level (s), the general form of the absorption cross-section associated with the decay process is given by

σ ∝ α Vks² (1/m²)(1/ωks) δ(ω − ωks),    (1)

where α represents the fine-structure constant = (1/137.036), Vks² represents the square of the matrix element associated with the interaction potential, Vks, m represents the mass of the system, and ωks represents the photon frequency. (See, for example, Merzbacher (1970), p. 486.) Equation 1 can be employed to develop a general formula for calculating the width of any vector meson in its ground state. (See White (2008 a).) As applied to the case at hand, such as, for example, the ρ(776), the φ(1019), or the J(3097), m becomes the mass, Mv, of the relevant vector meson, α is replaced by αs = the strong coupling parameter = 1.2[ln(Mv/50 MeV)]⁻¹ (see White (2008 a), Eq. 2), Vks² is assumed to be proportional to Σi qi⁴, where qi represents the charge of the quark of type "i" making up the vector meson, and ωks = Mv (in "natural units"), as in the present context all decays result in the complete dissolution of the meson considered. Utilizing appropriate experimental data (see White (2008 a), Eq. 4) we find the hadronic partial width of any vector meson (v) in its ground state as determined by the Gluon Emission Model (GEM) to be given by

Γv(GEM) ≈ (1960 MeV)(776 MeV/Mv)³ (Σi qi⁴)[ln(Mv/50 MeV)]⁻¹    (2)

In the present work, however, we wish to employ GEM as a tool for investigating the nature of the Ψ(2S), which is a vector meson, but not one in its ground state. Rather, the Ψ(2S) appears to be an excited state of the J(3097), as Ψ(2S) → (J + something) transitions have been observed and catalogued in great detail. At the present juncture, therefore, we would like to introduce some special notation designed to simplify the discourse from here forward, viz., we now identify the J(3097) as "the J" and the "something" in "Ψ(2S) → (J + something)" or in "Ψ(2S) → something" as simply "x". In terms of the new notation, then, if we are to employ GEM to investigate the nature of the Ψ(2S) → (J + x) transitions, we will need to (at least) replace one of the "MΨ(2S)" factors in the denominator of the "(776 MeV/Mv)³" (v = Ψ(2S) here) term of Eq. 2 by "(MΨ(2S) − MJ)" (recall the role of "ωks", now playing the role of the gluon frequency, in Eq. 1). Hence, since MΨ(2S) = 3686 MeV and MJ = 3097 MeV, if all else in Eq. 2 remains the same, we obtain the hadronic partial width of the Ψ(2S) → (J + x) decay as determined


by GEM, under the assumption that 100% of the originally produced cc* (charm quark / anti-charm quark) states constituting the Ψ(2S) undergo a transition to excited ss* (strange quark / anti-strange quark) states instantaneously with a form factor for so doing equal to one, as:

Γ21(GEM) ≈ (1960 MeV)(776/3686)²(776/589)(1/81)[ln(3686/50)]⁻¹ ≈ 328.57 keV    (3)

The above equation is consistent with assuming that the Ψ(2S) decays exclusively as an ss* construction (with qs = −(1/3)). The subscripts of "Γ21(GEM)" conform to placing the Ψ(2S) at "level 2" and the J at "level 1".

Now, defining subscript "0" as representing the state of complete dissolution of the Ψ(2S), under the same assumption as above, i.e., that the Ψ(2S) decays exclusively as an ss* object, Eq. 2 determines the hadronic partial width of the Ψ(2S) → x decay as:

Γ20(GEM) ≈ (1960 MeV)(776/3686)³(1/81)[ln(3686/50)]⁻¹ ≈ 52.50 keV    (4)

We may think of the resulting partial widths stemming from Equations 3 and 4 as "base line widths" in the sense that they correspond to the hadronic partial widths of the "2 → 1" and "2 → 0" transitions, respectively, as if 100% of the associated decays were associated with exclusively ss* structures, which, as we shall see, cannot be the case upon viewing experimental determinations of the hadronic partial widths associated with Ψ(2S) → (J + x) and Ψ(2S) → x.
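As a numerical cross-check of the two base line widths, the short sketch below evaluates Eq. 2 with the option of swapping one mass factor for the gluon frequency MΨ(2S) − MJ, as the text prescribes for the "2 → 1" case. The function name and the keV conversion are ours; the constants come directly from Eqs. 2-4.

```python
import math

def gem_width_kev(m_v_mev, sum_q4, omega_mev=None):
    """Eq. 2; if omega_mev is given, one factor of (776/M_v) is replaced
    by (776/omega), reproducing the substitution used for Eq. 3."""
    omega = m_v_mev if omega_mev is None else omega_mev
    width_mev = (1960.0
                 * (776.0 / m_v_mev) ** 2
                 * (776.0 / omega)
                 * sum_q4
                 / math.log(m_v_mev / 50.0))   # alpha_s factor 1.2 cancels into 1960 MeV
    return width_mev * 1000.0                   # MeV -> keV

M_PSI2S, M_J = 3686.0, 3097.0
Q_S4 = (1.0 / 3.0) ** 4                         # ss* branch: q_s^4 = 1/81

print(gem_width_kev(M_PSI2S, Q_S4, omega_mev=M_PSI2S - M_J))  # ~328.6 keV (Eq. 3)
print(gem_width_kev(M_PSI2S, Q_S4))                            # ~52.5 keV  (Eq. 4)
```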

EXPERIMENTAL DETERMINATION OF THE PARTIAL WIDTH REGARDING Ψ(2S) → x
According to the Particle Data Group (PDG) (see PDG (2004), p. 83) the full width of the Ψ(2S) as determined by experiment is given by Γfull(PDG) = (281 ± 17) keV, of which 97.85% involves hadronic end products. Hence, the PDG's determination of the full hadronic width of the Ψ(2S) is given by Γfull-hadronic(PDG) = (275 ± 17) keV. Now, the PDG has also determined that 42.4% of all Ψ(2S) decays are of the "level 2" to "level 0" type, i.e., involve complete, direct dissolution of the Ψ(2S), so that the PDG's determination of the Ψ(2S) → x hadronic partial width would be given by

Γ20(PDG) ≈ 0.424 × (275 ± 17) keV ≈ (116.6 ± 7.1) keV    (5)

Clearly, the "base line" theoretical width, Γ20(GEM), is seriously discrepant with Γ20(PDG), as Γ20(PDG) is more than a factor of two larger than Γ20(GEM). The above-mentioned discrepancy indicates that, unlike the J, approximately (1/9)th of whose original cc* states "lag behind" the other (8/9)ths (which undergo a very rapid transition to excited ss* states to subsequently decay mainly into hadrons) in order to decay exclusively into leptons, the Ψ(2S) must have "laggers" which are able to decay into hadrons. Since the only difference in calculating a partial width via GEM as a "base line width" versus a partial width as due to a cc* contribution lies in utilizing either qs⁴ = (1/81) or qc⁴ = (16/81), respectively, in Equation 4 (or, in another context, in Equation 3), we may easily calculate the fraction, βh, of cc* states contributing to the Ψ(2S) → x hadronic partial width by setting [16βh + (1 − βh)] Γ20(GEM) equal to Γ20(PDG). We thus have:

[16βh + (1 − βh)](52.5 keV) = (116.6 ± 7.1) keV    (6)

Solving for βh yields:

βh = 0.0814 ± 0.0090 (7)

Hence, we see that the hadronic partial width of the Ψ(2S) → x decay mode can be fully explained by GEM if it is assumed that ~ (1/12)th of the original cc* states destined for the Ψ(2S) → x decay mode remain to do so, with ~ (11/12)ths of them decaying via the "main route" as ss* objects.
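The extraction of the "lagger" fraction amounts to solving the linear relation of Eq. 6 for β, i.e., β = (ΓPDG/ΓGEM − 1)/15, where the 15 comes from the charge ratio qc⁴/qs⁴ = 16. A minimal sketch, reused below for the leptonic case of Equations 9 and 10:

```python
def lagger_fraction(gamma_gem_kev, gamma_pdg_kev):
    # Solve [16*beta + (1 - beta)] * Gamma_GEM = Gamma_PDG for beta.
    return (gamma_pdg_kev / gamma_gem_kev - 1.0) / 15.0

beta_h = lagger_fraction(52.5, 116.6)   # hadronic central value -> ~0.0814 (Eq. 7)
beta_l = lagger_fraction(1.37, 2.12)    # leptonic central value -> ~0.0365 (Eq. 10)
print(beta_h, beta_l)
```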

A similar analysis may be effected regarding the purely leptonic partial width associated with the "level 2" to "level 0" transition. Again from PDG (2004), p. 83, the partial width of the Ψ(2S) → e+e−, i.e., the electron/positron decay mode, is given by the PDG as (2.12 ± 0.12) keV. The corresponding "base line" partial width according to GEM is given by:

Γee ≈ (α/αs) Γ20(GEM) ≈ (1/137.036)(1/0.2791)(52.50 keV) ≈ 1.37 keV    (8)

indicating that, as is the case for the purely hadronic emissions, a small fraction, βl, of the original cc* states associated with formation of the Ψ(2S) "lag behind" to decay into leptons. Using Equation 6 as a guide, we may calculate βl via the equation,

[16βl + (1 − βl)](1.37 keV) = (2.12 ± 0.12) keV    (9)


from which we obtain:

βl = 0.0365 ± 0.0058 (10)

Hence, we see that the partial width of the Ψ(2S) → e+e− decay mode can be fully explained by GEM if it is assumed that ~ (1/27)th of the original cc* states destined for the Ψ(2S) → e+e− decay mode remain to do so, with ~ (26/27)ths of them decaying via the "main route" as ss* objects. GEM, of course, is fully in agreement with experiment in all aspects (i.e., as regards separately the hadronic, as well as the leptonic, partial widths) only if βh is the same as βl, whereas we have just found βh ≈ 0.081 ± 0.009, while βl ≈ 0.036 ± 0.006. Overall, however, in terms of the determination via GEM of the full width of the Ψ(2S), i.e., the sum of the associated hadronic and leptonic partial widths of the Ψ(2S), since the leptonic partial width of the Ψ(2S) is so much smaller than its hadronic counterpart, there is very little consequence in assuming that βh = βl = <β>, where <β> represents the weighted average of βh from Equation 7 and βl from Equation 10, the weight factors based on the relative contributions of the hadronic partial width and the non-hadronic partial width to the full width of the Ψ(2S). Accordingly,

<β> = (0.9785)(0.0814 ± 0.0090) + (0.0215)(0.0365 ± 0.0058) = 0.0804 ± 0.0089 (11)

As (1/4π) ≈ 0.0796 is a number well within the stated uncertainty of <β> above, for simplicity we will let <β> → (1/4π). We will explore the consequences of setting βh = βl = < β> = (1/4π) in some detail below in Section IV, but for now we remark that with βl = (1/4π) GEM yields (see Equations 8 & 9):

Γee = 3.01 keV    (12)

a figure less than one keV too large as associated with the electron/positron partial width, representative of less than one percent of the total width of the Ψ(2S).

EXPERIMENTAL DETERMINATION OF THE PARTIAL WIDTH REGARDING Ψ(2S) → (J + x)
Assuming that all Ψ(2S) → (J + x) decays involve a "level 2" cc* to "level 1" cc* transition to produce a J in its original state, which state then decays in its normal fashion, the partial width of the Ψ(2S) → (J + x) decay, according to the PDG (2004), p. 83, would be given by:

Γ21(PDG) = 0.576 × (281 ± 17) keV = (162 ± 10) keV    (13)

From Equation 3 above, however, the associated "base line" partial width according to GEM, which, again, assumes the decay of an ss* state, is about twice as large as Γ21(PDG), as Γ21(GEM) = 329 keV. Since, though, we assume that all Ψ(2S) → (J + x) decays involve a "level 2" cc* to "level 1" cc* transition to produce a J in its original state, as stated above, if all other factors in Equation 3 besides the replacement of qs = (−1/3) by qc = (2/3) remain the same, we would obtain via GEM a partial width of the Ψ(2S) → (J + x) decay of Γ21(GEM-1) = 16 Γ21(GEM) = 5257 keV, a factor of about 32 times the experimental result. If GEM and experiment are to be brought to agreement, then, there must be at least one factor contained in Equation 3 which is subject to change … and there is, viz., the factor "1960 MeV", which is numerically {Vks²/qc⁴} in the present context: the assumed universal multiplicative factor in the square of the matrix element descriptive of the resonant contribution to vector meson decays involving their complete dissolution, i.e., apart from the "qi⁴" therein. Now, it is quite reasonable, as GEM is here indicating, that V21² should be of different magnitude than V20² for the following reason: Both the Ψ(2S) and the J are spin one objects. Complete dissolution of either one (corresponding to a final state of "level 0") must therefore yield a net angular momentum of one unit as associated with their respective decay products. Hence, transverse gluons must be involved in propagating from the cc* or ss* resonance state to the hadronic particle production vertex for all "level 2" (the Ψ(2S), remember) to "level 0" decays and for all "level 1" (the J) to "level 0" decays. The "level 2" to "level 1" transition, however, cannot involve a transverse gluon; on the contrary, it must involve a longitudinal gluon, as no unit of angular momentum is transferred from "level 2" to "level 1". So, {V21²/qc⁴} may well be of different value than {V20²/qc⁴}; GEM, in fact, indicates that, at least as it regards the Ψ(2S),

{V21²/qc⁴} ≈ (1/32) {V20²/qc⁴}    (14)

Assuming such, we obtain the partial width of the Ψ(2S) → (J + x) decay as:

Γ21(GEM-2) ≈ 164 keV    (15)

a statistical match to the experimentally derived determination of (162 ± 10) keV mentioned above.



A “SIMPLEX MODEL” OF THE (2S) As stated in Section II above, we may assume for simplicity that

βh = <β> = (1/4π)    (16a)

βl = <β> = (1/4π)    (16b)

We see before us, therefore, a high degree of similarity, as regards the Ψ(2S) to the J, in that the decay of the Ψ(2S) involving complete dissolution stems in the main from ss* excited states, as does the J's, with a small fraction of the decays stemming from cc* states, which originally constitute the Ψ(2S), as with the J. The main difference between the Ψ(2S) and the J is that whereas the J's cc* states decay exclusively into leptons, some of the Ψ(2S)'s cc* states decay into hadrons. Though the Ψ(2S) and the J are each too light to be able to emit two charm mesons as decay products, that a small fraction of the Ψ(2S)'s cc* states nevertheless does decay into hadrons is very intriguing. Since the mass of the Ψ(2S) is about 99% of the mass of two D mesons (the lightest charm-bearing mesons), perhaps there is a very slight amount of "overlap" due to the width of the Ψ(2S) (stemming from the Uncertainty Principle) with twice the mass of the D – something not present as regards the J – which makes a small measure of hadronic decay possible from the cc* states of the Ψ(2S).

At any rate, we may assess the success of the "Simplex Model" of the Ψ(2S) by calculating its full width under the guidelines of the model, as follows. First, utilizing βh = (1/4π) in conjunction with Equation 6, we obtain our final determination of the hadronic partial width associated with the "level 2" to "level 0" decay of the Ψ(2S) as:

Γ20-hadronic(GEM-Simplex) = 115.2 keV    (17a)

Second, utilizing βl = (1/4π) while assuming "eµ universality" and the PDG's assessment that the partial width associated with tauon/anti-tauon decay is ≈ 0.37 that of electron/positron decay (PDG (2004), p. 83), guided by Equation 12 we obtain our final determination of the leptonic partial width associated with the "level 2" to "level 0" decay of the Ψ(2S) as:

Γ20-leptonic(GEM-Simplex) = (3.01 keV)(2.37) = 7.12 keV ≈ 7.1 keV    (17b)

Third, assuming V21² = (1/32) V20² from Equation 14 above and identifying

Γ21(GEM-Simplex) = Γ21(GEM-2) = 164 keV    (17c)

from Equation 15, we obtain the full width of the Ψ(2S) via GEM as:

Γfull(GEM-Simplex) = Γ20-hadronic(GEM-Simplex) + Γ20-leptonic(GEM-Simplex) + Γ21(GEM-Simplex) = 286 keV,    (17d)

a statistical match to the experimental determination of same, i.e.,

Γfull(PDG) = (281 ± 17) keV    (17e)
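The Simplex Model bookkeeping of Equations 17a-17e reduces to a few lines of arithmetic, sketched below under the stated assumption βh = βl = (1/4π); the base line inputs (52.5 keV, 1.37 keV, 328.57 keV) are the GEM values derived earlier, and the factor 2.37 encodes eµ universality plus the 0.37 tauon ratio.

```python
import math

beta = 1.0 / (4.0 * math.pi)                 # Eqs. 16a-16b
enhance = 16.0 * beta + (1.0 - beta)         # cc*-lagger enhancement factor

g20_hadronic = enhance * 52.5                # Eq. 17a -> ~115.2 keV
g20_leptonic = enhance * 1.37 * 2.37         # Eq. 17b: e+e- times (1 + 1 + 0.37)
g21 = 16.0 * 328.57 / 32.0                   # Eq. 17c via Eqs. 3 and 14 -> ~164 keV
print(g20_hadronic + g20_leptonic + g21)     # Eq. 17d -> ~286 keV vs (281 +/- 17) keV
```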

CONCLUDING REMARKS

Since the (2S) is an excited state of the J, that <β> ≈ (1/4π) is of similar magnitude to {β}J, where {β}J ≈ (1/9) represents the fraction of cc* states of the J decaying directly into end products, the other (8/9)ths of same first transferring their four­ 2 momentum to excited ss* states, perhaps should not be surprising. Whether or not it is purely coincidental that {βJ} ≈ qs , the 4 role the electromagnetic interaction plays in the formation of vector mesons, as evidenced by the presence of the “qi ” term in 4 4 4 Equation 2, cannot be denied. Further, since (2/3) is 16 times (1/3) , the above­mentioned factor of “qi ” in Equation 2 provides for a very pronounced “mathematical leverage tool” for deducing the behavior and construction of the various “standard” vector mesons, such as the ρ and the φ, and of other vector mesons which are not so “standard”, such as the K*(892) (see White (2008 c)), the Υ(9460) (see White (2008 a)), the J(3097) ( see White (2008 b)), and the subject of the

for example, is the only circumstance that makes it possible to deduce that the vast ,׀qc׀ ≠ ׀qs׀ present work, the (2S). That majority of the J’s original cc* states make a very quick transition to ss* states – the actual fraction which do so also calculable; as seen above, the same may be said as to the (2S).

Of additional interest is that, as seen in Section III, V21² << V20² (recall that V21² ≈ (1/32) V20²), which indicates that the basic matrix elements governing longitudinal gluon emission associated with vector mesons in excited states may in general be of much smaller magnitude than those descriptive of transverse gluon emission. It would therefore be of great interest to apply GEM to the Upsilon and its various "NS" excited states, where "N" is an integer, as part of an on-going research program involving GEM and vector mesons.



As a final observation, we would like to address the potentially disconcerting result, seen from the work exhibited in Section II, that βl derived from the PDG's report of the partial width of the Ψ(2S)'s electron/positron decay mode turned out to be only about 45% of the βh derived previously, whereas for GEM to be completely self-consistent, βl must be the same as βh. As seen in Section IV, the consequences of the mismatch are minimal, as by assuming βl = βh = (1/4π), GEM determines the leptonic partial width of the Ψ(2S) as 7.1 keV, compared to the sum of the electron/positron, muon/anti-muon, and tauon/anti-tauon partial widths of (5.0 ± 0.3) keV (see PDG (2004), p. 83). Hence, GEM yields a leptonic partial width about 2 keV too large under the assumption that βl = βh = (1/4π), an error leading to an overdetermination of the full width of the Ψ(2S) by less than one percent if all else constitutes a perfect match to experiment. Nevertheless, the relative error extant as to the leptonic partial width itself via GEM under the stated assumption is ~ 40%. What we wish to point out, however, is that there is substantial difficulty in obtaining the full leptonic width of the Ψ(2S) experimentally, as well. If instead of adding up the individually-obtained leptonic contributions to determine the experimental result of interest here we subtract the experimentally-obtained fraction of hadronic decay contributions from the full width of the Ψ(2S), leaving us with the fraction of non-hadronic contributions, i.e., leptonic contributions only, we obtain as another determination of the experimental leptonic partial width of the Ψ(2S):

Γleptonic(PDG-1) = (1 − 0.9785)(281 ± 17) keV = (6.0 ± 0.4) keV    (18a)

Further, if the uncertainty in the PDG's figure for the fraction of hadronic decays associated with the Ψ(2S), i.e., 0.9785 ± 0.0013, is taken into account, Γleptonic(PDG-1) above becomes:

Γleptonic(PDG-2) = (6.0 ± 0.7) keV    (18b)

a figure much easier to live with vis-à-vis GEM's associated figure of 7.1 keV.

REFERENCES
E. Merzbacher (1970), Quantum Mechanics, Wiley, p. 486.
D. White (2008 a), "The Gluon Emission Model for Hadron Production Revisited", Journal of Interdisciplinary Mathematics, Vol. 11, No. 4, pp. 543-551.
D. White (2008 b), "GEM and the J(3097)", under review by IIC (submitted simultaneously with the present manuscript).
D. White (2008 c), "GEM and the K*(892)", Journal of Applied Global Research, Vol. 1, Issue 3, pp. 1-4.
PDG (2004), "Mesons", accessed online Nov. 7, 2008, pdg.lbl.gov/2004/tables/mxxx.pdf, p. 83.



A NOVEL EXTENDED ANFIS: APPLICATION IN A CONTROL SYSTEM

J. Hossen, A. Rahman, K. Anayet Multimedia University, Malaysia

RATIONALE AND OBJECTIVE
This paper proposes a novel extension to the Adaptive Neuro-fuzzy Inference System (ANFIS), which we call extended ANFIS (EANFIS). The EANFIS architecture, together with its structure-determining procedures, overcomes the current limitation facing the ANFIS architecture when applied to systems with a large number of inputs. The possibility of determining a membership function from the input variables means the user no longer needs to select a membership function from a set of candidate membership functions. The new EANFIS architecture is evaluated on a specific inverted pendulum control system and has been found to give superior performance and response. In addition, as this is an EANFIS, rules can be extracted from the trained system, thus providing information on the way in which the underlying system is operating. The proposed EANFIS recommends itself readily for application in practical systems.

The architecture and learning procedure underlying ANFIS have been discussed; ANFIS is a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the ANFIS can construct an input-output mapping based on both human knowledge and stipulated input-output data pairs. However, there are some basic aspects of this approach which are in need of better understanding.

In this perspective, the aim of this paper is to suggest a novel architecture, called the Extended Adaptive Network-based Fuzzy Inference System, simply EANFIS, in a control application, which will have the following capabilities:

- Automatic determination of the shape of the membership functions.
- Automatic determination of the structure of the neuro-fuzzy system in terms of the number of rules required for a particular problem.
- Hence, improved performance and desired response in a control system.

APPROACH AND METHODOLOGY
The EANFIS is applied to classic fuzzy control systems. In this paper, the novel extended ANFIS method has been applied to the control of an inverted pendulum sitting on a moving cart. A rod is hinged on top of a moving cart. The cart is free to move in the horizontal plane, and the objective is to balance the rod to keep it in the upright position and keep the cart in the center position. The mechanical system is as shown in this paper. The system takes four inputs: θ, the angle the rod makes with the vertical axis; θ̇, the angular velocity of the rod; x, the cart position with respect to the center position; and ẋ, the cart velocity. The aim is to use these four inputs to calculate the required control force.

This model is simulated by software. The initial parameters are M (mass of cart) = 2 kg, m (mass of rod) = 0.1 kg, L (length of rod) = 0.5 m and g (gravity) = 9.81 m/sec². The cart and rod should return to the desired angle and position within 2 seconds. This is simulated using Equation 23. The state-space equation of the system is given in Equation 24.

u = - kx (23)

where k is the desired feedback gain vector and x is the input vector x = [x1 x2 x3 x4]ᵀ.



In this example k = [ ­298.15 ­60.697 ­163.099 ­73.394 ].

ẋ = Ax + Bu    (24)

where x1 = θ, x2 = θ̇, x3 = x (the cart position), and x4 = ẋ.
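Since Equation 24's matrices are not reproduced above, the following simulation sketch assumes the standard linearized cart-pole model (point mass m on a massless rod of length L, frictionless cart of mass M) with the stated parameters and state ordering; it integrates the desired full-state feedback law u = −kx of Equation 23 with the gain vector given above.

```python
import numpy as np

# Assumed linearized cart-pole dynamics (not the paper's Eq. 24 verbatim):
#   theta_dd = ((M + m) * g * theta - u) / (M * L)
#   x_dd     = (u - m * g * theta) / M
# State ordering per the text: x = [theta, theta_dot, cart_pos, cart_vel].
M, m, L, g = 2.0, 0.1, 0.5, 9.81
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [(M + m) * g / (M * L), 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-m * g / M, 0.0, 0.0, 0.0]])
B = np.array([0.0, -1.0 / (M * L), 0.0, 1.0 / M])
k = np.array([-298.15, -60.697, -163.099, -73.394])  # gains quoted above

x = np.array([0.3, 0.0, 0.0, 0.0])   # training initial condition: theta = 0.3 rad
dt = 0.001
for _ in range(2000):                # the 2-second settling window
    u = -k @ x                       # desired control force, Eq. 23
    x = x + dt * (A @ x + B * u)     # forward-Euler integration
print(x)                             # state after 2 s; entries decay toward zero
```

Under this assumed model the closed-loop poles all have negative real parts, so the printed state is close to zero, consistent with the stated 2-second settling requirement.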

RESULTS AND DISCUSSION
We train the system with initial conditions θ = 0.3, θ̇ = 0, x = 0 and ẋ = 0. In order to bring the rod to the desired angle and position, we apply a control force, as shown in the paper. After the control force is applied, it induces a new position, velocity, rod angle and angular velocity, also shown in the paper.

We test the system with another initial condition: θ = −0.3, θ̇ = 0, x = −1 and ẋ = 0. The control force using EANFIS and the input status are shown in the paper. We can observe that there is no difference between the EANFIS control force and the desired control force. While the ANFIS generates a different control force, it can still balance the rod. The input status using ANFIS is shown in the paper, as is the architecture found using the proposed EANFIS. Finally, the table shows that the EANFIS architecture performs better than the ANFIS architecture.

CONCLUSION
We applied the novel extended ANFIS (EANFIS) method to an inverted pendulum control system and compared the results of the extended ANFIS with those obtained by the ANFIS architecture; we can claim that in inverted pendulum control cases, the EANFIS architecture provides an improved result. We suggest that the novel extended ANFIS might also be applied in other control areas (e.g., robotic control).



GREEN OFFICE WITH ELECTRONIC DOCUMENT SYSTEM TECHNOLOGIES

S. Chanput, Patcharee Chantanabubpha and S. Adsavakulchai University of the Thai Chamber of Commerce, Thailand

ABSTRACT
An Electronic Document Management System (EDMS) is a computer system designed to store and track electronic documents and other media. The system consists of a variety of technologies including digital imaging, document management, workflow, computer output to disc, document input, groupware, records management, and search and retrieval. Various combinations of these technologies can be integrated to create systems for information management. Combining multiple software applications and providing a common interface through the desktop is an excellent solution for record-keeping problems. Going paperless reduces the carbon footprint and saves money in the environmentally-aware office. The main objective of this study is to develop a web application using PHP and MySQL as a tool for the Administrative Office, School of Engineering, University of the Thai Chamber of Commerce. The result of this study is a 50% improvement in working-process performance. It can be concluded that the electronic document system reduced paper use by 86.2%; the less paper, the less energy used by printers and copiers. On-going research is integrating this program for other offices in the school, with the aim of a paperless school.

Keywords: Track Electronic Documents, Paperless, Carbon Footprint


GLOBAL WARMING: SCIENCE OR IDEOLOGY?

Kenton Fleming Southern Polytechnic State University, USA

ABSTRACT

The subject of man-made global warming, or Anthropogenic Global Warming (AGW), is not only a scientific issue but a social and political debate; an emotional hot button which influences the formation of laws, the imposition of fines, and the limitations placed on human activities. This paper is presented as a guide to understanding global warming in general, and AGW specifically. The history of global warming and the debate over it is presented. Authoritative groups on the subject, consisting of both advocates and opponents, are identified. Data from the Intergovernmental Panel on Climate Change (IPCC), the Nongovernmental International Panel on Climate Change (NIPCC), and like organizations are presented, including empirical results and computer model results of climate evaluation. The reliability of the computer models is discussed; a critical analysis reveals limitations of these models and their contrasts with empirical results. A recent congressional hearing on the subject, in which religious leaders were asked by Congress to render their views, is discussed. Finally, the consensus among scientists is shown to have actually decreased since the global warming debate first appeared in the early 1990s.

Keywords: IPCC, NIPCC, OISM, Computational Fluid Dynamics, CFD, Simulation, Computer


MICROCONTROLLER FOR AUTOMATIC MICROSCOPE SLIDE

P. Ueatrongchit University of the Thai Chamber of Commerce, Thailand

ABSTRACT

The goal is to develop an automated microscope specimen slide handler for microscopic examination. A moving stage indexes each specimen slide into position for access by a horizontal feed mechanism, which transfers the slide onto a microscope stage with controllable X- and Y-axis positioning. The stage moves the slide into the optical viewing field of the microscope and conducts a systematic examination over the desired area of the specimen by incremental X-Y motion. At the end of the examination the specimen slide is returned to its starting point and the procedure is repeated for the next slide. The positions of previously examined slides are recorded so that slides can be automatically returned to the microscope stage for reexamination.

Keywords: Microcontroller, Automatic Microscope Slide, Reexamination

BACKGROUND

Microscopes are used for the examination of clinical specimens such as parasite-infected red blood cells. Conventionally, the stage is actuated and controlled manually, as shown in Figure 1, and the basic diagnostic procedure for red blood cells relies on manual microscopy. Electronic systems may instead be used to automatically examine and analyze the optical images of the microscope. Where electronic systems are used for rapid analysis of microscope specimen images, it becomes desirable to feed the specimens to the microscope optics automatically, regularly and rapidly. After analysis a specimen is removed to make room for the next specimen and is collected for further examination, reference, record keeping or disposal.

Figure 1: Microscope moving stage

The main objective of this study is to develop a system that automatically returns specimen slides to the microscope stage for reexamination. Such technologies dramatically increase the accuracy of measurement results and contribute greatly to the modernization of testing and medical care.


MATERIALS AND METHODS

Sample preparation:

1. Prepare the blood smear sample and set up the microscope working area at 1000x magnification with a 0.2 mm field dimension, as shown in Figure 2, so that the area of the specimen slide in view can be examined without sliding the specimen.

Figure 2: Microscope working area

2. Set up the microscope moving-stage scanning area at 44 x 12.786 mm, as shown in Figure 3 (a field-tiling sketch follows the figure). The opening in the specimen stage is made as large as possible and exposes the full width of the specimen slide.

Figure 3: Scanning area
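As a rough illustration of what these dimensions imply, the sketch below (added here; it is not part of the paper) counts how many 0.2 mm viewing fields tile the 44 x 12.786 mm scanning area and generates a serpentine raster of stage positions of the kind the X-Y drive must visit.

    # Illustrative sketch: tile the stated scanning area with the stated
    # 0.2 mm viewing field and enumerate stage positions.
    import math

    FIELD_MM = 0.2          # viewing field at 1000x (from the paper)
    SCAN_X_MM = 44.0        # scanning area width (from the paper)
    SCAN_Y_MM = 12.786      # scanning area height (from the paper)

    nx = math.ceil(SCAN_X_MM / FIELD_MM)   # fields across -> 220
    ny = math.ceil(SCAN_Y_MM / FIELD_MM)   # fields down   -> 64
    print(f"{nx} x {ny} = {nx * ny} fields per slide")

    def raster_positions():
        """Yield (x_mm, y_mm) field centers in a serpentine raster,
        reversing direction on alternate rows to minimize stage travel."""
        for j in range(ny):
            cols = range(nx) if j % 2 == 0 else reversed(range(nx))
            for i in cols:
                yield ((i + 0.5) * FIELD_MM, (j + 0.5) * FIELD_MM)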

3. The automatic sequential examination of a group of microscope specimen slides comprises:

3.1. Moving stage design: The apparatus comprises a substage drive that moves the stage in a horizontal plane, together with further positioning means operable to move the stage with a specimen slide supported horizontally in it, as shown in Figure 4.


Figure 4: Moving stage design

3.2. Digital Electronics Design: The digital electronics architecture has two main functional blocks, a Master Board and a Slave Board. The Instrument Control Processor (ICP) uses a PIC 16F873 processor to perform all instrument control and event processing functions, as shown in Figure 5. The ICP is responsible for the following tasks: processing commands; monitoring the source and adjusting the LCD readout mode as required; and calculating centroids and transmitting centroid positions.

Figure 5: Digital Electronics design (block diagram: Master Board and Slave Board linked by SPI, with X and Y encoders, X and Y stepper motors, a 4 x 3 keypad, a 16-character x 2-line LCD, and the power supply)

Figure 5 illustrates the overall digital electronics: two microcontrollers communicating over a synchronous Serial Peripheral Interface (SPI).
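SPI is a full-duplex exchange: on every clock pulse, master and slave each shift one bit out while shifting one bit in, so a byte written to one side's SSPBUF arrives in the other side's SSPBUF after eight clocks. The sketch below (an illustration added here, not the authors' firmware) models that shift-register exchange.

    def spi_exchange_byte(master_byte: int, slave_byte: int):
        """Model one 8-clock SPI transfer between two shift registers
        (the PIC's SSPSR); each side receives the byte the other sent."""
        m, s = master_byte & 0xFF, slave_byte & 0xFF
        for _ in range(8):
            m_out, s_out = (m >> 7) & 1, (s >> 7) & 1  # MSB goes out on SDO
            m = ((m << 1) | s_out) & 0xFF              # master shifts slave's bit in on SDI
            s = ((s << 1) | m_out) & 0xFF              # slave shifts master's bit in
        return m, s

    # After the transfer each side holds the other's original byte.
    assert spi_exchange_byte(0xA5, 0x3C) == (0x3C, 0xA5)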


3.3. Microcontroller in Slave Board:

3.3.1. Encoder: sequential logic is used to track the moving stage. The two encoder signals are read by the PIC 16F873 microcontroller via ports RA0-RA3, as shown in Figure 6. From these signals the moving-stage state is resolved into three statuses: (1) no movement, (2) increase the distance, and (3) reduce the distance, as shown in Figure 7 (a decoding sketch follows the figure captions).

Figure 6: The characteristic of encoder 2 signal

Figure 7: Logical control
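A quadrature encoder's two signals are square waves 90 degrees out of phase; comparing the previous two-bit (A, B) state with the current one yields exactly the three statuses above. The sketch below (added here as an illustration; the paper implements the equivalent decision table in PIC sequential logic) expresses that logic in Python.

    # Valid clockwise sequence of the 2-bit (A, B) quadrature state.
    CW_ORDER = [0b00, 0b01, 0b11, 0b10]

    def decode_quadrature(prev: int, curr: int) -> int:
        """Return +1 (increase distance), -1 (reduce distance) or
        0 (no movement / invalid jump) from consecutive AB states."""
        if prev == curr:
            return 0                       # status 1: no movement
        i, j = CW_ORDER.index(prev), CW_ORDER.index(curr)
        if (i + 1) % 4 == j:
            return +1                      # status 2: increase distance
        if (i - 1) % 4 == j:
            return -1                      # status 3: reduce distance
        return 0                           # skipped state: ignore

    position = 0
    for prev, curr in [(0b00, 0b01), (0b01, 0b11), (0b11, 0b01)]:
        position += decode_quadrature(prev, curr)
    print(position)  # 1: two steps forward, one step back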

3.3.2. Stepping motor: the moving stage is driven by stepping motors controlled by the PIC 16F873 microcontroller via ports RB0-RB7, working together with a ULN2803 driver IC, as shown in Figure 8 (a drive-sequence sketch follows the figure).

Figure 8: Characteristics of stepping motor control (phase table with columns Status, 1a, 1b, 2a, 2b)
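The drive pattern behind such a phase table is a fixed sequence that energizes the two winding pairs (1a/1b and 2a/2b) in rotation. The sketch below is an illustration added here using the common two-phase-on full-step sequence; the paper's exact phase table is the one in Figure 8.

    # Full-step sequence for a unipolar stepper: one bit per winding,
    # ordered (1a, 1b, 2a, 2b) as in the phase table of Figure 8.
    FULL_STEP = [0b1010, 0b0110, 0b0101, 0b1001]

    def step_pattern(step_index: int) -> int:
        """Return the 4-bit winding pattern to latch onto the port
        (RB pins through the ULN2803 driver) for a given step count."""
        return FULL_STEP[step_index % 4]

    # Driving forward cycles the pattern; driving backward reverses it.
    print([bin(step_pattern(i)) for i in range(6)])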


3.3.3. Serial Peripheral Interface: the Master Synchronous Serial Port of the PIC 16F873 links the Master Board to the microcontroller on the Slave Board, as shown in Figure 9.

Figure 9: Serial Peripheral Interface (SPI): the SSPBUF buffer registers and SSPSR shift registers of the master and slave, connected via SDO/SDI with a shared SCK clock

3.4. Microcontroller in Master Board

3.4.1. Input from the keyboard using RB1-RB7, arranged as a 4 x 3 matrix (a scanning sketch follows Table 1)
3.4.2. Display of results using RA0-RA5 to drive the Liquid Crystal Display
3.4.3. Serial Peripheral Interface clock on the SCK port
3.4.4. Data communication using SPI, as shown in Table 1

Table 1: Data communication using SPI
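A 4 x 3 keypad such as the one in item 3.4.1 is normally read by matrix scanning: drive one row line at a time and read the three column lines. The sketch below is an illustration added here; the telephone-style key layout and the row/column split across RB1-RB7 are assumptions, not taken from the paper.

    # Illustrative 4 x 3 matrix keypad scan. The layout below is the
    # standard telephone arrangement (an assumption); which of RB1-RB7
    # serve as rows vs. columns is likewise assumed.
    KEYMAP = [
        ["1", "2", "3"],
        ["4", "5", "6"],
        ["7", "8", "9"],
        ["*", "0", "#"],
    ]

    def scan_keypad(read_columns):
        """Drive each row in turn and read the three column inputs.
        `read_columns(row)` stands in for the port access on RB1-RB7."""
        for row in range(4):
            cols = read_columns(row)          # 3-bit column reading
            for col in range(3):
                if cols & (1 << col):         # key at (row, col) pressed
                    return KEYMAP[row][col]
        return None

    # Example: simulate the '5' key held down (row 1, column 1).
    print(scan_keypad(lambda r: 0b010 if r == 1 else 0b000))  # -> "5"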


4. Design of the main menu, as shown in Figure 10.

Figure 10: Main menu of the control program (menu tree on the 16 x 2 LCD: a top menu with 1:Home, 2:Manual, 3:Scan and 4:Config; manual X-Y position and jump screens; scan screens for entering start and final X-Y coordinates; and a Config screen with time (sec) and space (unit) options)

RESULTS AND DISCUSSION

The test points and results are shown in Table 2. Microscopic examination of the specimen slide can take place either visually or automatically. Motorized microscope components and accessories enable the investigator to automate live-cell image acquisition, and are particularly useful for time-lapse experiments at intervals of about 20 milliseconds. For this purpose the X and Y positioning systems can be controlled manually or automatically; the specimen slide carried by the stage may be moved to any desired location relative to the optical axis by actuating the Y-axis drive (43) and the X-axis drive (44). For automatic examination, drives 43 and 44 are energized under scan or other program control.


Table 2: Testing results

CONCLUSION

After the examination of a particular series of specimen slides has been completed, any individual slide that requires re-examination can, by either operator signals or predetermined control signals, be fed automatically back into the microscope viewing optics for further examination.

Upon completion of the examination of a slide, the horizontal Y-axis positioning drive returns the specimen slide on the stage to its recorded position. It can be concluded that the accuracy of this equipment in returning a specimen slide for reexamination is 86.0%.


THE CORRELATION BETWEEN SPECTRAL REFLECTANCE DATA AND WATER QUALITY IN KUNG KRABAEN BAY

N. Boonmanang1, E. Theamsumrid1, K. Riyabuth1, S. Adsavakulchai1 and R. Shrestha2 University of the Thai Chamber of Commerce1 and Asian Institute of Technology2, Thailand

ABSTRACT

In the past decade there has been rapid growth of shrimp farming around Kung Krabaen Bay. This has caused an enormous rise in the generation of domestic and industrial wastes, most of which are disposed of in the bay. There is a serious need to protect the bay through better water quality management. Conventional water quality monitoring has limitations in collecting detailed information on water quality parameters over a large region, owing to high cost and time. Satellite-based technologies offer an alternative approach for many environmental monitoring needs. In this study, high-resolution satellite data (LANDSAT TM) from 2007-8 were used to develop mathematical models for monitoring chlorophyll-a. An empirical relationship between spectral reflectance and chl-a based on the band ratio of the near infrared (NIR) to the red band was found to detect chlorophyll in water with r² = 0.62. This concept has been successfully employed for marine zones and large lakes, but not for narrow rivers, owing to the spatial resolution constraints of satellite data. This information will be very useful in locating point and non-point sources of pollution and will help in designing and implementing control structures.

Keywords: Spectral Reflectance, Water Quality, Kung Krabaen, Chlorophyll­a


SECTION 3: EDUCATION & SOCIAL SCIENCES


AACSB ACCREDITATION AND THE HOMOGENEITY OF THE BUSINESS EDUCATIONAL EXPERIENCE

Brian Heshizer and Curtis C. Howell Georgia Southwestern State University, USA

ABSTRACT

AACSB accreditation is generally viewed as the highest level of accreditation that a business school can achieve. Business schools that attain AACSB accreditation have met faculty, academic, and program standards that connote a high level of commitment to quality teaching and student learning. To retain AACSB accreditation, business schools must undergo periodic review to ensure that academic and program standards have been maintained. Business schools can also achieve a separate AACSB accreditation for their accounting programs. This raises the general question of the impact that accreditation has on business education and, as investigated in this study, the specific question of whether AACSB-accredited business schools provide a homogeneous educational experience for their students. This specific question is assessed by determining whether two AACSB-based measures (number of years accredited and separate accounting accreditation) have an impact on CPA exam performance. Variables used in previous research on the determinants of CPA exam performance (SAT score, GPA, CPA review program, and highest degree held when the CPA exam was taken) are also included. A sample survey of practicing CPAs (n = 124) provided the data. The study found that only GPA was significantly related to performance on the CPA exam. The fact that none of the other variables examined was significantly related to fewer attempts to pass the CPA exam might indicate the homogeneous nature of the educational experience at AACSB-accredited business schools. Discussion and comments on the study's results and future research questions are presented.

Keywords: Business Education, AACSB Accreditation

INTRODUCTION AACSB accreditation is generally viewed as the highest level of accreditation that a business school can achieve. Business schools that attain AACSB accreditation have met faculty, academic, and program standards that connote a high level of business education, faculty scholarship, and commitment to quality teaching and student learning. To keep AACSB accreditation business schools must undergo periodic review to ensure that academic and program standards have been maintained. Business schools can also achieve a separate accreditation for their accounting programs. According to recent AACSB records, 515 schools have achieved general business accreditation and 167 accounting programs have earned separate accounting accreditation (AACSB.edu/accredited members, March 2006).

Attaining AACSB accreditation signifies that the school’s business programs meet quality measures applying to faculty, curriculum, faculty intellectual contributions and assessment of student learning. Accounting accreditation is a separate accreditation beyond the accreditation of the business college. The accounting program accreditation is based on such factors as: the qualifications, development, and involvement of the faculty; the design and effectiveness of the curriculum; the processes in place to plan, assess, and assure student quality; and the intellectual contributions of the faculty (www.AACSB.edu, 2006). To maintain their accreditation, both the business school and the accounting program must undergo subsequent reviews to ensure that they are adhering to a process of strategic improvement (AACSB International, 2005).

Much of the research on AACSB accreditation has focused on the impact that the accreditation process has on business schools in terms of faculty contributions in research, curriculum, and professional activities (see Gaharan, Chiasson, Fourst, and Mauldin, 2007). Studies have noted that the accreditation process results in higher faculty professional and academic involvement and extensive program and curriculum review (Bailey and Bentz, 1991; Kren, Tatum, and Phillips, 1993; Sinning and Dykxhoorn, 2001). The clear implication is that achieving accredited status connotes that the business school has reached a defined level of accomplishment in terms of faculty and program, and to keep accreditation, schools must continuously review and improve. Hence, once accredited, schools and accounting programs are not allowed to rest on their laurels but must demonstrate continued improvement.


The particular focus of this study is the effect of AACSB accreditation on business education. Given the mandates necessary to earn and keep accreditation, institutional factors related to AACSB accreditation might be expected to have an impact on student learning. In general, the length of time that a school has been accredited might result in an enhanced ability to educate its students, and separate accounting accreditation might indicate a greater ability to educate accounting students in particular. It is possible that differences in these two institutional characteristics are related to student learning.

Whereas the accreditation standards do call for assessment of student learning, they only call for a process of assessment and review; they do not identify an outside, consistent, and objective measurement device for assessing student learning. Consequently, few studies have examined the effect of the accreditation process on student learning. This study examines whether institutional characteristics based on AACSB accreditation have an effect on student learning as measured by performance on the CPA exam. Specifically, we want to see whether the number of years the school has been accredited and separate accounting accreditation differentiate among AACSB schools in student learning.

There are several reasons why performance on the CPA exam is a relevant outcome metric. First, practicing CPAs, as well as educators and business schools, have an interest in exam candidates passing the Uniform Certified Public Accountant Examination (CPA exam) in as few attempts as possible. For future CPAs who would pursue careers in public accounting, passing the exam on fewer attempts will result in great savings of time, effort, and money. For educators and business schools, their students passing the CPA exam on fewer attempts would confirm their pedagogical design success (see, Howell and Heshizer, 2008).

Second, many studies have investigated the characteristics of persons who have taken the CPA exam (see Ashbaugh and Thompson, 1993; Brahmasrene and Whitten, 2001; Marts et al., 1988; Pustorino, 1996; Shafer et al., 2003; Smith, 1992; Whitten and Brahmasrene, 2002). This research used the pre-2003 CPA test to assess the factors influencing performance. Under that test format, all four parts of the CPA exam had to be taken during a three-day period with nineteen and a half hours of testing, and a candidate had to pass (score greater than 75%) at least two parts and "condition" (score at least 50%) the other parts in order to receive credit for any parts passed (see Howell and Heshizer, 2006). The old CPA exam testing format resulted in a very low first-time passing rate that varied from state to state but averaged from 10% to 20% (Brahmasrene and Whitten, 2001; Shafer et al., 2003; Whitten and Brahmasrene, 2002). The new testing format adopted in 2003 allows each part of the examination to be taken separately (not all at once) and at the convenience of the candidate, within a fairly broad timeframe and at many testing sites (Burke and D'Aquila, 2004; Churyk and Mantzke, 2005; Handel, 2003; Holder and Mills, 2001).

This study was conducted under the assumption that the rigorous design of the old testing format will better reveal institutional and personal characteristics that contribute to passing the CPA exam on fewer attempts. Therefore, this study is based on data collected from persons who sat for the CPA exam prior to 2003, before the new exam format was in effect.

The personal characteristics (SAT, GPA, etc.) used in previous studies are included here along with this study’s two institutional characteristics of AACSB accreditation on student learning: number of years accredited and if the accounting program is separately accredited. This study also differs from previous research studies by using a different sample. Studies based on the pre­2003 exam surveyed recent candidates who had sat for the exam to determine what characteristics contributed to passing all four parts of the exam on the first attempt and/or what characteristics contributed to conditioning the exam (Ashbaugh and Thompson, 1993; Brahmasrene and Whitten, 2001; Lindsay and Campbell, 2003; Marts et al., 1988; Ponemon and Schick, 1988; Whitten and Brabmasrene, 2002). In other words, successful candidates (passed or conditioned) were compared to unsuccessful candidates (failed all or only passed one part) in order to identify personal success characteristics.

This study only surveyed persons who passed the CPA exam and graduated from an AACSB business school. Therefore, this study has the added dimension of obtaining information regarding the number of attempts needed to pass the CPA exam. Because of this design, this study not only provides insight into CPA exam success characteristics, but how those characteristics influence how quickly the exam is passed which would identify what characterizes (AACSB institutional and/or personal) are associated with student learning, a component of the educational experience.

METHOD

An on-line survey was sent to a sample of CPAs employed in public accounting across several eastern, southern, and mid-western states. One hundred and eighty responses were received, a response rate of 18 percent. Of those responding,


124 graduated from AACSB-accredited institutions, and these graduates were the subjects of this study. Information collected from respondents included the attempt on which the CPA exam was passed, the year passed, the highest degree held when passed, whether a self-study or classroom review course was used, whether the business college was AACSB accredited, the name and location of the college, SAT or ACT score, age when the CPA exam was passed, and overall college GPA. These data were collected before the recent testing format changes were made to the Uniform CPA Exam (as explained above).

Interestingly, 70% of the respondents were unaware of their school's AACSB accreditation status. The accuracy of respondents' self-reports of their college's AACSB accreditation status was verified against information provided by the AACSB on when the school obtained general accreditation and whether separate accounting accreditation had been obtained (www.aacsb.edu/accredited). With this information, it was determined whether the school was accredited when the respondent attended, the number of years the school has been AACSB accredited, and whether the school's accounting program has separate accreditation. ACT scores were converted using the College Entrance Examination Board (New York, 1999) conversion scale, which transforms the composite ACT score to its equivalent SAT score.
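Mechanically, such a concordance is just a lookup table. The sketch below illustrates the idea; the sample values are hypothetical placeholders, not the actual 1999 College Board scale used in the study.

    # Illustrative ACT-composite -> SAT conversion via a lookup table.
    # These values are hypothetical placeholders, NOT the actual 1999
    # College Entrance Examination Board concordance.
    ACT_TO_SAT = {36: 1600, 33: 1460, 30: 1340, 27: 1220, 24: 1110, 21: 990, 18: 870}

    def act_to_sat(act: int) -> int:
        """Map an ACT composite to its equivalent SAT score, falling back
        to the nearest tabulated composite when a value is missing."""
        if act in ACT_TO_SAT:
            return ACT_TO_SAT[act]
        nearest = min(ACT_TO_SAT, key=lambda k: abs(k - act))
        return ACT_TO_SAT[nearest]

    print(act_to_sat(30))  # -> 1340 (placeholder value)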

RESULTS

Descriptive statistics for graduates of AACSB-accredited schools are given in Table 1. The means and standard deviations are reported for the full sample and for two groups derived from the sample based on the number of attempts needed to pass the CPA exam. Those passing in one attempt and those passing in two attempts were combined into one group because there were no significant differences between them in any of the characteristics of interest; this is attributable to the former CPA exam testing format (described above). Persons passing the exam on the second attempt are more likely to have "conditioned" the exam (passed two or three sections on the first attempt) than persons requiring three or more attempts to pass.

Table 1: Descriptive Statistics, Mean (Standard Deviation): Total Sample, Passed in 1 or 2 Attempts Subgroup, and Passed in 3 or More Attempts Subgroup

Variable                                   Total             Passed in 1 or 2    Passed in 3 or More   Significant Difference
                                           (N = 124)         Attempts (N = 89)   Attempts (N = 35)     Between Subgroups
Age Passed CPA                             25.6 (4.9)        25.8 (5.35)         25.0 (3.26)           No
GPA                                        3.36 (.41)        3.41 (.42)          3.24 (.33)            Yes**
Number of Times to Pass CPA Exam           2.19 (1.34)       1.49 (.50)          3.94 (1.19)           Yes**
SAT Score                                  1165.80 (131.25)  1172.60 (124.10)    1148.40 (139.30)      No
Master's* Highest Degree                   .27 (.45)         .30 (.46)           .20 (.41)             No
Took Review Course* (versus self-study)    .57 (.50)         .60 (.50)           .49 (.51)             No
Number Years School AACSB Accredited       33.37 (20.90)     32.83 (20.98)       34.83 (20.91)         No
Accounting Program* AACSB Accredited       .73 (.48)         .70 (.46)           .80 (.41)             No

*Proportions   **Significant at p < .05

This makes the personal characteristics of persons passing the CPA exam on the first or second attempt more similar to one another than to the characteristics of persons passing the CPA exam on the third or later attempt. As a result of


these findings, survey subjects who passed the exam on the first or second attempt were combined and compared to survey subjects who passed the CPA exam on three or more attempts.

There were no significant differences between those who took one or two attempts and those who took three or more attempts to pass the CPA exam with respect to SAT score, highest degree attained, taking a review course to prepare for the CPA exam, number of years the school was accredited, or separate accounting program accreditation. The two groups differed significantly only with respect to GPA.

The correlations in Table 2 show that for the entire sample of CPAs the number of attempts to pass the CPA exam was significantly correlated only with undergraduate GPA. The negative correlation indicates that a higher GPA is associated with passing the CPA exam on fewer attempts. SAT score was significantly and positively related to GPA, having a master's degree, and length of accreditation. Interestingly, the length of the school's accreditation was significantly and negatively related to the business school's accounting program having separate accreditation; the longer the business school has been accredited, the less likely it is that the school's accounting program has separate accreditation. The method of study for the CPA exam was not significantly correlated with any other variable, while age was significantly correlated only with having a master's degree.

Table 2: Pearson Correlations for All Respondents, n = 124 (p-values in parentheses)

                        Times to Pass    Master's Degree   CPAprep          SAT              GPA              Yrs Accredited   Passed in 1 or 2
Master's Degree         -0.140 (0.122)
CPAprep                 -0.170 (0.058)   -0.044 (0.631)
SAT                     -0.159 (0.077)    0.180 (0.046)    -0.065 (0.474)
GPA                     -0.232 (0.010)    0.165 (0.068)     0.011 (0.899)    0.265 (0.003)
Yrs Accredited           0.078 (0.388)    0.118 (0.190)     0.045 (0.617)    0.242 (0.007)   -0.082 (0.367)
Passed in 1 or 2        -0.826 (0.000)    0.104 (0.249)     0.100 (0.271)    0.085 (0.346)    0.194 (0.031)   -0.044 (0.628)
Accounting Accredited    0.113 (0.213)   -0.068 (0.453)    -0.139 (0.124)   -0.080 (0.376)    0.125 (0.166)   -0.190 (0.035)   -0.104 (0.249)

To investigate the relationship between the number of attempts to pass the CPA exam (the dependent variable) and the study's independent variables, regression analysis was conducted. Table 3 presents the multiple regression analysis with the number of attempts to pass the CPA exam as the dependent variable (a sketch of this specification follows the table). GPA was the only independent variable significantly related to the number of attempts to pass, with a higher GPA associated with passing the CPA exam on fewer attempts.

261

AACSB Accreditation and the Homogeneity of the Business Educational Experience

Table 3: Regression Analysis with Number of Times to Pass CPA as Dependent Variable

Predictor   Coef         SE Coef      T       P
Constant    5.548        1.311        4.23    0.000
Degree      -0.3054      0.2663       -1.15   0.254
CPAprep     -0.4588      0.2359       -1.94   0.054
SAT         -0.0013206   0.0009819    -1.34   0.181
GPA         -0.6048      0.3052       -1.98   0.050
Yrsaccre    0.008759     0.005869     1.49    0.138
ActAcc      0.3608       0.2683       1.34    0.181

R-Sq = 13.0%   R-Sq(adj) = 8.6%

F = 2.92, p = .011
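For readers who want to reproduce this kind of model, the sketch below shows how the same specification could be estimated in Python with statsmodels. The DataFrame `df` and its column names are placeholders for the authors' survey data, which is not published with the paper.

    # Illustrative re-specification of Table 3's model. `df` is a
    # hypothetical DataFrame of the survey responses, one row per CPA,
    # with columns mirroring the table's predictors.
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_attempts_model(df: pd.DataFrame):
        """OLS with the number of attempts to pass the CPA exam as the
        dependent variable, as in Table 3."""
        model = smf.ols(
            "attempts ~ degree + cpaprep + sat + gpa + yrsaccre + actacc",
            data=df,
        )
        return model.fit()

    # result = fit_attempts_model(df)
    # print(result.summary())   # coefficients, t-statistics, R-squared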

In addition, a logistic regression analysis was conducted (Table 4). The dependent variable was a binary variable coded so that 1 identified those who took one or two attempts to pass the CPA exam and 0 those who took three or more attempts. The independent variables entered were GPA, SAT, master's degree (coded as 1, 0 for bachelor's only), formal review program to prepare for the CPA exam (coded as 1, 0 for self-study), years the school has been AACSB accredited, and whether the school's accounting program has separate AACSB accreditation (coded as 1, 0 if not). Logistic regression hypothesis tests are similar to multiple regression testing. The test for the overall significance of the logistic equation is based on the G statistic, which for this model indicates that the set of independent variables was not significant (G = 9.677, p = 0.208). Hypothesis tests for individual independent variables are based on the estimated regression coefficients using the z distribution; these results show that GPA, however, is significant.

Table 4: Binary Logistic Regression with Dependent Variable: Passed on 1 or 2 Tries = 1, More than 2 Tries = 0

Predictor   Coef       SE Coef    Z       P       Odds Ratio
Constant    -30.71     43.50      -0.71   0.480
Yrpassed    0.01426    0.02228    0.64    0.522   1.01
Degree      0.3820     0.5182     0.74    0.461   1.47
CPAprep     0.3627     0.4359     0.83    0.405   1.44
SAT         0.000668   0.001803   0.37    0.711   1.00
GPA         0.9584     0.5800     1.65    0.098   2.61
Yrsaccre    -0.01012   0.01087    -0.93   0.352   0.99
ActAcc      -0.7726    0.5341     -1.45   0.148   0.46

Log-Likelihood = -68.951
Test that all slopes are zero: G = 9.677, DF = 7, P-Value = 0.208

While testing the significance of individual regression coefficients is straightforward, it is difficult to interpret the relation between the independent variables and the probability that the dependent variable equals one, because the logistic model is nonlinear. This relationship can, however, be interpreted using the odds ratio, which is the impact on the odds of the dependent variable occurring of a one-unit increase in a single independent variable. The variable with the strongest impact is GPA: an increase in GPA of one unit, say from 2.5 to 3.5, means that the odds of passing the CPA exam on one or two attempts are about two and a half times higher (odds ratio = 2.61). Having a master's degree and taking a formal CPA study program also increase the likelihood of passing in fewer attempts, while SAT has essentially no effect.
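The odds ratios in Table 4 are simply the exponentiated logistic coefficients. A small check, using the published coefficients (the numbers come from Table 4; the function itself is added here as an illustration):

    import math

    def odds_ratio(coef: float, delta: float = 1.0) -> float:
        """Odds ratio implied by a logistic regression coefficient
        for a delta-unit increase in the predictor."""
        return math.exp(coef * delta)

    print(round(odds_ratio(0.9584), 2))        # GPA: 2.61, matching Table 4
    print(round(odds_ratio(-0.7726), 2))       # ActAcc: 0.46, matching Table 4
    print(round(odds_ratio(0.9584, 0.5), 2))   # a half-point GPA rise: ~1.61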

DISCUSSION

The purpose of this research was to see whether AACSB-based differences among accredited institutions had an effect on CPA exam performance. Two institutional measures based on AACSB accreditation were assessed, the number of years the school has been accredited and whether the accounting program is separately accredited, along with several personal measures (GPA, SAT, etc.). A


sample of practicing CPAs was surveyed to provide the data used in this study. The only significant mean difference between those who needed one or two attempts to pass the CPA exam and those who needed three or more attempts was GPA. There were no significant differences in the means for SAT scores, advanced degree, preparing for the exam via a commercial program, years of AACSB accreditation, or separate accounting program accreditation.

The multiple regression model with the number of attempts to pass the CPA exam as the dependent variable showed that only GPA had a significant impact; a higher GPA was associated with passing on fewer attempts. A logistic regression analysis, with the dependent variable defined as passing the CPA exam on one or two attempts versus three or more attempts, revealed only one significant difference in the likelihood of passing the CPA exam in fewer attempts; again, GPA was the significant explanatory variable. The odds ratios from the logistic analysis show that GPA, having a master's degree, and preparing for the CPA exam via a classroom study program increase the chance of passing the CPA exam on fewer attempts.

The relationship of higher GPA with fewer attempts to pass the CPA exam can be understood in a straightforward way. Potential CPAs who have high GPAs would be expected to have a greater understanding of accounting, which would in and of itself better prepare them for the CPA exam. If GPA indicates acquisition of accounting knowledge, an inverse relationship between GPA and the number of attempts to pass the CPA exam, which under the old testing format was widely considered to be a "textbook" exam based on accounting knowledge, would be expected. As an indication of success on tests during a student's college career, a high GPA would also be expected to correlate with fewer attempts to pass the CPA exam. These results support the line of reasoning that effort in school, as reflected in GPA, is associated with success on the CPA exam.

The two measures based on AACSB accreditation did not emerge as factors explaining differences in the CPA passing rate. What might account for this finding? One possible explanation is that the measures used to assess differences among AACSB accreditation are deficient and simply do not capture the impact that accreditation has on a school of business. Alternative approaches to measuring quality differences among AACSB schools might capture an effect that neither years of accreditation nor separate accounting program accreditation captures. For instance, assessing faculty quality at AACSB schools might be another approach. AACSB accreditation does require faculty to meet standards of scholarship and have requisite degree and professional standing. Using faculty publications could be a better measure of differences among AACSB institutions. The argument could be made that higher quality faculty would lead to higher quality programs and to graduates better prepared to take the CPA exam.

Another explanation for the finding that the two AACSB institutional measures do not affect CPA exam performance is that schools that attain accreditation are on a level field relative to each other. In other words, by attaining and keeping accreditation, business schools have gone through a review process that makes schools more alike, resulting in less variation among institutions. This is akin to the effect of going through military boot camp; recruits go through a process designed to reduce diversity and inculcate a common military ethic. With the AACSB requirement that business schools maintain a program of continuous planning, assessment, and quality improvement, the educational environment at AACSB accredited schools might be more homogeneous. Thus, there might not be significant variation or differences in the quality of the educational experience of students who graduate from AACSB schools.

The correct interpretation of this study’s results might be that there may not be significant differences among AACSB accredited institutions such that the AACSB accredited school attended is not as significant as the fact that the school is accredited to begin with. The authors of this study found in a prior study that CPAs who graduated from colleges with AACSB accreditation were more likely to pass the CPA exam on fewer attempts compared to those graduating from non­AACSB accredited schools (see, Howell and Heshizer, 2006). The finding that GPA is the only factor that contributes significantly to explaining the number of attempts to pass the CPA exam at AACSB accredited schools could indicate that AACSB schools offer a homogeneous educational experience. Hence, students at AACSB business schools may be provided a similar quality education, and what explains differences in CPA passing rates is student effort. Since the institutions are alike in terms of their educational environment, CPA exam performance is most influenced by GPA, a measure of learning. The study indicates that at AACSB colleges with similar quality students, students who earn higher grades have passed the CPA exam on fewer attempts. It is somewhat satisfying to find that student learning is predictive of success on the CPA when AACSB based institutional and student characteristics are controlled.

In sum, this study proposed to see if AACSB based differences in business schools affected the educational experience of students. We found that only GPA had a significant impact on the respondents passing the CPA exam on fewer attempts. The two institutional measures, years of accreditation and accounting program accreditation did not influence the number of


attempts to pass the CPA exam. The educational experience of these students was not affected by our measures of AACSB school quality. As suggested, it is possible that undergoing the accreditation process and retaining accreditation status reduce differences among AACSB-accredited schools, making them more alike (i.e., homogeneous) and creating an educational experience in which effort, as reflected by GPA, is the only differentiating factor.

REFERENCES
AACSB International. (2004). Accredited members. Retrieved from www.aacsb.edu/accredited members.
AACSB International. (2005). Eligibility procedures and standards for accounting accreditation.
AACSB International. (2006). Accreditation. Retrieved March 7, 2006, from www.aacsb.edu/accreditation.
AACSB International. (2006). Accredited members. Retrieved from www.aacsb.edu/accredited members.
AACSB International. (2006). Eligibility procedures and standards for business accreditation.
Ashbaugh, D.L. & Thompson, A.F. (1993). Factors distinguishing exceptional performance on the Uniform CPA Exam. Journal of Education for Business, 68(6), 334-340.
Bailey, A.R. & Bentz, W.R. (1991). Accounting accreditation: Change and transition. Issues in Accounting Education, (Fall), 168-177.
Brahmasrene, T. & Whitten, D. (2001). Assessing success on the Uniform CPA Exam: A logit approach. Journal of Education for Business, Sept./Oct. 2001, 45-55.
Burke, J. & D'Aquila, J. (2004). A crucial test for new CPAs. CPA Journal, 74(1), 58-61.
Churyk, N. & Mantzke, K. (2005). The computer-based CPA exam. CPA Journal, 75(7), 60-62.
Gaharan, C., Chiasson, M.A., Fourst, K.M. & Mauldin, S. (2007). AACSB International accounting accreditation: Benefits and challenges. The Accounting Educators' Journal, XVII, 13-29.
Handel, K.C. (2003). Changes to the CPA education requirements and examination. State of Georgia. Retrieved January 11, 2007, from http://sos.georgia.gov/plb/accountancy/cpachanges.htm.
Holder, W. & Mills, C. (2001). Pencils down, computers up: The new CPA exam. Journal of Accountancy, March 2001, 57-61.
Howell, C. & Heshizer, B. (2006). AACSB accreditation and success on the Uniform CPA Exam. Applied Business and Economics, 6(3), 65-72.
Howell, C. & Heshizer, B. (2008). Characteristics that assist future public accountants pass the CPA exam on fewer attempts. Applied Business and Economics, 8(3), 57-66.
Kren, L., Tatum, K.W. & Phillips, L.C. (1993). Separate accreditation of accounting programs: An empirical investigation. Issues in Accounting Education, (Fall), 135-141.
Lindsay, D.H. & Campbell, A. (2003). An examination of AACSB accreditation status as an accounting program quality indicator. Journal of Business and Management, 9(2), 125-135.
Marts, J.R., Baker, J.D. & Garris, J.M. (1988). Success on the CPA examination in AACSB accredited and non-accredited schools. Accounting Educators' Journal, 1, 74-91.
Ponemon, L. & Schick, A. (1998). Arguments against the CPA exam to gauge accounting program success / Reply / Rebuttal. Issues in Accounting Education, 13(2), 421-429.
Pustorino, A. (1996). Some thoughts on the Uniform CPA Examination. CPA Journal, 66(8), 36-39.
Shafer, W., Kunkel, J.G. & Hansen, K.A. (2003). Effects of the 150-hour education requirement. CPA Journal, 73(1), 72-74.
Sinning, K.E. & Dykxhoorn, H.J. (2001). Processes implemented for AACSB accounting accreditation and the degree of faculty involvement. Issues in Accounting Education, (May), 181-204.
Smith, A. (1992). An assessment of the cognitive levels of the Uniform CPA Examination and the CMA Examination. Doctoral dissertation, Northern Illinois University, DeKalb, IL.
Whitten, D. & Brahmasrene, T. (2002). Passing the Uniform CPA Exam: What factors matter? The CPA Journal, Nov. 2002, 60-62.


HIGHLY QUALIFIED AND CULTURALLY COMPETENT: IS IT TOO MUCH TO EXPECT OF PUBLIC SCHOOL TEACHERS?

Rodney Davis Troy University, USA

ABSTRACT

The medical profession has already begun to realize that its practitioners need to be culturally competent to work with an increasingly diverse population. Medical schools are already training healthcare workers to work proactively in multi-cultural settings, and researchers have noted that becoming culturally fluent plays an important role in the healing process. Education has lagged behind in this effort. According to Diller and Moule in their book Cultural Competence: A Primer for Educators, only 7 states have specific teacher standards regarding cultural competence. The literature shows that a truly highly qualified teacher, one that is also effective in the classroom, requires a new kind of educator: one that is culturally competent.

Keywords: Cultural Competence; Teaching Effectiveness; Highly Qualified

INTRODUCTION

The reauthorization of the Elementary and Secondary Education Act (PL 107-110), more commonly known as the No Child Left Behind Act (NCLB), has had a profound impact on public education in its seven-year existence. For the first time in our nation's history the federal government has taken a direct, if not intrusive, role in what has traditionally been a state's right: educating its citizens (McColl, 2005). By mandating educational outcomes (e.g., every child will be reading by the third grade by the year 2014), annual standardized testing, adequate yearly progress, and a highly qualified teacher (HQT) in every classroom, the federal government has made it clear that it is no longer business as usual in the nation's schools. The days of funding schools with no strings attached are gone; schools will now be held accountable for the "product" that they produce. This piece of legislation makes sure that America is getting something for the money that it spends on its schools. If NCLB is successful in achieving its goal of raising the academic bar, both in what is taught and who teaches it, it will change the very face of education for the next 50 years.

Contained within No Child Left Behind are some very lofty aspirations. The most far-reaching new requirement, and the one that worries urban and rural districts, is related to teacher quality (Berry, 2004). It has been argued by policymakers and reported by the national media that too many classrooms are under the control of teachers who are not qualified to be there; that is, teachers who are certified and hold a college degree but may be teaching out of field. The president and his education advisors emphasize that not having a qualified teacher in the classroom is one of the reasons for the low achievement of students (U.S. Department of Education, 2003). This situation is more likely to occur in urban schools and in schools that serve students of low socio-economic status (SES) in both urban and rural districts. To remedy this condition, we must improve the quality of those who are responsible for the education of our children (U.S. Department of Education, 2003). NCLB uses the term Highly Qualified Teacher to describe the administration's plan for improving the quality of America's teaching force.

This paper will explore the definition of HQT not as it relates to teaching effectiveness, because the federal definition says nothing about effective teachers, but as it relates to another issue: the need for culturally competent teachers. Why did the drafters of the NCLB legislation settle for the definition of highly qualified rather than tackle the tougher question of the effective teacher (and in the process examine the role of culture as it impacts the learning process)? Was it that highly qualified was easier to measure and quantify? The heart and spirit of No Child Left Behind is a demand for effective teachers, with which no one can argue. Parents, teachers, administrators and policy-makers all want the most qualified and effective person teaching their children. What role does culture play in the teaching and learning relationship? Some researchers in the medical and psychological fields have begun to realize that the culture of the patient, and the practitioner's ability to work with the patient's culture, are important factors in the healing process (Leishman, 2004; Odom-Forren, 2005; Stanhope, 2005). No Child Left Behind guarantees parental choice rights so that, no matter their background, race, ethnicity, and economic station in life, their


children will have access to the best education available. At the same time it ignores the power of culture in the teaching and learning process. The central focus of this article is the culturally competent teacher.

TEACHER QUALITY

The impact of a highly qualified teacher cannot be overestimated when evaluating student achievement. In his First and Second Annual Reports on Teacher Quality, former education secretary Rod Paige states that the research shows teachers' general cognitive ability is strongly correlated with effectiveness, as are their experience level and content knowledge, while pedagogical training and certification have yet to be linked to achievement and teacher effectiveness. While this is not necessarily an accurate reflection of the research (Darling-Hammond, 2002), it supports a well-known idea first posited by James Coleman in his 1967 report evaluating the effectiveness of the Elementary and Secondary Education Act of 1965 (No Child Left Behind is the 2002 reauthorization of that law): next to the parents, the child's classroom teacher is most directly responsible for student achievement. This places an enormous responsibility upon the shoulders of teachers. It also underscores the need to ensure that every professional teacher who steps into a classroom has the qualifications and the ability to be effective. NCLB not only requires qualified teachers in US classrooms, it mandates that they be highly qualified. So, what does this mean?

HIGHLY QUALIFIED: WHAT'S IN A NAME?

According to NCLB, there was to be a "highly qualified teacher" in every classroom by the end of the 2005/06 school year. To be highly qualified, teachers must hold at least a bachelor's degree from a four-year institution (a degree requiring an academic major, e.g. mathematics rather than math education), hold full state certification, and demonstrate competence in their subject area (NCLB, 2002). This definition says nothing about being qualified and effective; it suggests that so long as teachers are "highly qualified" they will be effective in the classroom. In reality, this is not always the case. One could easily make the case that even though an individual teacher meets the criteria for being highly qualified, they may not be effective in the classroom as measured by standardized testing or some other scale of professional performance. Conversely, a person who is effective in the classroom may not meet the definition of highly qualified under NCLB.

What meaning does this definition convey? It says that those who stand before classrooms are not just minimally qualified, barely able to legally teach; they are more than average, they are highly qualified. This vague definition leaves a lot of room for states to define HQT in a manner that best suits their needs while still meeting the federal guidelines for compliance. In addition, it does not settle the issue of what an effective teacher looks like. Isn't that what we (the educational stakeholders) are looking for? It does not connect the teacher's credentials with their ability to create an environment where students learn. The definition also fails to address the need for educators who are prepared and willing to use culture to create a positive learning environment.

In all fairness, the highly qualified teacher definition reflects the minimum criteria for entry into the profession of teaching. Where it becomes problematic is that NCLB espouses the platitudes of seeking excellence in teacher quality and student performance and yet sets the bar pretty low. Is the teacher representative of this definition the kind of person who will be able to prepare our children for the future? Are the elements of the HQT definition enough to expect of educators who are charged with such an awesome task? Berry (2004) suggests they are not, stating, "Given the growing diversity of America's public school students and the demands that all students meet AYP [adequate yearly progress], there is substantial evidence that teachers need more than content knowledge to be effective." Darling-Hammond (2002) finds that teachers need pedagogical training and experience to be effective in the classroom.

INCREASED DIVERSITY BRINGS NEW CHALLENGES

One does not need a crystal ball to see that the composition of the American classroom will continue to change over the next 50 years. If demographic predictions hold true, the American classroom will be radically different 50 years from now. According to Hodgkinson (2003), classrooms are going to be increasingly diverse. By the mid 21st century, Hodgkinson and other researchers expect elementary and secondary enrollments to be composed of 51% ethnic and racial minority students (Champeneria, 2004; Futrell, 2003; Parish, 1996). Others predict that the current minorities will become the majority in as little as 20 years (Flowers, 2005).

The classroom filled with Caucasian children found on Leave It To Beaver, a popular 1960s television show, doesn’t exist in many public schools across the nation and will no longer exist in 2050. If the demographic predictions are accurate, then this


increased diversity is going to present challenges for educators and policymakers. With this diversity comes a cultural diversity that impacts the classroom. In spite of this reality, many teachers conduct their classes as if they were standing in front of a room filled with students who look like little Theodore Cleaver (Jerry Mathers). They refuse to see that the times, they are a-changin'. When challenged, they respond that they "do not see color." By this they mean that education should be color blind or culturally neutral and that race and ethnicity, or in the broadest sense culture, does not impact instruction. The available research indicates that the contrary is true (Banks, 2001; Cross, 1989).

This "color blind" perspective has been explained by Kenneth Pike (1967) as an "etic" perspective. He stressed that some anthropologists and sociologists believe it is important, if not vital, to study individuals independently of the larger cultural group to which they belong, because each member of the group has individual characteristics. The problem with this viewpoint is that it denies the value of the larger structure and its impact on the individual. True enough, not all members of a particular ethnic group see everything the same way, but they share many things in common, and it is this commonality, along with family values and environmental factors, that shapes the development of the individual (Surbone, 2004). The alternative viewpoint is Pike's "emic" perspective, which insists that to understand an individual one must understand the larger group to which they belong (Pike, 1967; King, 2004). A student's culture and race cannot, and should not, be overlooked as unimportant to understanding the individual. If teachers accept the "etic" viewpoint, they deny the existence of a huge part of the individual's identity.

We cannot dismiss the role that the teacher's culture (and race, for that matter) and the culture and race of the student play in the classroom. What does culture do for us? It helps us make sense of our world. Surbone (2004) states that culture is a reference framework that helps individuals interpret their world; it is central to our personal identity. Robins defines culture as a set of learned beliefs and behaviors that control how its members view the world (McClean, 2004). To say that my being a white male of German, Irish, and Welsh descent does not impact how I think and act is to deny a great deal of who I am; I view the world through those lenses. Culture thus serves as a kind of filter that enables people to sort out the information coming at them from many different sources, and for the public school student this most certainly includes the teacher. To overlook its importance is a critical mistake that might have a bearing on student achievement (Odom-Forren, 2005).

Those "tried and true" methodologies that were successful in years gone by may not meet the needs of the multi-ethnic, multi-racial, multi-lingual, and multi-cultural classrooms our next generation of teachers will stand before. A new kind of teacher is needed: one that can not only function successfully in a diverse environment but also thrives in it.

IS THERE A DOCTOR IN THE HOUSE?

The term cultural competence is relatively new to education, although it has been around for the past 15 years. One will not find much about it in US educational journals; we are still touting our tolerance for diversity, cultural sensitivity and cultural awareness. Some ground-breaking works have been undertaken in this area (see Howard; Ladson-Billings), yet most educators are still unaware of it. Where one will find scads of material is the medical and psychological fields. Searching the PubMed database for the term cultural competence produces 1,365 references in a plethora of major medical journals spanning both healthcare and psychology, with articles dating from the early 90s to last month. The articles show that the health care profession has come to some very progressive conclusions regarding the elements of effective care and the role that culturally competent providers play in patient recovery:

1. America is much more diverse today than 20 years ago and will continue to diversify. (Champeneria, 2004; Fitch, 2004)
2. The general diversity means increased diversity in healthcare practice. We are not just seeing increased diversity in one region; we are seeing it across the nation.
3. Sensitivity to diverse populations is not enough; practitioners need to be proactive in their understanding of culture and have the ability to use a patient's beliefs in the care protocol. This may mean designing new kinds of hospital gowns that reflect differing understandings of modesty. To reach this level of care, workers at all levels must be culturally competent. (Carter, 2003; Flowers, 2004; Kachur, 2004)
4. One size fits all does not really fit all. Effective healthcare treats the individual as an individual and recognizes that culture plays a role in the patient's healing. (Flowers, 2004)
5. Race, ethnicity, and language have a substantial impact on doctor-patient relationships. (Cole, 2004; Surbone, 2004)
6. New ways of working with and relating to the patient are needed. Providers need to be open to alternative medicines and holistic approaches to healthcare. (Flowers, 2004)
7. Understanding the culture of the patient is important in providing quality service. (Kachur, 2004)


8. Healthcare workers will need to be trained to work effectively with diverse populations. This means that workers must be culturally competent, and this training must be part of an on-going effort. (Caffrey, 2005)
9. The objective is providing effective care for the patient. Caregivers who are not culturally competent may impede the healing process. (Matus, 2004; McClean, 2004; McGinnis, 1994; Odom-Forren, 2005)
10. The results of culturally incompetent care can be serious for the patient. (Cartledge, 2002)
11. Leaders in healthcare must take a proactive approach in helping their employees become culturally competent. (Larson, 2005)

A simple review of the medical literature reveals that, as a profession, the healthcare industry is coming to understand the importance of culture in the healing process. Providers are realizing that to give effective care, their employees must understand their own culture and its influence on their actions, as well as the culture of their patients. In addition, effective care happens when caregivers are culturally competent, not just culturally sensitive. Finally, cultural competence is not something one achieves once but something toward which a person continually strives.

These articles also prompt a question: if these conditions hold for the medical profession, which is client/patient centered, do they not also hold for education, which promotes its child-centeredness? In other words, is the medical analogy relevant to the education profession? Students are not patients in need of a cure, but in many ways the comparison between medicine and education is accurate and relevant. At their core, both professions are focused on people. Whether those people are called patients or students, the doctor and the teacher are dedicated to their physical and mental well-being. Figure 1 briefly identifies the ways in which the medical profession and education are similar.

Doctors | Teachers
Preparation requires an advanced degree | Same
Residency required | Internship/student teaching required
Pre-service doctors must pass board exams to earn their license to practice medicine | Pre-service teachers must pass an exam to earn a teaching certificate/license
Communication is an important part of the job | Same
Deals with diverse populations | Same
Works with a clientele that would rather be elsewhere | Same
Time spent advising clients regarding their medical situation | Time spent advising parents regarding their children’s academic performance
Diagnose symptoms | Determine learning needs
Prescribe treatments | Prepare materials to meet needs
Patients must be involved in the treatment process for optimal results | Students must be involved in the learning process for maximum achievement
Observable results (patient gets better or worse) | Observable results (student demonstrates learning or not)
Medicine is a science | Education is an art
Doctors practice their craft | Teachers practice their art
Figure 1: Comparison of the Medical and Education Professions

Is there a connection between a teacher’s cultural competence and his or her teaching effectiveness? The medical research indicates that there may be a link between effective medical care and understanding a patient’s culture. Theoretically, we can extrapolate that there may be a connection between effective teaching and a teacher’s cultural competence. Increased diversity throughout the population requires service industries to develop new methods of customer care. These new methods not only require an awareness of the client’s cultural background but also mean that the service provider must be skillful in responding to the client in culturally relevant ways.

CULTURAL COMPETENCE: TOWARDS A WORKING DEFINITION

Diller (2005) observes that in a recent survey of 24 states, only 7 had specific requirements concerning cultural competence. The majority had nothing at all, or only generic language about diversity. Those 7 that had a specific requirement for cultural competence had no specific way to measure performance against this objective. For years educational policy makers have promoted the diversity of public schools and tolerance of it. Countless workshops have been conducted over the years addressing the importance of understanding cultural diversity and the requisite sensitivity that goes along with it. The medical profession has sounded a clarion call that understanding or sensitivity alone is not enough to be an effective health care worker. What is needed is cultural competence. (Carter, 2003)

The term cultural competence has been used several times throughout this paper. It was chosen because it connotes a core ability missing in many teachers who might otherwise be considered highly qualified according to No Child Left Behind. To understand it adequately, one must first look to the medical profession for a definition, because it is there that the term is used most often. Second, a working definition that relates to the circumstances found in education is proposed.

Cultural competence has been defined as:

“A willingness to recognize and accept that there are other legitimate ways of doing things, as well as a willingness to meet the needs of those who are different, including those with disabilities.” (Cartledge, 2002)

“Delivery of care within the context of appropriate knowledge, understanding, and appreciation of cultural distinction.” (Pediatrics, 1999)

“Campinha-Bacote has defined cultural competence as the process in which the health care provider continuously strives to achieve the ability to effectively work within the cultural context of a client, individual, family or community. This process requires nurses to see themselves as becoming culturally competent rather than being culturally competent.” (Doutrich, 2004)

“The ability to understand and attend to the total context of the client’s situation, it involves knowledge, attitudes and skills.” (Fitch, 2004)

“Barrera & Kramer define cultural competence as the ability of service providers to respond optimally to all children, understanding both the richness and the limitations of the socio-cultural contexts in which the children and families as well as the service providers themselves may be operating.” (Le Roux, 2002)

“Successfully teaching children who come from cultures other than your own.” (Diller, 2005)

From these definitions we can identify the key aspects of a working definition of cultural competence. The major element they have in common is that to be culturally competent is to be proactive: to be so knowledgeable of, and comfortable with, other cultures that the provider, or in this case the teacher, can create a learning environment that is culturally friendly and conducive to individual learning for all students. This is much more than a simplistic awareness of, or sensitivity to, diversity. Second, the literature has shown that before one can understand other cultures, the individual must understand his or her own culture and how it influences behavior. Third, these definitions indicate a willingness to be culturally competent; in reality no one can be forced to be or become anything, so the desire must begin with the individual. Fourth, cultural competence is more than understanding race or gender; it involves knowledge and action in relation to beliefs, values, rituals, and language. Finally, cultural competence extends beyond the patient or the student to include the family and the larger cultural group.

Taking these definitions and synthesizing them into a working definition results in the following. A culturally competent teacher is:

1. Knowledgeable of the specific elements of other cultures as well as his or her own.
2. Sensitive to the needs of students from other cultures.
3. Able to incorporate the values, beliefs, traditions, customs, rituals, religion, and language of diverse cultures into the teaching and learning process (no small feat!).
4. Aware of the perceptions of distinct culture groups towards education and public schooling.
5. Able to communicate with parents and students from other cultures.
6. Willing to use alternative methods that make material culturally relevant to the students.
7. Aware that because students come from different cultures, they do not see the world through the same lens; what is appropriate in one culture may not be appropriate in another.

IS THERE A RELATIONSHIP BETWEEN EFFECTIVE TEACHERS AND CULTURAL COMPETENCE?

There is a growing body of research describing the link between teaching effectiveness and cultural competence (see McAllister & Irvine, 2000). In the medical literature, there is a belief that being an effective healthcare worker demands that one be culturally competent. Dr. Richard Carmona, U.S. Surgeon General, supported this belief when he stated, “We have to really appreciate the culture these patients come from … and embrace it, because we cannot be effective in our jobs as health professionals without understanding how patients understand their health and illness.” (Odom-Forren, 2005, p. 79) While he was speaking of medical professionals, the analogy to teachers is easily drawn: to be an effective teacher requires that one be culturally competent.

On a basic level, it seems to make sense that a teacher who is knowledgeable about the various cultures within his or her classroom, and who is able to use that knowledge to construct a learning environment that incorporates diverse beliefs, rituals, and learning styles, would be effective. At this time, however, there is insufficient empirical evidence demonstrating a correlation between teaching effectiveness and cultural competence. Certainly this is an area that needs further study.

WHY DO WE NEED TO BE CULTURALLY COMPETENT?

On the surface, the question of why one should be culturally competent may not need to be asked, because it seems to have an obvious answer. Why be culturally competent? Aside from the fact that it is the ethical thing to do, here are some benefits:

1. American classrooms are becoming, and will continue to become, more diverse. Having the ability to work with multiple cultural and racial groups strengthens the effectiveness of a highly qualified teacher. Imagine sitting in a meeting where the speaker used terms that were unfamiliar to you, referred to things that only an insider would know, and spoke in condescending tones. How would you feel? You might get up and walk out. You might start to look out the window or even text-message someone from your cell phone. Adults would not put up with a situation like this. Now imagine that you are a 9-year-old African American boy in a rural school. How do you react? Probably you would fidget in your seat, look out the window, and talk with your neighbor, and the teacher would send you to the office to see an administrator. Many of the discipline problems in schools might be related to cultural incompetence on the part of the teaching staff. (Cartledge, 2002) It is incumbent upon teachers to learn how to use cultural peculiarities to enhance the learning taking place in the classroom.
2. A greater degree of participation by the student in the learning process. What is true of horses (you can lead them to water but you cannot make them drink) is also true of humans: you can teach them, but you cannot make them learn. According to Parker Palmer (1998), the role of the teacher is to build bridges between themselves and the student and between the student and the subject. Using terms and techniques that are culturally responsive may encourage students to participate in the learning process to a greater degree. This tells students that their beliefs are important and relevant to the learning process. It also tells them that if they are going to learn anything, they are going to have to be involved in the effort.
3. A reduction in the failure rate. How many students fail each year because they are not being taught by someone who understands how culture impacts the learning environment? The basis of the teaching and learning process is communication. If teachers fail to communicate effectively, students will be less likely to learn. (Fitch, 2004; Le Roux, 2002)
4. Cultural competence may help to change students’ and their families’ attitudes toward education. Medical professionals have realized that there is a segment of our population that is underserved by the healthcare profession. This group tends to be from lower socioeconomic groups, of limited English proficiency, minority, and culturally and ethnically diverse. Their attitudes about health, the causes of illness, and how to cure it can alienate them from western medicine, which does not recognize the value of homeopathic remedies or the spiritual dynamic in the healing process. When health care workers are culturally competent, they are more adept at understanding these barriers and providing solutions that deliver more effective care for their patients. The same attitudes that keep people from medical care can also keep their children from engaging in the western educational system. For some, western education has not been a positive experience. Blame is easy to dole out, but a culturally proactive response can begin the process of changing attitudes.

CONCLUSION

Do we need to change the definition of Highly Qualified? Are we asking too much of our public school teachers to be both highly qualified and culturally competent? With many more tasks being placed upon teachers and administrators as a result of increased accountability legislation, some might argue that asking educators to find ways to be culturally proactive could be the “last straw” for an over-burdened profession. But when we consider the primary goal of teaching, that students learn, then we must accept that culturally competent teachers may be more successful in meeting this goal. The medical literature suggests there is no question that providing the highest standard of care for an increasingly diverse patient population requires workers equipped to proactively use culture in the care of their patients. What is true for the medical profession is also true for education. We are not just experiencing increased diversity in doctors’ offices and hospitals; we are seeing it in every segment of society, including the K-12 classroom.

Parker Palmer (1998) states that it is the job of the teacher to build connection points between the student and the subject and between the student and the teacher. Cultural differences can be a barrier to creating both connections, and the results of such disconnects can be increased inappropriate behavior and school failure. (Cartledge, 2002) A culturally competent teacher knows and understands the culture of his or her students and can use this knowledge to create linkages for them. In this way the student is not forced to assimilate to a foreign system; rather, the “system” adapts to the needs, experiences, and background of the student with the goal of increasing learning.

REFERENCES

Banks, J.A., Cookson, P., Gay, G., Hawley, W.D., Irvine, J.J., Nieto, S., et al. (2001). Diversity within unity: Essential principles for teaching and learning in a multicultural society. Phi Delta Kappan, 83(3), 197-203.
Berry, B. (2004). Recruiting and retaining “highly qualified teachers” for hard-to-staff schools. Bulletin, 88(638), 5-27.
Caffrey, R.A., Neander, W., Markel, D., & Stewart, B. (2005). Increasing cultural competence of nursing students: Results of integrating cultural content in the curriculum and an international immersion experience. Journal of Nursing Education, 44(5), 234-240.
Carter, R.T. (2003). Becoming racially and culturally competent: The racial-cultural counseling laboratory. Journal of Multicultural Counseling and Development, 31(1), 20-30.
Cartledge, G., Kea, C., & Simmons-Reed, E. (2002). Serving culturally diverse children with serious emotional disturbances and their families. Journal of Child and Family Studies, 11(1), 113-126.
Champaneria, M.C., & Axtell, S. (2004). Cultural competence training in U.S. medical schools. Journal of the American Medical Association, 291(17), 2142.
Cole, P.M. (2004). Cultural competence now mainstream medicine: Responding to increasing diversity and changing demographics. Postgraduate Medicine, 116(6), 51-53.
Cross, T. (1989). Toward a culturally competent system of care. Washington, D.C.: Georgetown University.
Culturally effective pediatric care: Education and training issues. (1999). Pediatrics, 103(1), 167-170.
Darling-Hammond, L., & Youngs, P. (2002). Defining “highly qualified teachers”: What does “scientifically-based research” actually tell us? Educational Researcher, 31(9), 13-25.
Diller, J., & Moule, J. (2005). Cultural competence: A primer for educators. Belmont, CA: Thomson Wadsworth.
Doutrich, D., & Storey, M. (2004). Education and practice: Dynamic partners for improving cultural competence in public health. Family Community Health, 27(4), 298-307.
Fitch, P. (2004). Cultural competence and dental hygiene care delivery: Integrating cultural care into the dental hygiene process of care. Journal of Dental Hygiene, 78(1), 11-21.
Flowers, D.L. (2004). Culturally competent nursing care: A challenge for the 21st century. Critical Care Nurse, 24(4), 48-52.
Flowers, L. (2005). Giving culturally competent care: Another element in patient safety. Operating Room Manager, 21(2), 1, 17-18, 24.
Futrell, M., Gomez, J., & Bedden, D. (2003). Teaching the children of a new America: The challenge of diversity. Phi Delta Kappan, 84(5), 381-385.
Hodgkinson, H. (2003). Educational demographics: What teachers should know. In A.C. Ornstein, L.S. Behar-Horenstein, & E.F. Pajak (Eds.), Contemporary Issues in Curriculum (pp. 349-353). Boston: Allyn & Bacon.
Kachur, E.K., & Altshuler, L. (2004). Cultural competence is everyone’s responsibility! Medical Teacher, 26(2), 101-105.
King, J.T. (2004). Leaving home behind: Learning to negotiate borderlands in the classroom. Intercultural Education, 15(2), 139-149.
Larson, L. (2005). Is your hospital culturally competent? Trustee, 58(2), 20-23, 30.
Leishman, J. (2004). Perspectives of cultural competence in health care. Nursing Standard, 19(11), 33-38.
Le Roux, J. (2002). Effective educators are culturally competent communicators. Intercultural Education, 13(1), 37-48.
Matus, J.C. (2004). Strategic implications of culturally competent care. Health Care Manager, 23(3), 257-261.
McAllister, G., & Irvine, J. (2000). Cross-cultural competency and multicultural teacher education. Review of Educational Research, 70(1), 3-25.
McColl, A. (2005). Tough call: Is No Child Left Behind constitutional? Phi Delta Kappan, 86(8), 602-610.
McGinnis, S. (1994). Cultures of instruction: Identifying and resolving conflicts. Theory Into Practice, 33(1), 16-22.
McLean, M. (2004). Is culture important in the choice of role models? Experiences from a culturally diverse medical school. Medical Teacher, 26(2), 142-149.
Odom-Forren, J. (2005). Cultural competence: A call to action. Journal of Perianesthesia Nursing, 20(2), 79-81.
Palmer, P. (1998). The courage to teach. San Francisco: Jossey-Bass.
Parish, R., & Aquila, F. (1996). Cultural ways of working and believing in school. Phi Delta Kappan, 78(4), 298-305.
Pike, K.L. (1967). Language in relation to a unified theory of the structure of human behavior. The Hague: Mouton.
Stanhope, V., Solomon, P., Pernell-Arnold, A., Sands, R.G., et al. (2005). Evaluating cultural competence among behavioral health professionals. Psychiatric Rehabilitation Journal, 28(3), 225-233.
Surbone, A. (2004). Cultural competence: Why? Annals of Oncology, 15(5), 697-699.
U.S. Department of Education, Office of Policy Planning and Innovation. (2003). Meeting the highly qualified teachers challenge: The secretary’s second annual report on teacher quality (ED-00-CO-00126). Washington, D.C.: Author.



PRESERVICE TEACHERS’ AWARENESS OF CYBERBULLYING ISSUES

Mumbi Kariuki and Thomas Ryan Nipissing University, Canada

ABSTRACT

Preservice teachers graduating from a postgraduate teacher preparation program were surveyed on their awareness of cyberbullying issues in schools. Preliminary results indicate that the majority of the preservice teachers in the study are aware that cyberbullying is a problem in schools and that children are affected by it. A majority of the preservice teachers are concerned about cyberbullying and would do something if it occurred in their school. However, only a small number felt confident that they would be able to identify instances of cyberbullying, and an even smaller number felt confident about managing it.

Keywords: Cyberbullying, Preservice Teachers

PRESERVICE TEACHERS’ AWARENESS OF CYBERBULLYING ISSUES

Cyberbullying has been defined as "the use of information and communications technologies such as e-mail, cell phone and pager messages, instant messaging, defamatory personal web sites, and defamatory online personal polling websites, to support deliberate, repeated, and hostile behaviour by an individual or group, that is intended to harm others" (Belsey, 2005).

Cyberbullying has also been defined as “the use of electronic devices and information, such as e-mail, instant messaging (IM), text messages, mobile phones, pagers and web sites, to send or post cruel or harmful messages or images about an individual or a group” (SperoNews, 2009).

Bullying as a human behaviour is not new. The idea of school being a setting where most bullying activities originate or happen is also not new. Many readers of this paper will remember the insidious note being passed under the desks in a classroom, and many might remember the effects of being on the receiving end of its contents. One of the newer methods of bullying has led to the introduction of the prefix "cyber" before bullying. Cyber is a prefix "used in a growing number of terms to describe new things that are being made possible by the spread of computers" (Webopedia, 2009). Cyberbullying, therefore, is bullying that happens in cyberspace, and cyberspace is the "non-physical terrain created by computer systems" (Webopedia, 2009).

The internet has opened up an entire “new realm of possibilities for children. New communications technologies have put the world at their fingertips. The possibilities for learning and socializing are endless but so too are the possibilities for doing serious harm to others" (Dueck, 2006). Because children have these technologies at their fingertips, and given the speed those technologies provide, cyberbullying has exponential potential. While the old note may have taken minutes to spread through the chosen few in the class, a similar cybernote can reach hundreds or thousands in a fraction of a minute. “This is a freer form of bullying than traditional physical or name-calling attacks as the individual responsible can be anonymous. Also, unlike standard bullying, there is no respite or refuge for the victims since cyber bullying can go on 24 hours a day and invade a victim's home” (SperoNews, 2009).

Polls and studies report a variety of statistics on the extent of cyberbullying. A study by the National Institutes of Health, based on a 2005/2006 World Health Organization survey of health behaviour, shows “that many children in grades 6 through 10 have either bullied classmates or been bullied by them, sometimes online or through cell phones” and proceeds to say that “Cyber Bullying affects one in 10 students” (HealthDay, 2009).

According to the study, 20.8 percent of respondents reported being perpetrators or victims of physical bullying in the past two months; 53.6 percent were victims of verbal bullying; 51.4 percent were victims of relational bullying, which involves social exclusion; and 13.6 percent were victims of cyberbullying on a computer, cell phone, or other electronic device. (HealthDay, 2009)



A poll conducted by Opinion Research Corporation for Fight Crime found that “One-third of all teens (12-17) and one-sixth of children ages 6-11 have had mean, threatening or embarrassing things said about them online” and “10 percent of the teens and four percent of the younger children were threatened online with physical harm”. The report went on to say that “Preteens were as likely to receive harmful messages at school (45 percent) as at home (44 percent). Older children received 30 percent of harmful messages at school and 70 percent at home”. (SperoNews, 2009)

According to a local report prepared by the North Bay, Ontario Police Service liaison officer and presented to the North Bay Police Services Board, cyberbullying "is at the forefront of school related issues and has devastating effects on its victims" (North Bay Nugget, February 11, 2009). International studies, national polls, and local police reports all provide evidence pointing to the same message: cyberbullying is on the increase.

In the movie Dead Poets Society, Robin Williams (starring as the teacher, Mr. Keating) makes the statement, “This is a battle, a war, and the casualties could be your hearts and souls”. This quote, though used in an entirely different context, rings true of cyberbullying in schools today. It is a battle, and the casualties could be society’s most valued possession: our children.

Without doubt, teachers are at the forefront of this battleground. The knowledge and skills needed to combat this problem will need to come from all possible fronts, but the contribution of teachers is going to be critical. Many teachers in the field may be acquiring skills for dealing with cyberbullying simply by being immersed in school settings. What about preservice teachers? One of the findings of a study conducted in Alberta (Li, 2006) was that "although a majority of the preservice teachers understand the significant effects of cyberbullying on children and are concerned about cyberbullying, they do not think it is a problem in our schools". The study also adds that "a vast majority of our preservice teachers do not feel confident in handling cyberbullying" and that "they do not know either how to identify the problem, or how to manage it when it occurs" (p. 5).

This is a disconcerting issue that calls for investigation and action from all possible angles. The purpose of this study is to examine perceptions and understandings of cyberbullying among preservice teachers in a teacher preparation program in Northern Ontario. In addition to providing comparative information between the two settings (Alberta and Ontario), the results of the study will also provide valuable information regarding the preparedness of the 2009 graduates of a Northern Ontario teacher preparation program with respect to cyberbullying.

This Northern Ontario university offers a one-year Bachelor of Education post-degree program. The year normally comprises a total of 32 weeks between September and April: 19 weeks are spent in the university taking course work and 13 are spent in schools on practicum. The practicum weeks are divided into four sessions: a 1-week session in September, a 3-week session in October, another 3-week session in November, and a final 6-week session in February/March. The program prepares teachers for all three divisions in the Ontario system, namely Primary Junior (Kindergarten to Grade 6), Junior Intermediate (Grades 4-10), and Intermediate Senior (Grades 7-12).

LITERATURE REVIEW

Within each school and classroom there is an adult who is an authority: a teacher who is responsible for the health and safety of all within the classroom and school. Teachers attempt to meet the needs of all students; however, student needs are diverse, just as the context, which is constantly shifting within the school and classroom landscapes, is assorted and wide-ranging. Levin and Nolan (2004) suggest,

A student’s need for a sense of power revolves around the need to feel that she is not simply a pawn on a chess board. We all feel that we have control over the important aspects of our lives. When students are deprived of the opportunity to be self­directing and to make responsible choices, they often become bullies or totally dependent upon others, unable to control their own lives. A teacher can enhance the student’s sense of power by providing opportunities to make choices and by allowing the student to experience the consequences of those choices. (p.204)

As classroom manager and leader, a teacher’s decision to teach using a democratic and empowering approach is critical to each student. However, the characterization of the bully as an impulsive person with a strong need to dominate others, including the teacher (Olweus, 1993), suggests classroom turbulence is unavoidable. The bully and the victim in the classroom may reveal an affective and behavioural mode that is arguably dysfunctional; the educator in a classroom is “often overwhelmed and challenged by students with problem behaviour. Teachers want to create schools that are places of learning, not places of constant struggle” (Crone & Horner, 2003, p. 3). A Canadian Broadcasting Corporation (2005) documentary explained how,



David Knight's life at school has been hell. He was teased, taunted and punched for years. But the final blow was the humiliation he suffered every time he logged onto the internet. Someone had set up an abusive website about him that made life unbearable.

"Rather than just some people, say 30 in a cafeteria, hearing them all yell insults at you, it's up there for 6 billion people to see. Anyone with a computer can see it," says David. "And you can't get away from it. It doesn't go away when you come home from school. It made me feel even more trapped." He felt so trapped he decided to leave school and finish his final year of studies at home.

These days the internet is a crucial part of teenage culture. Kids can't imagine life without it. They run home from school and the first thing they do is log on. They "talk" for hours using instant messaging, bulletin boards and chat-rooms. But the chatter and gossip can spin out of control, slip into degrading abusive attacks. (Retrieved October 10, 2006, from http://www.cbc.ca/news/background/bullying/cyber_bullying.html)

As a result, an educator’s goal of a healthy learning environment is undermined by technology, owing to the sociological tensions created by bullies in class and online within cyberspace. For instance, Raskauskas and Stoltz (2007) surveyed 84 adolescents,

regarding their involvement in traditional and electronic bullying. Results show that students' roles in traditional bullying predicted the same role in electronic bullying. Also, being a victim of bullying on the Internet or via text messages was related to being a bully at school. Traditional victims were not found to be electronic bullies. (p.1)

Still, we need to unearth and identify the psychological sources underlying bullying behaviour; Olweus (1993) identified three motives. First, bullies have a strong need for power, dominance, and control. Second, negative childhood environments produce a degree of hostility toward their surroundings, leading bullies to derive satisfaction from inflicting injury and suffering on others. Third, bullies obtain money, assets, and cigarettes through hostile behaviour. The bully who is in control and dominant is getting what he or she needs, namely attention, yet it is negative attention via fearful and upset peers. The bully lacks compassion and enjoys the power imbalance (Hutchinson, 2002). Li (2005) conducted,

A survey of 177 grade seven students (80 males and 97 females) that . . . . almost 54% of the students were bully victims and over a quarter of them had been cyber-bullied. More than half of the students knew someone being cyber-bullied. Over 40% cyberbully victims had no ideas who cyber-bullied them. Further, there was a close tie among bullies, cyberbullies, and cyberbully victims. (Retrieved August 2006 from http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/1b/c2/f4.pdf)

Within schools bullying surfaces quickly and is a constant source of concern for all members of society, particularly teachers, peers, and parents. The learning begins early in childhood, when children begin to affirm themselves at the expense of others to achieve social dominance (Rigby, 2004). In some cases Darwinism, or ‘the survival of the fittest’ within a competitive environment (the school), is being played out. This is an evolutionary explanation for bullying, especially in schools that place a high value on achieving dominance over others to ensure that the strongest prevail and the human species is prolonged (Rigby, 2004). As educators we need to look for evidence so that we can intervene, educate, and empower both the bully and the victim.

BULLYING: MOTIVATIONS

The gender variable does not predispose one to bullying, and children who engage in this dysfunctional social system discover “long-term emotional consequences for both the victim and the bully” (Emmer et al., 2006, p. 190). The bully is largely situated in a social system or hierarchy in which there are assistants who facilitate bullying and onlookers who reinforce maladaptive behaviours (Smith et al., 2005). Peer audiences can often incite bullying and indirectly encourage and reinforce bullying behaviours. Bullying most often occurs due to differences between students, developmental processes, social-cultural phenomena, responses to peer pressure, and a skewed perspective of restorative justice (Rigby, 2004). These elements are explored further below.

BULLYING: DIVERSITY

“Bullying unfolds in the social context of the peer group, the classroom, the school, the family and the larger community” (Mishna, Scarcello, Pepler, & Weiner, 2005, p. 719). Bullying may surface when power imbalances exist between people and some enjoy dominating others who are less dominant (Rigby, 2004). Both direct and indirect bullying may be involved, regardless of the culture, ethnicity, or diversity of the class or peer group. Of note, however, is the finding that dysfunctional families and oppressive parenting, and not diversity, can lead to aggressive behaviour of children towards their peers (Rigby, 2004). As well, bullies tend to come from homes where physical punishment is prevalent and where children are taught to strike back physically as a way to handle problems (Manning & Bucher, 2007). Teachers need to decide if the conduct is truly bullying, and this is a difficult and confusing task, as Mishna et al. (2005) concluded,

many teachers . . . expressed their concern about their lack of ability to deal with the bullying incidents due to pressure to cover the curriculum and to respond to children and also conveyed a lack of systematic support, particularly related to indirect bullying. (p.734)

Moreover, bullying does not always occur because of outward physical variations between individuals: being obese or thin, having an overbite, red hair, freckles, or glasses, or being tall or short does not predispose a person to bullying. Indeed, it is possible to find external deviations in all people (Olweus, 1993), and such deviations play a much lesser role in the origin of bully/victim problems than is commonly believed. Still, we are left with the disturbing fact that some adults truly believe bullying is a traditional ingredient of childhood (Bullock, 2002). One study found that “the majority of the teachers stated that they did not know how to deal with indirect bullying” (Mishna et al., 2005, p. 728), the mode that includes cyberbullying, and one that requires more analysis and dialogue.

METHODS

One hundred and eighty (180) Primary Junior preservice teachers enrolled in the BEd program of a teacher preparation program in Northern Ontario responded to a questionnaire on their perceptions of cyberbullying. The questionnaire was administered after the last practicum, in March 2009.

This study is partially a replication of the Alberta study (Li, 2006). Replication studies are conducted either because of methodological flaws in the original studies or with the goal of redoing the study “with a different population...to see if the findings are the same” (Lodico, Spaulding, & Voegtle, 2009). This study was replicated with a slightly different population, in a different geographical location, to see if the findings are the same. Permission to use and adapt the questionnaire was granted by the author (Li, 2006).

This study focuses on the following research questions:

1. What is the level of concern about cyberbullying in schools in a sample of Primary Junior preservice teachers in Ontario?
2. What is the level of confidence in identifying and managing cyberbullying in schools in a sample of Primary Junior preservice teachers in Ontario?
3. What is the preservice teachers’ perception of the level of preparedness provided by the teacher preparation program from which they were graduating?

RESULTS

Preliminary results indicate that the majority of the preservice teachers in the study are aware that cyberbullying is a problem in schools and that children are affected by it. A majority of the preservice teachers are concerned about cyberbullying and would do something if it occurred in their school. However, only a small number felt confident that they would be able to identify instances of cyberbullying, and an even smaller number felt confident about managing it.

These results point to the need for further research exploring the level of preparedness of inservice teachers with regard to cyberbullying. Do inservice teachers have a higher level of awareness and preparedness? If so, understanding the sources of those higher levels of awareness and preparedness would help in the process of training preservice teachers to combat cyberbullying.

REFERENCES

Belsey, B. (2009). Making connections to make a difference. Retrieved July 31, 2009 from http://www.cyberbullying.ca
Bullock, J. (2002). Bullying among children. Childhood Education, 78, 130-133.
Canadian Broadcasting Corporation. (2005). Cyber-bullying. Retrieved October 10, 2006 from http://www.cbc.ca/news/background/bullying/cyber_bullying.html
Crone, D. A., & Horner, R. H. (2003). Building positive behaviour support systems in schools: Functional behavioral assessment. New York, NY: Guilford Press.
Dueck, S. (2006). Cyberbullying: A new place for an old practice. Retrieved July 31, 2009 from http://www.lba.k12.nf.ca/cyberbullying/pdf/cyberbullying.pdf
Egan, S.K., & Perry, D.G. (1998). Does low self-regard invite victimization? Developmental Psychology, 34, 299-309.
HealthDay. (2009). Cyber bullying affects one in 10 students. Retrieved July 31, 2009 from http://www.nlm.nih.gov/medlineplus/news/fullstory_86198.html
Hutchinson, N. L. (2002). Inclusion of exceptional learners in Canadian schools: A practical handbook for teachers. Toronto, ON: Prentice Hall.
Ladd, G.W., & Burgess, K.B. (1999). Charting the relationship trajectories of aggressive, withdrawn, and aggressive/withdrawn children during early grade school. Child Development, 70, 910-929.
Leslie, A. M. (1987). Pretense and representation: The origins of “theory of mind.” Psychological Review, 94, 412-426.
Levin, J., & Nolan, J. F. (2004). Principles of classroom management: A professional decision-making model (4th ed.). New York, NY: Allyn & Bacon.
Li, Q. (2005). Cyberbullying in schools: Nature and extent of Canadian adolescents’ experience (Report No. ED490641). Retrieved August 2006 from http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/1b/c2/f4.pdf
Li, Q. (2006). Cyberbullying in schools. School Psychology International, 27(10), 27-37.
Manning, M.L., & Bucher, K.I. (2007). Classroom management: Models, applications, and cases (2nd ed.). Columbus, OH: Pearson Merrill/Prentice Hall.
Mishna, F., Scarcello, I., Pepler, D., & Weiner, J. (2005). Teachers’ understanding of bullying. Canadian Journal of Education, 28(4), 718-738.
Mouttapa, M., Valente, T., Gallaher, P., Rohrbach, L. A., & Unger, J. B. (2004). Social network predictors of bullying and victimization. Adolescence, 39(154), 315-335.
Olweus, D. (1993). Bullying at school: What we know and what we can do. Oxford: Blackwell.
Pope, A.W., & Bierman, K.L. (1999). Predicting adolescent peer problems and antisocial activities: The relative roles of aggression and dysregulation. Developmental Psychology, 35, 335-346.
Raskauskas, J., & Stoltz, A. D. (2007). Involvement in traditional and electronic bullying among adolescents (EJ766118). Retrieved March 2007 from http://www.eric.ed.gov/ERICWebPortal/Home.portal?_nfpb=true&_pageLabel=ERICSearchResult&_urlType=action&newSearch=true&ERICExtSearch_SearchType_0=au&ERICExtSearch_SearchValue_0=%22Raskauskas+Juliana%22
Rigby, K. (2004). Addressing bullying in schools: Theoretical perspectives and their implications. School Psychology International, 25(3), 287-300.
Snider, M., & Borel, K. (2004, May 19). Stalked by a cyberbully. Maclean’s, 117, 76-77.
SperoNews. (2009). 1 of 3 teens are victims of cyber bullying. Retrieved July 31, 2009 from http://www.speroforum.com/site/article.asp?id=4994
Sutton, J., Smith, P.K., & Swettenham, J. (1999a). Bullying and ‘theory of mind’: A critique of the ‘social skills deficit’ view of anti-social behaviour. Social Development, 8(1), 117-134.
Sutton, J., Smith, P.K., & Swettenham, J. (1999b). Social cognition and bullying: Social inadequacy or skilled manipulation? British Journal of Developmental Psychology, 17, 435-450.
Webopedia. (2009). Cyber. Retrieved July 31, 2009 from http://www.webopedia.com/TERM/C/cyber.html



PERCEPTIONS OF ONLINE AND ON­CAMPUS BUSINESS PROGRAMS: IMPLICATIONS FOR MARKETING BUSINESS PROGRAMS

Ramaprasad Unni and L.P. Douglas Tseng Portland State University, USA

ABSTRACT

There is increasing interest in online business programs. Traditional on-campus programs face competition from online business programs offered by for-profit universities as well as online programs offered by other traditional business schools. As traditional business programs develop and refine their strategies for online course offerings, it is important that they leverage the advantages they have over their competition; it is therefore important to understand how online and traditional programs are perceived. This study examines perceptions on two broad classes of attributes, consumption and investment characteristics, sought by students while making an educational choice. Consumption characteristics refer to the quality of learning and the learning experience from a program or institution. Investment characteristics pertain to the notion that students have expectations of a good job or a promising career from their expenditure of money, time, effort, and other personal sacrifices associated with their education. A survey of students and nonstudents in and around an urban university in the Northwestern region of the United States is used to examine perceptions of online business programs and traditional on-campus business programs. Not surprisingly, on-campus programs were perceived to have more desirable qualities than online business programs. Online business programs from traditional universities were perceived to be superior to those from for-profit entities on a number of attributes such as quality of instruction, admission requirements, learning environment, job preparation, and job opportunities. The gap between online and on-campus programs appears to be narrowing for several attributes such as availability of financial aid, access to technical support services, and access to library services. Programs offered by online (for-profit) universities were more likely to be associated with greater convenience and lower costs. Implications for the marketing of online business courses from the perspective of traditional business programs are discussed.

INTRODUCTION

Interest in online business education has been growing for several years. Online technologies have enabled educational institutions to reach out beyond the constraints of time and location in a way that was not possible with pre-Internet distance education programs. Administrators and educators recognize the potential of online technologies in shaping the future of business education (e.g., Eastman and Swift 2001; Close, Dixit, and Malhotra 2005). Consistent with the upward trend in the growth of online education (Allen and Seaman 2007), enrollment in online business programs is expected to grow in the coming years and generate significant additional revenue for universities (Hollenbeck, Zinkhan, and French 2005). Non-traditional, for-profit universities, such as the University of Phoenix, have also significantly expanded their online programs to take advantage of a growing need for flexible options in higher education (Hollenbeck et al. 2005).

Business schools face competition from other traditional programs, as well as online programs of reputed traditional schools, and online programs of for­profit institutions (Eastman and Swift 2001; Sevier 2003). Investment in online delivery is also a strategic response by many traditional institutions to an increasingly competitive and dynamic environment. As business schools develop and refine their strategies for online course offerings, it is important to leverage advantages they have over their competition.

Early studies indicated that students chose online business programs primarily for convenience and not for quality (e.g., Ponzurick, France, and Logar 2000). Since then, there have been significant improvements in both the design and the delivery of online business education (Hollenbeck et al. 2005). How well this has translated into improved perceptions of quality, as well as perceptions of other attributes relevant to the selection of a business program such as financial aid and career placement services, is not known. Educational choice is driven to a great extent by perceptions and interpretations of signals about quality (Collis 2002).



This study examines perceptions about (i) on­campus business programs offered by traditional universities, (ii) online business programs from traditional universities, and (iii) online business programs from for­profit universities. The results of the study provide a basis for understanding areas of perceived differentiation and similarities between online and on­campus business programs.

The results will help in positioning on-campus programs as well as online programs of traditional universities by identifying specific areas of strength that can be leveraged to differentiate these programs from those of the competition. The study also identifies areas for emphasis and improvement in business programs, especially online programs. Finally, an understanding of students’ perceptions of traditional and online modes of teaching can help in examining innovative approaches to program and curriculum design that promote more efficient and effective learning.

BACKGROUND

Prior to the use of online technologies, several studies concluded that there was no significant difference in learning outcomes between distance education and traditional formats (Russell 1999). Clark, Flaherty and Mottner (2001) found no significant differences between online and on-campus versions of marketing courses in effectiveness, value, and knowledge. Other studies show that students seem to prefer the traditional class. Ponzurick et al. (2000) compared student evaluations and preferences of on-campus (face-to-face) classes and distance education classes using real-time audio and video for an MBA-level marketing management class. They found the distance education option to be less satisfying and less effective than face-to-face delivery. The traditional class was also preferred over a hybrid model that combined face-to-face and web-based modes of teaching (Haytko 2001; Priluck 2004). These studies highlight the importance of understanding how students perceive traditional and online modes of delivery, especially in light of predictions of greater use of online technologies in the coming years.

We adopt a human capital perspective to frame the perceived benefits students derive from college education (Mixon and Hsing 1994). These benefits may be classified as investment and consumption characteristics. Consumption characteristics refer to the quality of learning and the learning experience from a program or institution. Investment characteristics pertain to the notion that students have expectations of a good job or a promising career from their expenditure of money, time, effort, and other personal sacrifices associated with their education (Kim, Markham, and Cangelosi 2002).

Consumption Characteristics

Consumption characteristics are shaped by students’ perceptions of the nature of the curriculum, structure of the program, class schedules, class sizes, quality of students, and quality of faculty. Criticisms of distance education courses include lack of academic rigor, isolation, and lack of interaction with the instructor (Jones and Kelley 2003). Yet online education is attractive to many prospective students because of the flexibility and convenience of these programs, characteristics that may enhance the consumption quality of an online program. With the right use of technologies, content, guidance and self-motivation, it is possible for online classes to provide a richer educational opportunity for students than a traditional class (Jones and Kelley 2003).

The growth of online programs from for-profit institutions has evoked negative associations with diploma mills (Armour 2003). Faculty qualifications and reputation serve to signal the likely quality of learning, and these characteristics are often used to market business programs. Online programs may suffer from a lack of participation by senior faculty, or even by other faculty who may view the time needed to master the intricacies of teaching an online class as not worthwhile (Eastman and Swift 2001; Sautter, Pratt, and Shanahan 2000).

Smaller class sizes and lower student-faculty ratios are likely to suggest a higher quality of education. The level of interaction between faculty and students is a signal of the quality of the learning experience. Appropriate use of technology such as chat rooms, discussion boards, and email can facilitate effective interaction between faculty and online students. Nevertheless, lack of interaction is widely perceived as a drawback of online classes (Jones and Kelley 2003; Ponzurick et al. 2000), even though the level of interaction in many large traditional classes may be lower than that in an online class.

Other consumption characteristics include institutional characteristics such as the learning environment, as well as resources such as advising, registration services, and technology support (Sjogren and Fay 2002). Athletics and other social aspects of university life also offer desirable consumption benefits (Mixon and Hsing 1994) that for-profit online programs would find hard to match.



Investment Characteristics

Perceptions of investment in a business program depend on the kind of returns students expect on completion of the program. The business curriculum is typically designed to impart skills and knowledge that help students be better prepared for future success in jobs (Mohr 2000). Employment opportunities and higher starting salaries are the primary reasons for choosing a marketing major, or for that matter a business degree (Kim et al. 2002; Labarbera and Simonoff 1999).

Reputation and accreditation have a significant effect on the employability of a graduate. Many employers consider online degrees from for-profit institutions to be inferior to those from accredited traditional programs (Dash 2000). However, these perceptions may be changing. A survey by Vault Inc., a reputed career services firm, shows that 85% of employers feel that online degrees are more acceptable than they were five years ago; however, a majority of employers in the survey indicated they would favor job applicants who earned their degree from a traditional college (eMarketer 2005). Online business degrees from traditional universities are less likely to face this initial hurdle because many diplomas do not explicitly state whether the degree was earned online or face-to-face (Dunham 2003). Traditional schools may also benefit from the perception that they have good career counseling and placement services. Such services improve the likelihood of future employment, as they primarily fulfill students’ need to find a job after finishing their education (Clark et al. 2001; Sjogren and Fay 2002).

These factors also have an effect on future plans for advanced studies. Traditional universities consider academic standards at for-profit universities to be low and rarely recognize course work completed at these institutions when determining the amount of credit for past work (Hechinger 2005). Early lessons from online business programs suggest that individuals give great importance to the credentials of the educational institution that grants their degree (Wilson 2003). Finally, the cost of a program is salient in any choice of this nature. Costs of business programs vary widely, and fees for online business programs from reputed schools often exceed those of less reputed on-campus programs.

There is general agreement that students primarily choose distance education for convenience rather than quality. However, the online business education landscape has evolved considerably, with online programs from reputed universities and for-profit institutions now promising both quality and convenience. Therefore, this paper addresses the following research questions:

(1) What are the similarities and differences in perceptions of online and on-campus business programs from the same traditional university?
(2) Do online business programs from traditional universities have an edge over those from online-only (for-profit) entities?

METHOD

A survey was used to assess perceptions about online and traditional business education programs among students of an urban West Coast university and other members of the community around the university. Respondents who had never attended a business program and had no plans to attend one were screened out. A list of attributes and factors used to judge business programs was finalized based on input from an undergraduate Marketing Research class and prior literature on curriculum and institutional characteristics. The consumption characteristics used included challenging curriculum, superior quality of instruction, better learning environment, stronger identification with the university, higher access to technology support, library resources, and convenience. The main investment characteristics were more job opportunities after graduation, better job preparation, lower costs, easier availability of financial aid, and overall value.

Respondents indicated their perceptions of the attribute associations of online and traditional business programs by selecting one of three choices: “more descriptive of traditional on-campus program,” “equally descriptive of either program,” or “more descriptive of online program.” A similar scale was used for assessing attribute associations of online programs from traditional universities and those from online-only universities. Respondents also indicated the most desirable combination of online and traditional courses in a business program and the type of program (certificate, undergraduate, or graduate) most suited for online delivery. Finally, they responded to questions about their demographic background such as age, gender, Internet proficiency, and employment status.
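As an illustration only (the paper reports no code, and the variable name and response labels below are hypothetical stand-ins for the actual instrument), responses on this three-choice scale can be tallied into the per-category counts that the analyses in the next section consume. A minimal Python sketch:

    # Hypothetical sketch: tallying three-choice responses for one attribute item.
    # The Series name and response labels are assumptions, not the authors' coding.
    import pandas as pd

    responses = pd.Series(
        ["on-campus", "either", "online", "on-campus", "either", "on-campus"],
        name="more_challenging",
    )  # toy data standing in for the 360 survey records

    # Counts per category in a fixed order; these counts (and the matching
    # row percentages) feed the goodness-of-fit tests reported below.
    counts = responses.value_counts().reindex(["on-campus", "either", "online"])
    print(counts)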

RESULTS

The sample comprised 360 respondents. Table 1 provides details of key demographic characteristics of the sample. Nearly two-thirds of the sample (65%) was in the 21-30 years age bracket. A significant number of respondents (73%) had some form of employment. About 41% were currently attending a traditional business program, 23% had previously attended a traditional business program, and the remaining 36% planned to attend a business school in the future. A majority (53%) claimed to be expert Internet users, and 38% considered themselves proficient Internet users.

Table 1: Sample Characteristics (N = 360)

Gender: Male 55%; Female 45%
Age: Under 20 years 17%; 21-30 years 65%; 31-40 years 12%; More than 40 years 6%
Education: Currently enrolled in business school 41%; Previously attended a business school 23%; Plan to attend a business school 36%
Employment: Full-time 36%; Part-time 37%; Not employed 27%
Internet skills: Internet expert 53%; Proficient Internet user 38%; Adequate Internet user 9%

On-Campus vs. Online Business Programs from a Traditional University

Respondents indicated whether each attribute was more descriptive of on-campus business programs, more descriptive of online business programs from the same traditional university, or equally descriptive of either type of program. Nonparametric goodness-of-fit analysis was used to test whether the proportions of responses across the three categories differed significantly from one another. The results of this analysis (Table 2) revealed that the attributes associated with on-campus and online programs differed significantly.

Table 2: Comparisons of On-Campus and Online Business Programs from the Same Traditional University

                                               More descriptive of    Equally descriptive    More descriptive of    χ2
                                               on-campus program (%)  of either program (%)  online program (%)     (df = 2)
More challenging                               53.9                   31.1                   15.0                    82.47**
Superior quality of instruction                71.4                   24.2                    4.4                   255.62**
Better learning environment                    70.0                   22.5                    7.5                   229.95**
Higher admission requirements                  58.9                   36.1                    5.0                   158.07**
More important role for instructor             61.1                   25.0                   13.9                   131.67**
Better access to tech. support                 33.6                   39.2                   27.2                     7.72*
Better access to library resources             45.8                   39.7                   14.4                    59.82**
Better job preparation for the business world  58.1                   35.3                    6.7                   143.22**
More job opportunities after graduation        51.7                   43.3                    5.0                   133.80**
Stronger identification with university        82.8                   13.9                    3.3                   402.07**
Greater convenience                             7.5                   17.5                   75.0                   286.65**
Lower costs                                     8.3                   29.7                   61.9                   157.32**
Easier availability of financial aid           30.3                   56.4                   13.3                   101.62**
Better overall value                           51.9                   34.4                   13.6                    79.55**

Sample size: N = 360; row percentages reported. ** p < 0.001; * p < 0.05
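
For concreteness, the following is a minimal sketch in Python (not the authors' original analysis code) showing how the χ2 value for the first row of Table 2 could be reproduced with SciPy, assuming a null hypothesis of equal proportions (one third of responses per category), an assumption consistent with the reported values and df = 2:

    # Goodness-of-fit check for the "More challenging" row of Table 2.
    # Assumption: the null hypothesis is equal thirds across the three
    # response categories (consistent with the reported chi-square, df = 2).
    from scipy.stats import chisquare

    N = 360
    row_percent = [53.9, 31.1, 15.0]  # on-campus, either, online
    observed = [round(p * N / 100) for p in row_percent]  # [194, 112, 54]
    expected = [N / 3] * 3  # 120 responses expected per category

    stat, p = chisquare(observed, f_exp=expected)
    print(f"chi-square = {stat:.2f} (df = 2), p = {p:.3g}")
    # Prints chi-square = 82.47, matching the first row of Table 2.

The same computation, applied row by row, would reproduce the remaining χ2 values in Tables 2 and 3.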


Traditional on-campus business programs were more likely to be perceived as more challenging (54%), as having superior quality of instruction (71%), higher admission requirements (59%), a better learning environment (70%), and a more important role for the instructor (61%) than online business programs from the same traditional school. Not surprisingly, stronger identification with the university was more characteristic of traditional business programs (83%). Traditional on-campus programs were also more likely to be associated with better job preparation (58%), more job opportunities (52%), and better overall value (52%). On the other hand, online business programs from traditional universities were more likely to be associated with greater convenience (75%) and lower costs (62%).

Interestingly, more than half the respondents (56%) associated easier availability of financial aid with either program. Better access to technology support was also likely to be associated with either program (39%). Forty percent associated better access to library resources with either program, while 46% associated this characteristic with the traditional program alone.

Additional contingency analysis (cross-tabs) revealed a similar pattern of results across those currently enrolled, previously enrolled, and intending to enroll in business programs. No gender or age differences were observed. Surprisingly, the pattern of responses was also similar among those employed full-time, employed part-time, and not employed.

Online Business Programs from Traditional vs. For-Profit Universities

Nonparametric goodness-of-fit analysis was again used to test whether the proportions of respondents associating specific attributes with online programs offered by traditional universities and those offered by online-only institutions differed significantly. The results in Table 3 show that online business programs offered by traditional universities were perceived as relatively more challenging (45%), though nearly 39% indicated that either program was associated with this attribute. Online programs from traditional universities were also associated with superior quality of instruction (63%), higher admission requirements (56%), a better learning environment (53%), a more important role for the instructor (53%), stronger identification with the university (74%), better job preparation (55%), and more job opportunities (54%).

Table 3: Comparisons of Online Business Programs from a Traditional University and an Online University

                                               More descriptive of       Equally descriptive    More descriptive of       χ2
                                               online business program   of either program (%)  online business program   (df = 2)
                                               offered by traditional                           offered by an online
                                               university (%)                                   university (%)
More challenging                               44.7                      38.9                   16.4                       48.35**
Superior quality of instruction                62.5                      29.4                    8.1                      162.52**
Better learning environment                    52.8                      38.9                    8.3                      111.67**
Higher admission requirements                  56.1                      37.2                    6.7                      134.47**
Better access to tech. support                 34.4                      45.3                   20.3                       33.95**
Better access to library resources             46.7                      42.8                   10.6                       84.87**
Better job preparation for the business world  54.7                      36.9                    8.3                      118.32**
More job opportunities after graduation        54.2                      41.1                    4.7                      141.82**
Stronger identification with university        74.2                      20.6                    5.3                      282.72**
Greater convenience                            11.9                      36.7                   51.4                       85.82**
Lower costs                                    12.2                      38.6                   49.2                       78.22**
Easier availability of financial aid           30.3                      55.3                   14.4                       91.55**
Credibility                                    64.4                      30.8                    4.7                      193.62**
Better overall value                           47.8                      41.4                   10.8                       84.22**

Sample size: N = 360; row percentages reported. ** p < 0.001; * p < 0.05

A majority of respondents (55%) associated easier availability of financial aid with either program. Better access to technical support services was associated with either program by 45% of respondents, and better access to library services by a similarly large section (43%). However, programs offered by online universities were more likely to be associated with greater convenience (51%) and lower costs (49%), while online programs from traditional universities were associated with greater credibility (64%). Interestingly, while 48% of respondents associated better overall value with online programs from traditional universities, a sizeable 41% associated it with either program. These results highlight the inroads that online for-profit universities have made over the last few years. Additional analysis by gender, employment status, and age did not show differences in these patterns.


Desired Combination of Online and On-Campus Courses

Results presented in Table 4 suggest that students may not be averse to an online component in business programs, though the preference for on-campus offerings is evident. The greatest preference was for predominantly on-campus courses with some online courses (46.9%). Few desired an exclusively online program (3.1%) or even a predominantly online program (6.1%). Interestingly, only 16.7% indicated an exclusively on-campus traditional program as the most desirable option. These results are consistent with the notion that a combination of multiple modes of instruction delivery may be preferable to a single mode (Meyer 2002) and do not necessarily contradict recent findings that evaluations of early initiatives with the hybrid format were less favorable than those of traditional classes with no online components (Haytko 2001; Priluck 2004). In those hybrid formats, the same class consisted of both face-to-face meetings and online interactions. The results in our study suggest a greater preference for predominantly on-campus traditional courses combined with some online-only courses. Perhaps such an approach enhances flexibility in some courses where the trade-off against the lack of face-to-face interaction is not large.

Table 4: Desirable Combination of Online and On-Campus Courses

Desirable combination                             %
All courses on campus                          16.7
Predominantly on campus, with some online      46.9
Equally divided between on-campus and online   27.2
Predominantly online, with some on-campus       6.1
All courses online                              3.1
Total (N = 360)                               100.0

The preference for mostly on-campus programs is consistent with the perception that online programs are not as good as traditional on-campus programs. A majority (51%) indicated that online delivery of instruction was most suitable for certificate programs. About 36% indicated undergraduate programs as most suitable for the online format, and only 13% indicated graduate programs.

DISCUSSION

There are four key findings in this study. First, on-campus business programs are perceived as superior to online business programs. This per se may not be surprising. However, the results indicate that these differences are very pronounced even when the programs are from the same traditional university. On-campus business programs were seen as more challenging and as having superior quality of instruction and a better learning environment. These programs were also associated with better job preparation, more job opportunities, and better overall value. Not surprisingly, respondents also overwhelmingly associated on-campus business programs with a greater sense of identification with the university. The online program was viewed as cheaper and more convenient than on-campus business programs.

Second, online business programs benefit from association with a traditional bricks-and-mortar university. Such programs appear to have an advantage over online business programs from for-profit online institutions: the former are more likely to be associated with desirable consumption and investment characteristics, such as superior quality of instruction, a better learning environment, and greater credibility. However, the results of our study suggest that the gap in student perceptions is narrowing for a number of key characteristics. For instance, the perceived level of challenge was similar for both types of online programs. Similarly, there was little difference in perceptions regarding access to technology support and library resources. More than forty percent of respondents also perceived job opportunities and overall value to be similar for both programs. These findings reinforce the need to strengthen the quality of online business programs in traditional universities to effectively counter competitive pressures from for-profit entities that offer online programs. Traditional universities are well placed to leverage their institutional strengths to provide quality online programs.

Third, the perceived lack of quality in online business programs extends to the question of the suitability of the online delivery mode for graduate business programs. Online delivery was judged more appropriate for certificate programs than for undergraduate or graduate business programs. This finding resonates with the low-quality associations of online MBA degrees among employers (Dash 2000; eMarketer 2005). It further indicates the need to address the core qualities of online business programs.

Finally, business students may not be averse to having some online courses in traditional business programs. This is different from prevailing hybrid approaches in online business programs, where students and instructors interact on campus for a few days (Hollenbeck et al. 2005). Other types of hybrid classes offer online and traditional teaching in the same class with the objective of combining the best aspects of both forms; these have belied expectations of superior outcomes relative to traditional on-campus classes (Haytko 2001; Priluck 2004). The results of our study suggest a hybrid approach with some courses online and most courses on campus. While this approach is not prevalent, such a development could be a win-win situation. Schools have the potential to cut program costs by moving some courses online (Symonds 2003). They would also provide needed flexibility and convenience to students who still want to take most of their classes on campus. Eventually, this approach would enable schools to offer customized programs that allow students to choose a mix of online and on-campus courses. This requires significant investment to improve online curriculum quality and minimize the gap in perceived quality between online and on-campus courses.

Implications for Marketing Online Business Programs

We frame the implications from the perspective of a traditional business school. In light of the significant investments traditional universities are making in online programs (Allen and Seaman 2007), educators and administrators should be concerned about the perceived quality of their online offerings. Sacrificing quality for convenience to attract students to online programs would be detrimental to a university’s mission and would erode its reputation in the long run (Gallagher 2003; Sevier 2003).

Prospective students of online education look for benefits similar to those sought by students attending traditional schools (eMarketer 2005). While flexibility in online programs is an important factor, a number of other factors, such as quality of teaching, reputation, availability of financial aid, and job opportunities after graduation, are almost equally important. Potentially, any online business program can offer flexibility and convenience. However, reputation, quality of curriculum, learning environment, and important support services such as career and placement services are qualities more likely to be unique to traditional schools. These qualities should be emphasized when marketing online programs.

It is not unusual for online business programs to be relegated to a secondary status or to operate out of a university’s distance education wing with minimal involvement of the business school. Improving the quality of online programs not only helps close the gap between online and on-campus offerings, it also enhances the distinction between the online programs of traditional schools and those of for-profit universities. This approach is also a strategic imperative to stave off competition from more reputable traditional schools. An emphasis on improving the quality of online programs would also facilitate better integration with on-campus programs, allowing for student benefits such as ease in transferring credits between programs.

Target Markets

Initial enrollments suggested that online business programs were not really stealing students from traditional programs. They were catering to new students who otherwise would not have enrolled in traditional programs due to constraints such as time, schedule, and location (Mangan 2001). Online programs are typically targeted more toward graduate students and working professionals than toward first-time students (Hollenbeck et al. 2005). This group faces constraints in attending on-campus programs and is therefore attracted to the convenience of online programs. However, the results of our study indicate an overwhelming preference for a mix of some online and predominantly on-campus courses. Students in traditional on-campus programs may actually like having the option of taking some courses online.

This is consistent with current trends in the undergraduate student population. The distinction between the traditional and non-traditional groups of students will blur in the coming years. Recent statistics from the U.S. Department of Education show that only forty-five percent of traditional-age college students finished college in four years, and nearly eighty percent of this group also works (Holloway 2005). Therefore, it may be worthwhile to market an innovative curriculum that includes some online courses to traditional students as well. However, students may not be used to the self-directed, active learning style needed for effective learning in online classes. Schools should use assessments to ascertain students’ ability to adapt to online classes (Priluck 2004). Online courses could potentially be customized to match students’ preferred learning styles (Granitz and Greene 2003).

As more resources are devoted to developing innovative curriculum, business schools should also target the lucrative corporate market with appropriately customized online degree and certificate programs (Hollenbeck et al. 2005; Lee, Bhattacharya, and Nelson 2002).

Limitations and Future Research

The sample was drawn from in and around an urban university campus, with most respondents having been exposed to traditional business programs. This may bias their responses in favor of traditional on-campus programs. However, many of the students in the sample shared characteristics of the population that would be attracted to online programs, such as being older and employed. Therefore, the sample may be appropriate for examining the nature of perceptions of online and on-campus business programs.

The study itself is exploratory in nature, and future research should investigate the underlying relationships between consumption and investment properties and the extent to which these perceptions are susceptible to change.

The growth of hybrid programs is inevitable. Research on the suitability of different marketing courses for the online mode is needed. Some marketing courses may need a radically different approach for online delivery, and there is little evidence of such approaches in the marketing education literature. It would also be important to ascertain the degree of flexibility students seek in online and on-campus courses. Ascertaining faculty perceptions of online and hybrid programs is another important topic for future research. The anticipated growth in online and hybrid programs is likely to make inter-campus consortia and alliances with for-profit entities, or even with other more reputable institutions, more common in the future. In the long run, this may reduce the contribution and role of faculty in shaping curriculum and threaten conventional business models for higher education (Mangan 2001). Therefore, a clearer understanding of the future role of faculty and how they add value to the online education process warrants further research.

CONCLUSIONS

This paper provides underlying reasons for the differences in perceived quality of online and on-campus business programs among students. Online business programs are perceived to be inferior to on-campus business programs: they are considered less challenging, as offering fewer job opportunities, and as providing less overall value. Online business programs from traditional universities are still considered superior to online programs from for-profit universities. The lower quality associated with online delivery relegates it to being perceived as most appropriate for certificate programs. Interestingly, the respondents, mostly prospective business students, were not averse to a mix of mostly on-campus courses and a few online courses. Many student-side barriers to effective implementation, such as limited access to fast Internet connections, interactive technologies, and Internet2, and limited Internet proficiency, are being lowered. As traditional universities expand their online business offerings, marketing educators have both an opportunity and a challenge to use online technologies to create a high-quality, innovative marketing curriculum.

REFERENCES

Allen, I. Elaine and Jeff Seaman (2007), Online Nation: Five Years of Growth in Online Learning, Needham, MA: Sloan-C.
Armour, Stephanie (2003), “Diploma Mills Insert Degree of Fraud into Job Market,” USA Today, September 28. Retrieved July 10, 2005 from http://www.usatoday.com/money/workplace/2003-09-28-fakedegrees_x.htm
Clark III, Irvine, Theresa B. Flaherty, and Sandra Mottner (2001), “Student Perceptions of Educational Technology Tools,” Journal of Marketing Education, 23(December), 169-177.
Close, Angeline, Ashutosh Dixit, and Naresh K. Malhotra (2005), “Chalkboards to Cybercourses: The Internet and Marketing Education,” Marketing Education Review, 15(Summer), 81-94.
Collis, David J. (2002), “New Business Models for Higher Education,” in The Future of the City of Intellect: The Changing American University, David J. Collis (Ed.), Palo Alto, CA: Press, 181-202.
Dash, Eric (2000), “The Virtual MBA: A Work in Progress,” BusinessWeek, Issue 3701, 96-97.
Dunham, K. J. (2003), “Online Degree Programs Surge, But Do They Pass Hiring Tests?” Wall Street Journal, 241(19), B8.
Eastman, Jacqueline K. and Cathy Owens Swift (2001), “New Horizons in Distance Education: The Online Learner-Centered Marketing Class,” Journal of Marketing Education, 23(April), 25-34.
eMarketer (2005), “All I Really Need to Know I Learned Online,” eStat Database, June 17.
Gallagher, Sean R. (2003), “Maximum Profit and ROI in Distance Education,” University Business, 6(5), 47-49.
Granitz, Neil and C. Scott Greene (2003), “Applying E-Marketing Strategies to Online Distance Learning,” Journal of Marketing Education, 25(April), 16-30.
Haytko, Diana L. (2001), “Traditional Versus Hybrid Course Delivery Systems: A Case Study of Undergraduate Marketing Planning Courses,” Marketing Education Review, 11(Fall), 27-39.
Hechinger, J. (2005), “Battle Over Academic Standards Weighs on For-Profit Colleges,” The Wall Street Journal, CCXLVI(66), September 30, 1.
Hollenbeck, Candice R., George M. Zinkhan, and Warren French (2005), “Distance Learning Trends and Benchmarks: Lessons from an Online MBA Program,” Marketing Education Review, 15(Summer), 39-52.
Holloway, Kate (2005), “Different Paths Lead to a Degree,” USA Today, November 9. Retrieved November 9, 2005 from http://www.usatoday.com/news/education/2005-11-09-college-degree-cover_x.htm


Jones, Kirby O. and Craig A. Kelly (2003), “Teaching Marketing via the Internet: Lessons Learned and Challenges to Be Met,” Marketing Education Review, 13(Spring), 81-89.
Kim, David, F. Scott Markham, and Joseph D. Cangelosi (2002), “Why Students Pursue the Business Degree: A Comparison of Business Majors Across Universities,” Journal of Education for Business, 78(1), 28-32.
LaBarbera, Priscilla and Jeffrey S. Siminoff (1999), “Toward Enhancing the Quality and Quantity of Marketing Majors,” Journal of Marketing Education, 21(April), 4-13.
Lee, Reggie V., Sumita Bhattacharya, and Tina Nelson (2002), “Relearning e-Learning: Principles for Success,” Strategy + Business, Issue 28, 12-13.
Mangan, Katherine S. (2001), “Expectations Evaporate for Online MBA Programs,” Chronicle of Higher Education, 48(6), 31-33.
Meyer, Katrina A. (2002), “Quality in Distance Education,” ERIC Digest, ED470542, ERIC Clearinghouse on Higher Education, Washington, DC.
Mixon, Franklin G., Jr. and Yu Hsing (1994), “College Student Migration and Human Capital Theory: A Research Note,” Education Economics, 2(1), 65-73.
Mohr, J. (2000), “The Marketing of High-Technology Products and Services: Implications for Curriculum Content and Design,” Journal of Marketing Education, 22(3), 246-259.
Ponzurick, Thomas G., Karen R. France, and Cyril M. Logar (2000), “Delivering Graduate Marketing Education: An Analysis of Face-to-Face Versus Distance Education,” Journal of Marketing Education, 22(December), 180-187.
Priluck, Randi (2004), “Web-Assisted Courses for Business Education: An Examination of Two Sections of Principles of Marketing,” Journal of Marketing Education, 26(August), 161-173.
Russell, Thomas L. (1999), The No Significant Difference Phenomenon, Raleigh, NC: North Carolina State University.
Sautter, Elise Truly, Eric R. Pratt, and Kevin J. Shanahan (2000), “The Marketing WebQuest: An Internet-Based Experiential Learning Tool,” Marketing Education Review, 10(Spring), 47-56.
Sevier, Robert A. (2003), “Marketing Your Distance Ed Program,” University Business, 6(8), 20-21.
Sjogren, J. and J. Fay (2002), “Cost Issues in Online Learning,” Change, 34(3), 53-57.
Symonds, William C. (2003), “Colleges in Crisis,” BusinessWeek, Issue 3830, 72-79.
Wilson, Jack M. (2003), “Is There a Future for Online Ed?” University Business, 6(3), 7.


AN EXAMINATION OF THE CAREERS OF ADJUNCT FACULTY IN HIGHER EDUCATION
INTELLECTUAL CURIOSITY – MIGRANT LABORERS: HOW ADJUNCT TEACHING SERVICES ARE UTILIZED AND VALUED

William Howard Kazarian
Hawaii Pacific University, USA

INTELLECTUAL CURIOSITY – MIGRANT LABORERS

Within the scope of research regarding the increased reliance upon adjunct faculty in higher education, few studies have focused on what might be a “greater crisis”: the growing dependence upon, and the general acceptance of this arrangement by, the part-time teachers who willingly fill these positions in spite of obvious disparities in pay and benefits, support, inclusion, and opportunity. Most of the literature, whether anecdotal or research-oriented, has looked at the physical conditions in which part-time faculty find themselves, ways to ameliorate their circumstances, and innovative perspectives on adjusting to a career that offers little to nothing in the way of a growth pattern and stability.

This research, conducted over the past six years on the utilization of adjunct faculty, centered upon their service in a private university and examined a) how they viewed their work and opportunities for inclusion; b) how their labor and contributions were viewed and valued by full-time faculty; and c) how their labor and services were utilized in the fabric of the university mission by department chairs and deans of colleges. To contextualize this research, it should be noted that the adjunct and full-time faculty studied taught in the English department, where the adjunct faculty numbered 56 while the full-time faculty held steady at 15 instructors. This research also looked into how adjunct faculty at this research site differed from affiliate faculty (also part-time) who taught courses in other academic areas. Adjunct faculty, whose lives and careers depended solely on teaching, held part-time positions in the liberal arts sector and were relegated to teaching general education, lower-division courses, while their affiliate counterparts taught both undergraduate and graduate courses in medicine, law, business, economics, and other professional studies.

This second group was accorded the status of “affiliate” because of the special real-world cachet they were perceived to bring into the classroom. For them, teaching was a part-time extension of a full-time career in the world outside academe. Additionally, affiliates enjoyed greater prestige, recognition, institutional support, and a more stable working relationship with the institution.

Adjunct faculty, on the other hand, were viewed as generalists who, whatever their particular academic background or scholastic and teaching experience, were deemed qualified to teach composition, basic math, history, and humanities courses. Their primary function was seen as satisfying two basic institutional needs: a) filling in on last-minute course additions; and b) teaching courses that would free up much-needed time for full-time faculty to serve in research capacities. As noted earlier, a great deal of the literature and research has focused on the issues surrounding the physical aspects and environment of adjunct work. In this sense, the issues might best be understood through the lens of Maslow’s “Hierarchy of Needs” (Maslow, 1954), wherein an individual’s existence is based upon needs that, when recognized, are fulfilled and, when missing, are acted upon to satisfy what is lacking. Maslow’s theory posited that as individuals move toward self-understanding (actualization), they begin to develop patterns and ways that direct them toward a better physical and psychological understanding of themselves in a wide array of life scenarios. Viewed in this light, Maslow believed that individuals discover their potential based upon their own values, beliefs, behaviors, and motivations to act on these factors in reaching a transcendent or actualized existence. This presupposes that individuals share common and equal access to all benefits, opportunities, and capacities in a balanced fashion.

In reality, access is determined by any number of factors that impact and shape lives in physical, philosophical, psychological, socio-economic, and environmental ways. Narrowing this down to the particular stakeholders here, adjunct faculty in higher education, it makes sense to examine how their circumstances are controlled not by their own devices but by the constructed environment of the career venue into which they have chosen to self-actualize.

While the general research has looked mainly at basic physiological needs (pay, occupational stability, health care, retirement benefits), it is clear that adjunct faculty have no path toward accessing the higher needs of “belongingness, esteem, and the need to know and understand” necessary for achieving “self-actualization” and what may be viewed as a tangible and meaningful career. To understand what may be preventing the transition from the lower, more basic needs to the higher needs, it is helpful to look at Maslow’s hierarchy in the more nuanced way provided by Alderfer (1972), who developed a similar hierarchy that organized the clusters as “existence, relatedness, and growth” and looked more closely at the causal effects of other factors on these aspects. Alderfer’s work was based upon the studies of Gordon Allport (1960, 1961), whose systems theory held that at each stage of development one’s interests are in the here and now and that whatever serves as a motivator responds to present effects. Viewed in this modified way, Alderfer’s hierarchy provides a more articulated and realistic view of what drives the issues and events directly affecting the professional lives and careers of part-time faculty in higher education, and of what may lie at the heart of their continued utilization as a migrant, underclass labor force within academia.

Alderfer's Hierarchy of Motivational Needs

Level of Need   Definition                                  Properties
Growth          Impel a person to make creative or          Satisfied through using capabilities in engaging
                productive effects on himself and his       problems; creates a greater sense of wholeness
                environment                                 and fullness as a human being
Relatedness     Involve relationships with significant      Satisfied by mutually sharing thoughts and
                others                                      feelings; acceptance, confirmation, understanding,
                                                            and influence are elements
Existence       Includes all of the various forms of        When divided among people, one person's gain
                material and psychological desires          is another's loss if resources are limited

(Source: http://chiron.valdosta.edu/whuitt/col/regsys/maslow.html)

The operative conditions become apparent in the “properties,” where Alderfer notes that “one person’s gain is another’s loss if the resources are limited” (Alderfer, 1972). This fits the developing corporatization of colleges and universities, which are increasingly faced with budget cuts, higher labor costs, and competition from online or “virtual” schools such as the University of Phoenix that rely less on brick-and-mortar campuses and tenured professors and more on technology, flexibility, and adapting to a new learning audience.

What is of greatest concern for adjunct faculty is not only how their services are utilized, but how they may be kept from a path toward “relatedness,” or enfranchisement within the faculty and through the professional venues and opportunities residing in the university itself. Hence “growth” and “self-actualization” (the permanence of a stable career) are negated by the very fact that there is often no bridge from “existence” to “relatedness,” no matter what the individual adjunct may aspire toward.

It must be understood that not all colleges and universities are guilty of using adjunct faculty in ways that conform neither to good labor practices nor, more importantly, to the stated mission of the university to maintain professional integrity and adhere to the highest teaching and research standards. To provide some contrast, this research project examined practices in place at a state community college system that utilized and valued the teaching, research, and collegial assets of adjunct faculty in positive ways. Since salary and benefits are a primary concern for all teachers (especially where the cost of living is higher than elsewhere), it was necessary to examine the pay differential between the private university and the community college. The Community College also provided a ladder system toward full-time positions, while the Private University did not. The salary ranges for adjunct faculty at the Community College and the Private University showed a significant difference in pay and a greater incentive to teach at one site over the other, a situation which has led to a crisis of hiring from a shrinking pool of eager and qualified teaching candidates for the Private University.

Table 1: Adjunct Faculty Pay: State Community College

Appointment Rank   Pay (per credit hour)   Total (three-credit course)
Instructor         $1,237                  $3,711
Assistant Prof.    $1,426                  $4,278
Associate Prof.    $1,551                  $4,653
Professor          $1,739                  $5,217

Personnel Action Form as of 6/7/2006. Note: This form is available on the campus website (http://www.lcc.edu).

288

W. H. Kazarian Volume 7 – Fall 2009

The pay scale at the Private University was not available to the general public, but the figures were current as of June 2008. While the salary level for adjuncts was determined by educational level, with instructors holding doctorates paid more than others (see Table 2), in practice only those with at least an earned Master’s degree were hired. Salaries for adjuncts were not determined by rank or title, since neither existed: all adjunct faculty were hired as “instructors,” and they could not, by contract, accumulate rank or time in service, since neither a ranking system nor a ladder or development system was offered.

Table 2: Adjunct Faculty Pay: Private University

Degree held (all three-credit courses)   Pay Per Course
Doctorate (Ph.D.)                        $2,750
Masters (M.A./M.S./M.B.A.)               $2,350
Bachelors (B.A./B.S.)                    $2,200

Note: Adjunct Instructor/Lecturer Salary Scale as of 6/1/2008.

The pay differential was significant not just in the salary paid for teaching courses but, as important, in how such a difference played into the perception of personal and professional value on the part of the individual adjunct and into the net loss of wages that might otherwise provide greater financial support within an already high cost-of-living economy such as Hawaii’s. For a teaching load of three courses per semester, an adjunct would earn $11,133 at the instructor level at the Community College compared to $7,050 at the Private University (at the Master’s rate), a difference of $4,083, or roughly 37% less for comparable work. Neither amount on its own could be considered a living wage for Hawaii.
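
As a check on this arithmetic, the following is a minimal illustrative sketch in Python (not part of the original study) of the semester pay comparison, using the instructor rate from Table 1 and the Master's rate from Table 2:

    # Semester pay for a three-course load of three-credit courses.
    cc_per_course = 3711   # Community College, instructor level (Table 1)
    pu_per_course = 2350   # Private University, Master's rate (Table 2)
    courses = 3

    cc_semester = cc_per_course * courses   # 11,133
    pu_semester = pu_per_course * courses   #  7,050
    gap = cc_semester - pu_semester         #  4,083
    pct_less = gap / cc_semester * 100      # about 36.7% less

    print(f"${cc_semester:,} vs ${pu_semester:,}: "
          f"gap ${gap:,} ({pct_less:.1f}% less at the Private University)")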

Adjuncts working in a state system of community colleges and universities could, and often did, cobble together multiple courses at different campuses in order to earn somewhat better wages than those reflected at the site of this case study. This raises the question of how many courses at different campuses an individual adjunct instructor can teach each semester while maintaining consistently high professional standards in teaching and in advising students.

Studies showed that compensation varied from institution to institution, with some schools providing shared insurance coverage, parking, supplies, and limited access to technology and to education grant funding (for research, publication, or presentation), while others provided no such compensation. In what may be considered a sliding scale based upon time invested and experience, some institutions have offered pay and compensation packages that increase incrementally (Gappa, 2000), but most colleges and universities have not, for various reasons, not the least of which is the high turnover of part-time faculty.

The Private University provided limited access to office keys and imposed a restricted limit on copy requests per semester. Its recent salary increase came about largely from the necessity of attracting faculty to teach, given stiff competition from the University of Hawaii state system. Nonetheless, wages and compensation remained far below those of the state university and the national average.

Essentially, the findings clearly indicated that the more institutional support provided, in terms of pay, physical needs (office space, equipment, parking), opportunity, and collegial respect, the more stable and consistent the part-time faculty would be in their service to the academic areas and to the institution itself, in both teaching and research.

Most of the research literature about adjuncts has centered on Gappa (1984a); Gappa and Leslie (1993); Kantrowitz (1981); Tuckman and Pickerill (1988); and Warme and Lundy (1988). More recent and emerging literature dealing with part-time and full-time faculty issues developed out of a sense of urgency and apprehension that long-held and well-established practices might erode (Hersh and Merrow, 2006). Some concerns had to do with faculty unity, good teaching practices, development of research and teaching strategies, and preservation of tenure. Salaries nationally ranged from $1,000 to $3,500 per course offering, with some colleges actually paying slightly less than the lowest figure (Gappa, 2000; Avakian, 1995). Research conducted at the Private University found that adjuncts teaching composition studies earned $1,100 per three-credit course in 2004 and saw an increase to $1,600 per three-credit course in 2005. Writing in Steal This University (2003), Kavanagh offered some personal insight based on his own teaching experiences:

Teaching expectations for ladder faculty at eminent universities declined in the late 1970s to the now-standard load of two courses or fewer each semester. Their courses are most often taught by adjuncts paid somewhere between $2,000 and $3,000 for each course. When more elite faculty are teaching their normal load, adjuncts are most likely to teach the lower level, “basic” courses that are less “sexy” to teach – and further removed from the research agendas of senior faculty. A shrinking Brahmin class of professorial-rank faculty enjoys academic careers and compensation commensurate with advanced training while a growing class of “untouchable” education service workers can obtain only poorly remunerated semester-to-semester jobs that offer no career prospects. (p. 77)

It would be unfair and unwise to conclude that the environment of academe fully resembles this landscape and that all adjunct faculty have been relegated to the lower classes described here; but the general fact in evidence was that, with a growing reliance upon adjunct faculty employed solely to teach, the standards by which these faculty worked and taught were determined by forces compelling colleges and universities to adopt, for better or worse, corporate-style outsourcing measures.

UTILIZATION OF ADJUNCT FACULTY – TWO DISTINCT APPROACHES

To discern differences regarding the status, conditions of hire, and utilization of adjunct faculty at the Community College, the interview questions focused upon the following areas: (a) utilization, responsibilities, opportunities, and protocols for inclusion and long-term employment of adjunct faculty; (b) forms and types of institutional support; and (c) institutional vision and commitment to needs, both from the standpoint of the institution and from that of any union voice (NEA and UHPA) for the adjunct faculty.

Throughout the course of the interview, pamphlets, instruction booklets, and fliers were provided as evidence of ongoing efforts to give adjunct faculty at the Community College opportunities for personal and professional growth and inclusion. The issues of utilization, opportunity, and inclusion, as viewed and addressed at the direction and discretion of the institution for its adjunct faculty and for how they serve and support the academic programs, are summarized in the narrative that follows. The narrative reflects the responses to the interview questions posed to the Staff Development Coordinator representing the Community College, clustered by similarity of themes and issues.

STAFF DEVELOPMENT COORDINATOR INTERVIEW RESPONSES

Regarding the utilization of adjunct faculty and their services in teaching and other institutional venues, it was noted that the Community College provided a “Guidebook” and a newsletter offering opportunities to serve in institutional professional growth areas. A centralized office of administrative support provided rewards for teaching and staff development, equitable dissemination of information, and free and equal access to the institution. Programs allowed adjunct faculty a choice about investing in their own professional lives and careers.

The traits most favored by the Community College in its adjunct faculty related to opportunity, access, and reward. To these ends, the institution and departments provided campus-wide forms of support, including an “outstanding lecturer” award, chances to invest in a professional career in many ways, and other incentive-oriented programs, including a banking system that offered a means to greater opportunity, equity, and tracking toward long-term employment.

The types of institutional support intended to provide greater inclusion and incentives were based upon communication, cultural guides for teaching, and guides to student and administrative services. The institution supported adjunct programs that included, inculcated, and promoted an inviting cultural environment for doing more than simply teaching. Tied to issues of currency, evaluation, and development for adjunct faculty, the institution supported ongoing programs that engaged adjunct faculty involvement, career development opportunities in working with cohorts, shared professional venues, and greater voice.

The institution supported adjunct involvement and career enhancement to develop steady, stable, and competent faculty; greater voice in self-governance; and clear and active communication among faculty, students, and the administrative staff.

It was fairly clear from the interview, and from the thorough scope and variety of newsletters and communication types, that institutional support was of key importance in ensuring that all adjunct faculty, as well as new hires, had an opportunity to learn about the campus environment and culture and, as important, to understand the ways in which opportunity and involvement in academic and student activities were seen as valued cultural assets. The key term used throughout the literature and in the interview was opportunity. The administration did not compel adjunct faculty to become any more involved than they saw as being in their own best interests.

290

W. H. Kazarian Volume 7 – Fall 2009

One means of institutional support was the “lecturers group,” which served as a channel of communication between the faculty and the administration. Its purpose was two-fold: to inform and to invite. The operative point, made most frequently, was that every opportunity was voluntary rather than expected or required.

Institutional Commitment and Union Involvement

Much has been written about adjunct faculty members attempting either to fold into existing unions that represent full-time faculty or to join or form a separate union. The primary difficulty in both cases has been that full-time faculty, and the unions that represent them, were usually not willing to invest the same amount of energy, time, and commitment in part-time faculty, primarily because of the adjuncts’ mitigated labor status.

Full-time faculty, while supportive of the issues faced by their part-time colleagues, often were not willing to take up the fight, since they were already occupied with their own professional concerns. Additionally, it has proven difficult to formalize union representation for a group that is disparate in both its professional currency and its investment in teaching opportunities, particularly when there seems little or no hope of ever gaining full-time employment in higher education.

Another factor was the uncertain quid pro quo of union affiliation for adjunct faculty: many felt they could not afford the dues and that parity in salary and benefits might not be forthcoming. The agent of change for adjunct faculty needs, as noted in the literature and in the sentiments of the full-time faculty participants in this case study, was the institution itself. The question of union membership was a consideration at the Community College, which differed from the Private University, where unions were disallowed.

The collective reflection by adjunct faculty expressed more pleasure in teaching at the Community College and focused upon such factors as free parking, an inclusive communal atmosphere, greater access to support systems (supplies, office help), and the overriding belief that opportunities for full-time or long-term employment were real and attainable. The information provided by the Staff Development Coordinator reflected practices at other colleges that are praised in the literature on the utilization of adjunct faculty.

UTILIZATION OF ADJUNCT FACULTY AT THE PRIVATE UNIVERSITY

While the fiscal circumstances and size of the two colleges differed somewhat, the issues regarding adjunct teaching service were the same and had an equal impact on all stakeholders, including the full-time faculty, academic programs, institutional research, faculty self-governance, students, and teaching effectiveness.

The key areas examined were the following:

1. Reliance, accommodation, inclusion, support, and professional opportunity;
2. Coordination and centralization of authority in developing protocols and programs with regard to hiring, evaluation, promotion, and enfranchisement of adjunct faculty;
3. Inclusion in governance, committee work, and participation in research and community service;
4. Concerns regarding the quality and integrity of the service of adjunct faculty to all stakeholders.

The interview questions were designed so that each administrator could respond in ways that best reflected real events, activities, and desired outcomes related to the lives and careers of adjuncts, and specifically how these adjuncts best served the needs of each academic department and its contribution to the university mission and vision.

Overall, the interviews responded favorably toward bringing attention to, and working for constructive resolution of, the issues surrounding adjunct faculty with respect to reliance, centralization, inclusion, and evaluation.

Based upon the responses from the deans in the four interviews conducted at the Private University, some relevant and interconnected themes emerged that showed a pattern of acknowledgement of the issues related to adjunct concerns, reliance upon adjunct teaching services, and the need for accountability in hiring, retention, support, inclusion, and compensation. It should be noted that, while each dean acknowledged a need for change and for implementation of policies that would better support adjunct faculty services in line with institutional standards, no changes or implementation had actually taken place.

Reliance on Adjunct Faculty

The dean (vice-president and dean of academic administration) reiterated the university’s need to continue to employ and utilize adjunct faculty because of the kinds of service they bring to a changing and complex system of student needs, emerging technologies, flexibility, and professional expertise outside of education. There were other considerations as well, regarding fiscal planning and outlays for emerging projects of greater priority and continuing responses to other university needs and programs.

Centralization of Hiring Policies and Acquisition of Adjunct Faculty

The dean acknowledged the implementation of a program that placed the initial procedures for hiring and vetting within individual academic departments, with final approvals under the purview of the deans. Another change was the creation of the Academic Support Council, which would act on behalf of the various colleges and academic programs in responding to and forwarding individual needs and requests.

Inclusion of Adjunct Faculty

The issue of inclusion of adjunct faculty covered a range of values comprising parity, support, opportunity, and terms of hire. The dean described change in this particular set of circumstances as implementing a tiered process (similar to a ladder progression or step-pay process). The driving force was termed “educational effectiveness,” which essentially asks each academic area of study to conduct a self-evaluation program assessing current practices, capacities, and learning outcomes that are in line with both the university vision and the academic mission and, at the same time, with current good practices across the higher education landscape.

Evaluation of Adjunct Faculty

The message coming from this conversation concerned how each academic area regarded its faculty. Full-time faculty and the coordinators were responsible for the kinds of opportunity and training they could provide to both new hires and adjunct faculty, and the primary tool for evaluation grew out of peer reviews and counseling. Students would also continue to fill out a “course/instructor” form using a Likert-scale evaluation each semester, but the results would go directly to the deans as printouts. Academic coordinators would then be advised about the performance of an adjunct instructor. Two significant ideas emerged from this interview.

Institutional Capacity

The first had to do with capacity and ability. If each department were responsible for hiring, vetting, evaluating, and supporting adjunct faculty, the task would be difficult, since there was no institutional support (protocols, course release for administrative time, consistency across the university, support for mentoring, and other ethical and legal considerations). The lack of support, in terms of the time and availability of willing and qualified full-time faculty to serve on committees, had already made faculty self-governance, participation in community service, and professional studies and research efforts increasingly difficult to accomplish.

Hiring Guidelines

The second concerned priorities. Throughout the interview, money and institutional priorities (other than hiring new full-time faculty) took first seat at the table. Since each college had its own system, the process of hiring, evaluating, and retaining a stable and qualified pool of candidates, particularly in English, was problematic.

In the case of the English department, the coordinator had very little authority over hiring, mentoring, or supporting adjunct faculty, since all decisions in these areas were made by the dean of the college of liberal arts, who stated that any changes involving adjunct affairs would directly affect the salaries of full-time liberal arts faculty. No such cause-and-effect relationship existed, however, in other colleges and their support for adjunct faculty concerns.

SUMMARY

The views expressed in the interviews with the four key administrators focused on some notable issues, the first having to do with the adjunct faculty who fill the requisite needs and responsibilities of each individual college, and the second with perceptions of the nature of higher education and its institutions, in both their traditional and their emerging forms. The former view was characterized by the understanding that affiliates (this term came up most frequently when referring to adjuncts who teach courses in business, communication, and nursing) did not fall under the same set of circumstances as adjunct faculty who teach in other academic areas, since the former were not as reliant upon their higher education contributions (salary) as their counterparts (teaching composition, history, and math) seemed to be.

One salient argument regarding the support the institution provided to affiliate faculty related directly to technology, economics, and social law. These professionals had the licenses, credentials, contacts, and immediate real-world, real-time affiliations and avenues that would better endow students with the kinds of expertise they would need for their chosen careers. Their contributions outside academe were seen as valuable connections toward internships and eventual employment for students.

Another consideration had to do with the self-awareness of individuals who chose a profession in the corporate world versus those who chose a position in higher education. While at one time each may have been believed to hold out great opportunity for employment and compensation, the reality has been shaped by changing real-world paradigms: more people being educated (a glut of Ph.D.s) creating greater competition, fewer tenure-track or career-track teaching positions, and a very low attrition rate. While there were certainly teaching opportunities within general education (K–12), most adjuncts expressed the view that teaching at the elementary and secondary levels was not what they had in mind for a career. The problem created by higher education systems, as well as by the adjunct faculty themselves, was that fiscal expediencies, tied to economic and labor realities, displaced the hoped-for opportunities to teach at the university or college level for these part-timers.

A third concern involved the terms and responsibilities of self-governance. The literature echoed the sentiments of many full-time faculty who bemoaned the fact that as their ranks thinned, self-governance became both time-consuming and physically overwhelming. While overall faculty populations increased with the greater reliance upon adjunct faculty, attention to self-governance as well as to research, conference participation, community service, and other scholarly activities diminished for lack of time and availability. In the interviews with administrators and full-time faculty, active participation by all faculty, full- and part-time, was considered a substantial cornerstone of the integrity of the university community but could not be satisfactorily realized while the pool of full-time faculty was shrinking.

Through the surveys, interviews, and modified working focus groups, the key issues arising from the increasing reliance upon adjunct faculty, especially those teaching part-time in the English department at the primary research site, were universally acknowledged.

In each query among the three groups of participants (adjunct and full-time English faculty and key administrative personnel), the issues and responses developed into a clear picture based upon the perspectives and values of each group. There was general agreement that there existed a difference between adjunct and affiliate faculty; differences in compensation, opportunity, inclusion, and support; and differences in how the basic professional needs of part-time faculty were being met. Of greatest concern, however, was the institution’s stated view within its “mission statement” and the fact that key administrative personnel at the primary research site did not reflect or embrace the same values statements concerning “the opportunity to excel, including resources and rewards commensurate with individual contributions and potential.” The realities of over-reliance and under-support belied the “mission.”

The key administrative personnel (deans) interviewed in this research at the Private University were unanimous in how they viewed adjunct faculty instructional services as opposed to those of faculty employed as "affiliates". In seeking to discover ways in which the institution might pursue substantial changes in utilization, inclusion, and opportunity, no reliable processes were put forward. Another consideration was that other institutional priorities took precedence over labor issues. Deans responded in unison that better utilization and accountability of adjunct teachers was a matter for the individual academic departments, which would have to implement their own protocols.

These protocols would not, however, be established by or provided with institutional support. On this latter point, the institution weighed the value of its programs and employees in terms of the viable economic growth or development that might be forthcoming. If a department showed significant contribution in terms of some form of measurable accountability (money), then institutional support would be available.

There were several factors which comprised the problematic situation surrounding the issues of part-time faculty in implementing new programs and protocols and advancing institutional support at the Private University. These included institutional vision, financial capability, addressing the changing needs of student populations, and institutional investment in both the global economy and global citizenship.

Throughout the literature and the experiences of all participants in this case study, and of faculty within higher education settings generally, there was a harsh realization that the face and tenor of what was once viewed and revered both inside and outside the university campus had irrevocably changed. The forces that compelled that change involved not just the particular issues of labor and parity, of utilization and accountability, but, more important, issues of control of direction, of purpose, of mission, of destiny. The move toward providing meaningful change, as seen in the literature, as heard through the voices of the participants, and as viewed in the activities and environments developed and implemented at other colleges and universities, pointed to the directions necessary amid the emerging shifts in higher education. There seems to be no definitive answer or solution to what appear to be pandemic concerns involving the greater reliance on adjunct teaching services, the erosion of full-time faculty availability to provide oversight of part-timers, and continued service in research and self-governance.

DISCUSSION

The primary factor limiting the depth and range of implications of the data was the lack of accessibility for adjuncts within the department and the university. Limited access to opportunity necessarily limited professional movement and vested interest for both parties: the adjunct instructor and the university. This condition painted a fairly narrow view given the range of activities to which qualified adjuncts could lend themselves, such as research, curriculum development, advising student organizations, or serving in other university-related social capacities.

By the nature of their work, adjunct faculty are transient; access to them was therefore difficult, and eliciting their thoughts over a period of time was problematic. The instability of the group of adjunct instructors in English also contributed to difficulties in establishing a consistent pattern in the attitudes and values of adjunct faculty participants at the Private University. The greatest factor depressing morale appeared to be the absence of any clear university-wide policy covering adjunct faculty in terms of utilization, accountability, and development of professional currency. Since there were no visible, codified procedural protocols regarding hiring practices, committee evaluations for vetting and position availability, or position announcements and salary, and since very few requirements were placed upon adjunct faculty (in composition), there was little chance for either adjuncts or their full-time colleagues to reflect clearly or deeply on the status of the part-time faculty at the primary site. The Private University had little to offer adjunct faculty concerning their professional welfare, compensation, opportunity, or inclusion.

The ongoing needs and terms of hire were so fluid that, other than the requisite credential (a Master's degree) and the expectation that they teach and hold office hours, there was little else to build on with respect to the array of issues raised in this research. While the full-time English faculty in this research stated their empathy, they also felt there was little they could do to change things.

The comparison to the Community College served as a way to measure the desirable conditions expressed by the participants in this research against the non-existent conditions of teaching and reward experienced at the Private University.

Some full-time English faculty participants (40%) expressed complete confidence in the abilities and contributions of adjunct faculty and stated that more needed to be done to connect them to both the full-time faculty and the university. Yet most of the full-time English faculty (60%) felt that the adjunct faculty's level of service was only "sometimes" met.

Specifically, under the categories of teaching skills and effort, most full-time faculty (77%) believed adjuncts were "sometimes" adequate, and in the categories of effectiveness and adequacy, a clear majority (93%) agreed that these components were sometimes met. These responses were linked more to the limitations the institution placed upon the adjunct faculty participants than to their own professional capabilities and desire to be valued contributors to the university community and its programs. Full-time English faculty and the administrative participants at the Private University in this case study stated that if any changes were to be brought forward and put into action, any new initiatives specifically related to adjunct faculty affairs could only happen with full administrative and institutional support.

This impression was one of two important concerns expressed by the full-time English faculty. The survey results indicated that the majority of full-time English faculty stated that the needs of adjunct faculty had to be more fully met (100%) and that improved conditions regarding opportunity, stability, and inclusion were considered equally significant (93%) to effect improvement in the utilization and support of academic programs.

Any decisions with regard to policies governing all faculty in terms of hire, utilization, evaluation, and opportunity remained under the ultimate discretionary stewardship and control of the senior administrators (deans and the president). These issues included not only creating programs that provided support, guidance, and inclusion (mentoring, collaborative research, professional opportunity) but also concerns reliant upon institutional support for office space, equipment, pay and benefits, and long-term commitment in employment.

Through modified focus groups and interviews, full-time faculty and administrators noted that any change in the lives and welfare of adjunct faculty that specifically included a monetary obligation would directly impact full-time salaries, support for research endeavors, and the hiring of new full-time instructors. These considerations were a significant part of an ongoing conversation within the Liberal Arts Faculty Assembly addressing issues of adjunct faculty and pay parity for the liberal arts full-time faculty. The Faculty Council had (and still has) on its agenda the issue of benefits for full- and part-time faculty and the alignment of salaries with other colleges and universities in Hawaii.

This last point, viewed by all participants as an important step toward inclusion, better teaching, and a more viable connection of adjuncts to the rest of the faculty, meant that full-time faculty who wanted to implement such programs would have to do so on their own and without compensation. Without institutional support, this also raised two additional considerations, related to academic program consistency and integrity (fairness, appropriateness, good practices) and to legal issues.

A second substantial issue, noted by the full-time faculty participants in general conversation and outside the boundaries of this research, had to do with practical considerations. If adjunct faculty were given voting rights, attended faculty meetings, and served on university committees, how would their voice affect full-time faculty issues, which were necessarily different from those of the adjunct faculty? And how, if at all, would adjunct faculty be compensated?

The anxiety of full-time faculty, as explicitly expressed in meetings of the liberal arts faculty assembly (Spring/Fall, 2007), was that any changes to enhance and enrich the professional lives of adjunct faculty at the Private University would be drawn from the professional benefits allotted to full-time faculty. The metaphor most often cited by the dean of the college of liberal arts was that of a pie divided among the various constituencies within the Private University community. A larger slice for adjunct faculty would necessitate a smaller slice for full-time faculty. The image was vivid and lasting.

GUIDELINES FOR MANAGEMENT REFORM

To gain clearer insight into the views and expectations of all the participants in this research, it was necessary to examine the issues in the context of recent research concerning labor and management in higher education, shifts and trends in how education is distributed, changes in student populations and in the transmission of teaching information, and the cultural and political exigencies that influence and drive these changes to the traditional venues and traits of higher education.

In one study by Gumport and Sporn (1999), sponsored by the National Center for Postsecondary Improvement (NCPI), a number of suggestions were brought forward that mirrored some of the issues brought out in this research. The evolving challenges to colleges and universities, noted as "environmental changes for universities," indicated several patterns and reasons behind what the authors term a "point of revolutionary, rather than evolutionary change and the demands of global capitalism (which) hinder the university's ability to fulfill its cultural mission." These included:

1. Financial crisis caused by decreased government support for students.
2. Devolution or decentralization of responsibility to the institutional level.
3. International competition for funds, faculty, and students.
4. Governmental regulations to improve quality in teaching and learning.
5. Changing student demographics.
6. New technologies. (http://www.stanford.edu/~gumport/publications.html)

The factors from the above list that were critically important and connected to this research were the decentralization of responsibility; international and regional competition for funds; changing student demographics; and new technologies.


The Private University does not receive state or federal funding for its programs and thus must compete in other ways. These "ways" may help explain how and why institutional policies pay less attention to the needs of adjunct faculty and more to the needs and priorities the university deems more important for generating and maintaining economic viability and growth in enriched and lucrative market venues such as distance learning programs and alliances with corporate entities.

Gumport and Sporn (1999) noted the expectation that higher education contribute to a country's national productivity: institutions are "expected to innovate new products and services, as well as to collaborate in product development with industry" (p. 7).

FINDINGS

In focus groups conducted at the primary research site, the issues surrounding and confounding the adjunct teaching population emerged in ways that deepened the initial examination of these concerns. When given the opportunity to voice their collective and personal feelings openly, participants' responses to the key issues helped to construct meaning for both the adjunct participants and the researcher. While the surveys and interview protocols were relatively standard, it was necessary to modify the focus group protocol involving adjunct participants. A modified service focus group protocol was chosen because all of the adjunct participants initially expressed an unwillingness to participate in the research: they felt such research was meaningless unless the results led to immediate change, and most felt that participating only served to highlight the negative issues of their labor and amplify their feeling of victimization.

Part-Time Faculty Teaching in the English Department at the Private University
Two sessions: September 16, 2003 (12 participants); February 23, 2004 (7 participants)

1. What kinds of professional support are important? (Red)
2. What significant changes would you like to see? (Orange)
3. What investments would you be willing to make? (Blue)
4. Do you plan to stay in education? (Yellow)
5. What compels you to continue teaching part-time? (Green)

Question One: Fair pay and benefits, good working conditions (equipment, access to support), and fair and respectful treatment by colleagues. 100%

Question Two: Fair treatment as a professional. More opportunities to belong or participate in activities. 63%

There needs to be open communication and access to university services and support. 36%

Question Three: How can I be expected to do any other work if all my time is spent going from job to job? 42%

I will make investments when I know for sure what will count toward getting a full­time position or a longer contract. 58%

Question Four: I am keeping my options open but I am still hopeful I might be able to get a full­time job. 21%

Seek out new opportunities 26%

I will stay in education either as a teacher or administrator. 53%

Question Five: I like teaching, especially at the college level. I enjoy being with the students and I really feel like the position is somewhat prestigious. 74%

I look at teaching part­time as a way to develop my skills toward another career. 26%

Surveys and interview responses reflected the same attitudes, values, beliefs, and visions for present and future personal and professional career directions.

The responses to questions four and five point to a belief that some career opportunity will be available in the long run. Sadly, this is a wan hope. While colleges and universities ought to do all they can to support their faculty, full- and part-time, the distinct reality is that the temporal plane, both economic and social, does not reward what might be considered wishful thinking or blind devotion to belief. Adjuncts who willingly take on the responsibility of teaching under conditions that promise a less-than-fulfilling or rewarding future are not being realistic. This dream is at the heart of what is problematic about adjunct teaching, individual desires and abilities, and the vagaries of circumstance.

CONCLUSION AND RECOMMENDATIONS

To better understand the lives and views of the adjunct participants in this case study, it was necessary to look toward contexts that point directly to the problems surrounding the prevalent issues and concerns facing both adjunct faculty and their full-time colleagues. From the real-world perspective, three aspects emerge.

One, many student candidates majoring in English, to use a typical example, view their studies through the narrow window of academic accomplishments leading toward the teaching profession. In fact, many outside academia have the impression that those who specialize in English studies will most likely become teachers or be relegated to teaching. These beliefs were echoed in the sentiments elicited in interviews with the deans at the Private University.

Two, few advisors seem willing or able to articulate the fact that there are many and varied career opportunities suitable to the academic studies of those who major in English. Golde, Walker, et al. (2006), in Envisioning the Future of Doctoral Education, noted:

The field of English studies sees itself solely in relation to academia. The often­used term “the profession” refers to the academic profession, obscuring the significant contributions of English doctorate holders to the publishing industry, writing and editing professions, government and nonprofit agencies, and secondary teaching. (http://www.josseybass.com)

Three, in what may be the particular case and condition of adjunct faculty who teach English in Hawaii, many candidates lack access to the multiple institutions found on the mainland and are limited to one state university system. When the options for full-time employment are minimized, the opportunity for full-time teaching comes into direct conflict with the competing realities of few openings and many applicants.

Additional problems that emerged from this case study revealed that many adjunct faculty were unable to move from Hawaii due to obligations such as a spouse who maintains a full-time job, as well as the desirability of living in this particular environment, long-term kinships, and other ties that directly affect mobility.

Another factor that colors the world of the adjunct, and in particular the participants in this research, is that many if not all who hold a doctorate in English have spent much of their lives in the classroom, first as students and later as teachers or teaching assistants. This life, as expressed by many in this research, was viewed as comfortable and rewarding. However, given the reality of little to no hope of gaining full-time teaching status in higher education, coupled with the need to meet the expenses incurred in earning that diploma and to survive in an inflated economy, adjunct issues and concerns are even more critical.

Much of the literature, including the anecdotal stories which comprise a great deal of the content in publications such as Adjunct Advocate and Adjunct Nation, comments upon these very issues. These views reflect part of the problems associated with the adjunct populace and their circumstances in terms of labor, utilization, value, and currency.

Due to the lack of visibility of nonacademic careers and resistance from some in the discipline to actively promote these careers, “there remains a group of Ph.D. holders trained to perform and teach literary research and criticism and unable to find positions to pursue those interests in the way they had imagined; they are unwilling to think about the profession more broadly” (Golde, Walker, et al, 2006, p. 353).

SUMMARY

The central issues derived from the problems adjunct faculty face are based upon the external conditions placed upon the educational paradigm, its changing needs, and fluctuating exigencies. An adjunct, from a mitigated standpoint, is not always at the best advantage to make "productive effects upon himself and his environment" (Alderfer, 1972). The agent for change is the adjunct, who must realize that in the particular case of higher education, he or she must first develop and promote a strong pedagogical and research history in order to be competitive in a market where competition is not simply intellectual curiosity but where positions are framed in low turnover and nearly zero attrition.

An essential part of a successful path to career achievement and self-fulfillment also requires a strong sense of realism for both the short and long run, and a balanced approach that values work inside as well as outside academe. The successful candidate is one who is capable and willing to view and value what they can bring not only to their own lives but also to the lives of others they might serve.

REFERENCES

Alderfer, C. (1972). Existence, relatedness, & growth. New York: Free Press.
Allport, G. (1960). Personality and social encounter: Selected essays. New York: Beacon Press.
Allport, G. (1961). Pattern and growth in personality. New York: Holt, Rinehart and Winston.
Avakian, A. (1995). Conflicting demands for adjunct faculty. Community College Journal, 65(6), 34-36.
Gappa, J. M. (1984a). Part-time faculty: Higher education at a crossroads. ASHE-ERIC Higher Education Research Report No. 3. Washington, D.C.: Association. Retrieved March 12, 2004: http://www.advancingwomen.com/awl/winter2000/blanke-hyle.html
Gappa, J. M. (2000). The new faculty majority: Somewhat satisfied but not eligible for tenure. New Directions for Institutional Research, (105), 77-86.
Gappa, J. M., & Leslie, D. W. (1993). The invisible faculty. San Francisco, CA: Jossey-Bass Publishers.
Golde, C., Walker, G., et al. (2006). Envisioning the future of doctoral education. San Francisco, CA: Jossey-Bass Publishers.
Gumport, P., & Sporn, B. (1999). Institutional adaptation: Demands for management reform and university administration. National Center for Postsecondary Improvement, Educational Research and Development Center. R309A60001 CFDA 84.309A. U.S. Department of Education. NCPI Technical Report Number 1-07. Retrieved January 10, 2007:
Hersh, R., & Merrow, J. (Eds.). (2006). Declining by degrees: Higher education at risk. New York: Macmillan.
Huitt, W. (2004). Maslow's hierarchy of needs. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved December 16, 2008, from http://chiron.valdosta.edu/whuitt/col/regsys/maslow.html
Johnson, B., Kavanagh, P., & Mattson, K. (Eds.). (2003). Steal this university: The rise of the corporate university and the academic labor movement. New York: Routledge.
Kantrowitz, J. (1981). Paying your dues, part-time. In G. De Sole & L. Hoffman (Eds.), Rocking the boat: Academic women and academic processes. New York: Modern Language Association of America.
Maslow, A. (1954). Motivation and personality. New York: Harper.
Tuckman, H. P., & Pickerill, K. L. (1988). Part-time faculty and part-time academic careers. In D. W. Breneman & T. I. K. Young (Eds.), Academic labor markets and careers. New York: Falmer Press.
Warme, B., & Lundy, K. (1988). Erosion of an ideal: The presence of part-time faculty. Studies in Higher Education, 13(2), 201-213.


SELF-CONCEPT, BEHAVIOR AND CITIZENSHIP STATUS: RELATIONSHIPS AND DIFFERENCES BETWEEN ADOLESCENTS' SELF-CONCEPT, BEHAVIOR AND CITIZENSHIP STATUS IN MONTSERRAT, BWI

Donnette Bagot­Allen, Suzy Harney and Joane W. McKay University of the Virgin Islands, USA

ABSTRACT

The impact of immigration on adolescents' self-concept, behaviour and citizenship status is pertinent in the context of Montserrat, a British dependent island that has experienced an ongoing eruption of the Soufriere Hills volcano since 1995. This eruption has forced two-thirds of the island's population to relocate to Britain, the United States of America and other Caribbean islands. Since then, people from other Caribbean islands and countries have immigrated to Montserrat, possibly to fill the labour force and to seek economic and other opportunities. As a result of this immigration there are many immigrant students and return-migrant students enrolled in all the primary and secondary schools of Montserrat's education system. The new students include many from other Caribbean countries, some of whom are unfamiliar with the English language. The literature supports that immigrant adolescents must learn new cultural norms and new behaviours consistent with those norms, and must adjust their self-concept to these new behaviours and norms in order to survive in their new environment/society. The Bracken Multidimensional Self-concept Scale, which examines six primary environmental contexts in which children operate as either passive or active agents, was utilized in this study, since the typical child spends most of his or her time acting on or within these six primary environmental contexts: social, competence, affect, academic, family, and physical.

BACKGROUND AND INTRODUCTION

Montserrat, a British dependent island located in the Caribbean Sea, has been in the throes of a volcanic eruption spanning 14 years, begun on July 18th, 1995, by the Soufriere Hills volcano. Much of the island was devastated: two-thirds of the population was forced to migrate abroad while others were internally displaced in the Caribbean region. Montserrat had a population of 10,000 Montserratians before the volcano; since the last eruption, in July 2003, an estimated 8,000 have left the island. Most of the students and many of the teachers left the island, which severely impacted the numbers on roll in the primary and secondary schools. Enrolment fell below 100 in the secondary school and has since risen steadily to the present 350. Since then, people from other Caribbean islands and countries have immigrated to Montserrat, possibly to fill the labour force and to seek economic and other opportunities.

As a result of this immigration there are many immigrant students and return-migrant students enrolled in all the primary and secondary schools of Montserrat's education system. The new students include many from other Caribbean countries, some of whom are unfamiliar with the language. According to the Ministry of Education Montserrat Annual Report (2007), total student enrolment in primary and secondary education is 856. Montserrat's 2001 census records a total of 296 children (0 to 19 years) who were born abroad and came to live in Montserrat after 1991.

A report from a review of Montserrat Secondary School reveals that parents believe teaching is frequently thwarted by bad behaviour: students do not attend lessons but instead wander the site and disturb other lessons; there have been incidents of violence directed toward adults in the school; and there are allegations of serious misconduct, for example drug-taking and sexual misconduct, while students are out of lessons (Review of Montserrat Secondary School, 2007; Montserrat Reporter, 2007). The review further revealed several angry students who resort readily to violence, and ethnic tension between Montserratians and immigrants was also cited as a contributory factor (Review of Montserrat Secondary School, 2007).

The high enrolment in the primary and secondary schools of students born outside of Montserrat, and the rise in behaviour problems in the schools, including attention-seeking, hyperactive, limit-testing, sexual, and violent and aggressive behaviour, have indicated the need for research on self-concept and behaviour among immigrant and non-immigrant students in Montserrat.


The literature review revealed that immigrant adolescents must learn new cultural norms, learn new behaviours consistent with these norms, and adjust their self-concept to these new behaviours and norms; such adolescents have shown various direct and indirect expressions of distress and have expressed less satisfaction with their lives relative to their classmates. The literature further revealed that immigrant adolescents who experience discrimination are more likely to embrace their native identity and reject identification with the host-country culture, which can impact their present behaviour and self-concept. Also, immigrant students who face linguistic barriers may experience psychological problems such as depression, low self-esteem, anxiety, and loneliness, and can become frustrated, irritated, and lethargic, which can further impact their self-concept and behaviour in their present environment. Children develop as many self-concepts as the unique environmental contexts in which they operate as either passive or active agents. The typical child spends most of his or her time acting on or within the six primary environmental contexts depicted in the Multidimensional Self-concept Scale: social, competence, affect, academic, family, and physical.

Accordingly, this study investigates whether there is a relationship between citizenship status, present self-concept and behaviour among early adolescent immigrant students in the primary schools in Montserrat, using a theoretically sound, nationally normed instrument, the Multidimensional Self-concept Scale (MSCS; Bracken, 1992). Importantly, however, literature and research appear to be lacking on citizenship status as it relates to children's self-concept and behaviour. It was hypothesized that there is no relationship between self-concept and behaviour; between citizenship status and self-concept; or between citizenship status and behaviour.

PURPOSE OF THE STUDY

The purpose of this study was to find out if there is any relationship between citizenship status, self-concept and behaviour among early adolescent immigrant students.

Research Questions
1. Is there a relationship between self-concept and behaviour?
2. Is there a statistically significant self-concept mean rank difference between citizen and immigrant students?
3. Is there a statistically significant behaviour mean difference between citizen and immigrant students?

METHODOLOGY

Setting
This study was done in two government schools and one private church school. The participating government schools are located in a newly developed community for Montserratians relocated because of the volcano and in a developing central area proposed to be the new town (capital), where a mixture of nationalities resides; the private school is located in an upscale residential community. All sessions were conducted in classroom settings at each participating school.

Participants
Participants in this study included 96 students selected from fifth- and sixth-grade regular education classes in three primary schools in Montserrat. Demographically, the sample was 54% (n = 52) boys, 46% (n = 44) girls, 58% (n = 56) citizen status and 42% (n = 40) non-citizen status. The citizens' sub-sample included students born in or out of Montserrat to a Montserratian parent, while the non-citizens' sub-sample included adolescents from Antigua, the Dominican Republic, Guyana and Jamaica, living with at least one parent or relative and attending school in Montserrat. The mean age of participants with non-citizenship status was 10.8 years (SD = 0.64 years) and the mean age of participants with citizenship status was 10.78 years (SD = 0.56 years). The citizenship-status subgroup included 52% boys and 48% girls, while the non-citizenship-status subgroup included 57% boys and 43% girls.

Sampling
A random sampling technique was utilized in this study. The names of all fifth- and sixth-grade students from the three primary schools were compiled on a master sheet. A random starting number was drawn (number 4); the first three names that followed position 4 were selected from the population, every fourth name thereafter was omitted, and this pattern of selecting three names and omitting one continued until the last set of three names was selected. This technique provided a sample of 96 participants, which was representative of the population. (A sketch of the selection pattern follows.)
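To make the selection pattern concrete, here is a minimal Python sketch of the "select three, omit every fourth" walk described above; only the pattern and the starting number (4) come from the study, while the roster size, names, and function name are hypothetical illustrations.

```python
# A minimal sketch of the sampling walk; only the select-3/omit-1 pattern
# and the starting number (4) are taken from the study. Everything else
# (roster size, names, function name) is hypothetical.
def select_participants(roster, start=4, take=3, skip=1):
    """Keep `take` names, then omit `skip` names, starting after `start`."""
    selected = []
    i = start                                # begin after the drawn number
    while i < len(roster):
        selected.extend(roster[i:i + take])  # keep the next three names
        i += take + skip                     # omit every fourth name
    return selected

roster = [f"student_{n}" for n in range(1, 129)]  # hypothetical 128-name roster
print(len(select_participants(roster)))           # 93 of the 128 names here
```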


Instrumentation
Self-concept. Bracken's Multidimensional Self-Concept Scale (MSCS) was chosen because it deals with the global and domain-specific self-concepts discussed in the literature. The six MSCS context-dependent domains are represented regularly in the vast number of existing self-concept scales and in the self-concept and psychosocial adjustment literature (Bracken & Mills, 1994; Hattie, 1992; Keith & Bracken, 1995; Wylie, 1979, 1989). No other primary self-concept domains are identified as regularly in the literature as these six, and the full range of available psycho-educational assessment tests and scales also frequently assesses these domains.

The MSCS is founded on the assumption that children's self-concepts are learned behavioural patterns that have come under the stimulus control of context-specific environments. It is presumed that children respond in fairly predictable fashion in specific settings, thus demonstrating relatively stable self-concepts in each context-specific domain. Additionally, children's developed self-concepts in the various domains allow for the prediction of future behaviour in each respective domain. Hence, self-concept is an interaction between environmental contexts and the child's behavioural response to the environment (Bracken, 1996). The MSCS is a 150-item self-report inventory appropriate for either individual or group administration to youth between the ages of 9 and 19 years, inclusive. The MSCS provides a Total Scale score, as well as standard scores (M = 100; SD = 15) for each of the six domain-specific scales (i.e., Social, Competence, Affect, Academic, Family and Physical). Each of the six MSCS subscales comprises 25 items; thus each scale contributes equally to the total scale (Jackson, 1998). The MSCS was normed on a national sample of 2,501 students, matched closely to national demographics. The scale was normed at 17 sites drawn from all four regions of the United States. Analyses of the MSCS data according to age, race, and sex have found only minor differences across the 11 age levels, sex groups, and racial groups (Crain & Bracken, 1994).

The MSCS examiner's manual reports Total Scale internal consistency of .98 for the entire sample. Internal consistency of the six scales ranges from .87 to .97 (mdn = .92). MSCS Total Scale stability over a 4-week interval is .90, and subscale stability coefficients range from .73 to .81. Several empirical studies are reported in the Examiner's Manual (Bracken, 1992) and elsewhere (e.g., Montgomery, 1994; Schicke & Fagan, 1994). The technical adequacy of the MSCS has been described in independent reviews as "both reliable and valid" (Rotatori, 1994) and "excellent" (Willis, 1995), and the instrument has been described overall as "perhaps the most psychometrically sound measure of self-concept" (Bear, Minke, Griffin, & Deemer, 1997). Self-concept classifications corresponding to the standard score ranges were adapted from the MSCS manual as follows: above 135 = extremely positive self-concept; 126 to 135 = very positive self-concept; 116 to 125 = moderately positive self-concept; 86 to 115 = average self-concept; 76 to 85 = moderately negative self-concept; 66 to 75 = very negative self-concept; and below 66 = extremely negative self-concept.
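As a quick illustration of the scoring bands just quoted, the following Python sketch maps an MSCS standard score to the manual's classification labels; the function name is hypothetical, and only the cut-offs come from the text above.

```python
# A minimal sketch, assuming only the MSCS cut-offs quoted above
# (standard scores with M = 100, SD = 15); the function name is illustrative.
def classify_mscs(score: int) -> str:
    """Map an MSCS standard score to the manual's classification."""
    if score > 135:
        return "extremely positive self-concept"
    if score >= 126:
        return "very positive self-concept"
    if score >= 116:
        return "moderately positive self-concept"
    if score >= 86:
        return "average self-concept"
    if score >= 76:
        return "moderately negative self-concept"
    if score >= 66:
        return "very negative self-concept"
    return "extremely negative self-concept"

print(classify_mscs(92))  # -> average self-concept
print(classify_mscs(71))  # -> very negative self-concept
```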

The Piers-Harris Children's Self-Concept Scale (Piers, 1984), Coopersmith Self-Esteem Inventory (Coopersmith, 1984), Self-Description Questionnaire I (Marsh, 1988), Self-Description Questionnaire II (Marsh, 1990), Tennessee Self-Concept Scale, Revised (Roid & Fitts, 1988) and the Multidimensional Self-concept Scale were all compared on the six measures of self-concept, i.e., social, competence, affect, academic, family and physical. In comparison, the MSCS is based on a more comprehensive, context-dependent, multidimensional model of social-emotional adjustment and assessment (Bracken, 1996).

Behavior. The observation of students' behavior is founded on the assumption that behavior is "everything we do, both verbal and non-verbal, that can directly be observed" (Educational Psychology, p. 227). In applied behavior analysis, the quantifiable measures are derived from the dimensions of behavior (http://en.wikipedia.org/wiki/Applied_behavior_analysis): repeatability, i.e., how many times the behavior occurs; temporal extent, i.e., how long the behavior occurs; and temporal locus, i.e., when the behavior occurs (Johnston & Pennypacker, 1993b).

The number of times each acting-out behavior occurred over the period of observation for each participant was recorded on a frequency table. The raw scores obtained for all behaviors were put into ranges: below 8 = extremely well behaved; 8 to 16 = very well behaved; 17 to 25 = average behavior; 26 to 35 = moderately poorly behaved; 35 to 45 = very poorly behaved; and 46 to 80 = extremely poorly behaved. These ranges represent the measurement of the observations made. The observations were calculated out of 80 for each behavior, since 80 represents the maximum possible occurrence of a behavior within the period of observation. The range "below 8" means a participant repeated one acting-out behavior between 0 and 7 times out of the maximum possible 80 occurrences. Teachers used the Behavior Record Sheet (BRS; Hinds, 2005) to record observed behaviour. The resulting classification does not label the child; it provides a description of the degree of positive and/or negative behavior the child expressed on each observed behavior. The original BRS was used with 36 children on Saba, a 5-square-mile Caribbean island.
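The frequency-table step can be sketched the same way: tally each recorded incident per behavior and apply the bands above. The event log below is hypothetical, and the ambiguous boundary at 35 (which the text assigns to both the 26-35 and 35-45 bands) is resolved downward here as an assumption.

```python
# A minimal sketch of tallying one participant's acting-out incidents
# (hypothetical log) and applying the study's bands; assigning the boundary
# value 35 to "moderately poorly behaved" is an assumption.
from collections import Counter

def classify_behavior(count: int) -> str:
    """Map a raw frequency (maximum 80) to the study's behavior bands."""
    if count < 8:
        return "extremely well behaved"
    if count <= 16:
        return "very well behaved"
    if count <= 25:
        return "average behavior"
    if count <= 35:
        return "moderately poorly behaved"
    if count <= 45:
        return "very poorly behaved"
    return "extremely poorly behaved"

events = ["attention seeking"] * 5 + ["limit testing"] * 18  # hypothetical log
for behavior, count in Counter(events).items():
    print(f"{behavior}: {count} -> {classify_behavior(count)}")
```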


Procedure
Permission from the schools' principals was obtained, followed by parental approval for each participant. The MSCS was administered to both fifth and sixth graders in their classrooms at their separate schools. The examiner read the MSCS directions printed in the test booklet aloud while the participants read along. Each item on the MSCS was read aloud for participants who might have had reading difficulty. All participants were encouraged to complete the MSCS, with the examiner available to answer any questions that arose. To reduce unanswered items, the examiner went through each participant's booklet to ensure all items were answered; participants then completed items that had been left unanswered. No participant was forced to complete items they deliberately left out.

Participants’ behavior was carefully observed throughout all class sessions/periods and was recorded daily for two consecutive weeks on a behavior record sheet under the following displayed behaviors: attention­seeking behavior, limit­ testing behavior, antagonistic/hurtful behavior, violent and aggressive behavior, and hyperactive behavior. The first week of observation was done in the morning sessions/periods; the second week observation and recording was done in the afternoon sessions. This took participants who displayed acceptable or unacceptable behaviors predominantly during mornings and/or afternoons into consideration.

DATA ANALYSIS

Research Question 1: Is there a relationship between self-concept and behavior?
Descriptive statistical analysis. Self-concept was classified into four groups, namely extremely negative self-concept, very negative self-concept, moderately negative self-concept and average self-concept, and put into a frequency and percentage table. The variable self-concept was then put into a descriptive median distribution showing the 25th, median and 75th percentiles. Behavior was likewise classified into groups, namely extremely well behaved, very well behaved, average behavior and moderately poor behavior, and put into a frequency and percentage table. A mean and standard deviation table was constructed to further analyze behavior.
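A minimal sketch of this descriptive step (frequencies, percentages, and percentiles), assuming pandas and hypothetical data that do not reproduce the study's counts:

```python
# A minimal sketch of the descriptive tables with hypothetical data;
# the label counts and score range below are illustrative only.
import pandas as pd

labels = pd.Series(["average"] * 40 + ["moderately negative"] * 30
                   + ["very negative"] * 20 + ["extremely negative"] * 6)
print(labels.value_counts())                      # frequency table
print(labels.value_counts(normalize=True) * 100)  # percentage table

scores = pd.Series(range(60, 116))                # hypothetical domain scores
print(scores.quantile([0.25, 0.50, 0.75]))        # 25th, median, 75th percentiles
```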

Inferential statistical analysis. Spearman's two-tailed correlation test was used to statistically analyze this research question. Spearman's rho was employed because the variable self-concept is ordinal in scale of measurement; the variable behavior is ratio (but can be scaled as ordinal). The correlations were conducted on the individual participants' MSCS domain scores and type-of-behavior scores.
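A minimal sketch of this correlation step, assuming SciPy and two hypothetical score vectors (one MSCS domain score and one behavior count per participant):

```python
# A minimal sketch of the Spearman step with hypothetical data;
# spearmanr returns the two-tailed p-value by default.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
affection = rng.integers(60, 116, size=96)   # hypothetical MSCS domain scores
antagonistic = rng.integers(0, 30, size=96)  # hypothetical behavior counts

rho, p = spearmanr(affection, antagonistic)
print(f"rho = {rho:.2f}, p = {p:.2f}")       # compare p to the .05 level
# Shared variance is rho**2 (e.g., .21**2 ~= .044, about 4.4%).
```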

Research Question 2: Is there a statistically significant self-concept mean rank difference between citizenship statuses?
Descriptive statistical analysis. A distribution of citizenship-status frequencies and percentages was created to analyze Research Question 2, with the self-concept domains classified as extremely negative self-concept, negative self-concept, moderately negative self-concept and average self-concept. This table showed a frequency and percentage distribution for each group (citizens and non-citizens) that could be compared. A table showing self-concept percentiles by citizenship status was further set up to assist in analyzing Research Question 2, where comparisons between the groups were made.

Inferential statistical analysis. The Mann-Whitney test was used to statistically analyze this research question. A test statistic table was set up comparing citizens and non-citizens via mean rank. The test was conducted on the individual participants' MSCS scores on each of the six domains.
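A sketch of the same test in SciPy, assuming hypothetical domain scores split by citizenship status; the group sizes (n1 = 56 citizens, n2 = 40 non-citizens) match those reported above, everything else is illustrative.

```python
# A minimal sketch of the Mann-Whitney step with hypothetical scores;
# only the group sizes are taken from the study.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
citizens = rng.normal(95, 12, size=56)       # hypothetical domain scores
non_citizens = rng.normal(88, 12, size=40)

u, p = mannwhitneyu(citizens, non_citizens, alternative="two-sided")
print(f"U = {u:.2f}, p = {p:.3f}")           # compare p to the .05 level
```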

Research Question 3: Is there a statistically significant behavior mean difference between citizenship statuses?
Descriptive statistical analysis. A table showing the citizenship-status breakdown of each behavior, with its frequency and percentage, was set up to assist in analyzing Research Question 3. Each behavior category was classified as extremely well behaved, very well behaved, average behavior or moderately poorly behaved. This breakdown showed how each group behaved. Another table was constructed to show the means and standard deviations for each behavior per group.

Inferential statistical analysis. The independent-samples t test was used to statistically analyze this question. A test statistic table was set up comparing citizens and non-citizens via means. The participants' individual scores on each type of behavior served as the dependent variables.
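A sketch of this step in SciPy with hypothetical behavior counts; mirroring the equal-variances logic reported later in Table 11, Levene's test decides which form of the t test to use, which ttest_ind exposes via its equal_var flag.

```python
# A minimal sketch of the independent-samples t test with hypothetical
# counts; Levene's test chooses between the equal/unequal variance forms.
import numpy as np
from scipy.stats import levene, ttest_ind

rng = np.random.default_rng(2)
citizens = rng.poisson(6.0, size=56)      # hypothetical behavior counts
non_citizens = rng.poisson(4.5, size=40)

lev_stat, lev_p = levene(citizens, non_citizens)
equal_var = lev_p > 0.05                  # assume equal variances if Levene is n.s.
t, p = ttest_ind(citizens, non_citizens, equal_var=equal_var)
print(f"Levene p = {lev_p:.2f}; t = {t:.2f}, p = {p:.2f}")
```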


RESULTS

Research Question 1: Is there a relationship between self-concept and behavior?
Self-concept. Bracken's Multidimensional Self-Concept Scale (MSCS) was used, as it deals with the global and domain-specific self-concepts discussed in the literature. The six MSCS context-dependent domains, social, competence, affection, academic, family and physical, are represented across the vast number of existing self-concept scales and the self-concept and psychosocial adjustment literature. The ranges of scores for self-concept were as follows: above 135 represented extremely positive self-concept; 126-135 very positive self-concept; 116-125 moderately positive self-concept; 86-115 average self-concept; 76-85 moderately negative self-concept; 66-75 very negative self-concept; and below 66 extremely negative self-concept.

Behavior. The Behavior Record Sheet was used to observe adolescents' acting-out behavior in class. The variable behavior has five categories, namely attention seeking, limit testing, antagonistic/hurtful, violent and aggressive, and hyperactive. The maximum possible occurrence of a behavior within the period of observation for each category was 80. The behavior scores were put into ranges: below 8 represented extremely well behaved; 8 to 16 very well behaved; 17 to 25 average behavior; 26 to 35 moderately poorly behaved; 35 to 45 very poorly behaved; and 46 to 80 extremely poorly behaved.

Research question 1. In statistically analyzing Research Question 1, Spearman's two-tailed correlation test was used. The result in Table 5 shows one statistically significant relationship between self-concept and behavior, specifically between affection and antagonistic/hurtful behavior at the 0.05 level of significance.

Table 5: Self-concept and Behavior (Spearman's rho)

                             Attention  Limit    Antagonistic/  Violent &   Hyperactive
                             Seeking    Testing  Hurtful        Aggressive
social      rho                -.09       .01      .11            .05         .08
            Sig. (2-tailed)     .41       .89      .29            .66         .46
competence  rho                -.09      -.01      .18            .16        -.02
            Sig. (2-tailed)     .34       .92      .08            .12         .82
affection   rho                 .01       .06      .21*           .08         .14
            Sig. (2-tailed)     .96       .56      .04            .42         .18
academic    rho                 .01       .04      .15            .04         .19
            Sig. (2-tailed)     .89       .69      .15            .71         .06
family      rho                -.09      -.01      .09           -.01         .06
            Sig. (2-tailed)     .34       .92      .38            .91         .59
physical    rho                 .00       .13      .13            .12         .06
            Sig. (2-tailed)     .99       .22      .22            .24         .55

Note: Coefficients are Spearman's rho; * significant at the .05 level (2-tailed).

Although significant, the correlation coefficient for this relationship is only .21, indicating (since .21² ≈ .044) that only 4.4% of the variation in affection is shared with the variation in antagonistic/hurtful behavior. This is a weak positive correlation between affection (self-concept) and antagonistic/hurtful behavior, minimal in terms of practical significance. None of the 29 other correlations between self-concept and behavior was significant at the 0.05 level. The results therefore do not support an overall relationship between self-concept and behavior, and Null Hypothesis 1, which states that there is no relationship between self-concept and behavior, is retained.

Research Question 2: Is there a statistically significant self-concept mean rank difference between citizen and immigrant students?
Self-concept by citizenship status. Two groups of adolescents completed the self-concept test: citizens (n1 = 56) and non-citizens (n2 = 40). Table 6 presents the distribution of citizenship-status frequencies and percentages. Under the social domain of self-concept, Table 6 shows citizens' self-concept ranging from extremely negative to average, while non-citizens reflected only negative self-concept. Competence, affection and academic all show a range from extremely negative to average self-concept for both groups, citizens and non-citizens, with a high percentage of negative self-concept when compared to average self-concept. In the family domain, citizens and non-citizens ranged from negative to average self-concept, with a high percentage of average self-concept. In the physical domain, citizens and non-citizens ranged from negative to average self-concept, with a high percentage of average self-concept among citizens, while the non-citizens group showed a high percentage of negative self-concept.

Research question 2. In statistically analyzing Research Question 2, the Mann­Whitney Test was used. Each domain score for self­concept (social, competence, affection, academic, family and physical) was tested. Table 8 provides a distribution of the test statistics where some of these variables are significant.

Table 8: Mann-Whitney Test Statistics

                          Social   Competence   Affection   Academic   Family   Physical
Mann-Whitney U            742.50   927.00       841.00      1077.50    814.00   895.50
Citizen Mean Rank          55.24    51.95        53.48        49.26     53.96    52.51
Immigrant Mean Rank        39.06    43.68        41.52        47.44     40.85    42.89
Asymp. Sig. (2-tailed)       .01      .15          .04          .75       .02      .09

According to Table 8, the social, affection and family domains show a significant self-concept mean rank difference between citizenship statuses, while the academic, physical and competence domains show no significant difference. The mean rank for citizens on the social domain was greater than the mean rank of non-citizens on the same domain; the same held for the family domain and for the affection domain. Since exactly half of the six self-concept domains are significant, Research Question 2 cannot be definitively answered statistically except at the individual domain level.

Research Question 3: Is there a statistically significant behaviour mean difference between citizen and immigrant students?
Behaviour by citizenship status. In all five categories of behaviour, both groups (citizens and non-citizens) obtained very high frequencies and percentages in the extremely well behaved classification. The hyperactive category had the highest percentage of extremely well behaved, followed by violent and aggressive, antagonistic/hurtful, limit testing, and attention seeking. Results further showed that the citizens group obtained the higher means (more observed misbehaviour) when compared to the non-citizens group. The means alone for each category appear to show some differences; however, the inferential statistical result of group mean differences, adjusted for the standard deviations in each behaviour category, provides the correct interpretation.

Research question 3. Table 11 shows the independent-samples t test that was used to statistically analyze Research Question 3. The t-test p-values for all behaviors in Table 11 are > .05. Since none of the behavior categories is significant, it is necessary to retain Null Hypothesis 3, which states that there is no citizenship-status difference in behavior.


Table 11: Independent-Samples t Test

                                      Levene's Test   t-test for Equality of Means
                                      F      Sig.     t     df     Sig.        Mean   Std. Error  95% CI     95% CI
                                                                   (2-tailed)  Diff.  Diff.       Lower      Upper
Attention Seeking
  (equal variances assumed)            .85    .36     .78   94.00  .44         1.04   1.33        -1.60       3.67
Limit Testing
  (equal variances not assumed)      11.14    .00    1.67   88.80  .10         2.02   1.21         -.38       4.42
Antagonistic/Hurtful
  (equal variances assumed)            .38    .54    1.17   94.00  .25         1.60   1.37        -1.12       4.32
Violent and Aggressive
  (equal variances not assumed)       7.36    .01    1.79   93.96  .08         2.13   1.19         -.24       4.50
Hyperactive
  (equal variances assumed)            .63    .43     .71   94.00  .48          .86   1.21        -1.54       3.27

Note: Levene's Test is for Equality of Variances; the 95% confidence interval is of the mean difference.

DISCUSSION

Null Hypothesis 1: There is no relationship between self-concept and behaviour.
According to the results in chapter 4, Null Hypothesis 1, which states that there is no relationship between self-concept and behaviour, is retained. In the literature review, the Multidimensional Self-concept Scale (MSCS) model viewed self-concept as being acquired according to behavioural principles (Bracken, 1996). Children's behaviours are shaped by their failures and successes, by others' reactions to their actions, and by how others model behaviours and communicate expectations as children act on or within their environments. The table of descriptive statistics on self-concept (see Table 1 in chapter 4) revealed that most adolescents' self-concepts fell in the negative range, while the table of descriptive statistics on behaviour (see Table 3 in chapter 4) revealed that most adolescents demonstrated good behaviour. Possible reasons for the overwhelmingly positive behaviour results could be as follows: there was no large number of students who experienced consistent and frequent behavioural problems, or only a small number of students experienced behaviour problems in just one of the five areas observed under acting-out behaviour. It is also possible that students were informed by their classroom teachers that they were under observation and were asked to exhibit good behaviour for that period of time, or that students observed their class teachers recording their behaviours, became wary, and behaved well for the period of observation. However, in support of Bracken's Multidimensional Self-concept Model, the social, affection and family domains were all statistically significant, with the family domain in the average category.

Wong-Reiger (1984), on the other hand, proposed a model of cross-cultural adjustment presenting three processes that are activated by the immigrant's encounter with a new culture: learning the new cultural norms, learning new behaviours consistent with these norms, and adjusting the self-concept to these new behaviours and norms (Ullman & Tatar, 2001). While the results in chapter 4 yielded good behaviour and negative self-concept, Wong-Reiger's (1984) model can help explain this result: while the adolescents probably learnt the new cultural norms and the behaviours consistent with those norms, they probably experienced difficulty in adjusting their self-concept to these new behaviours and norms. Wong-Reiger (1984) further submitted that the change in self-concept includes a redefinition of central aspects of the self such as ethnic identity, values, and perception of competence (Ullman & Tatar, 2001).

Null Hypothesis 2: There is no statistically significant mean rank difference between citizenship status and self-concept.
The results in chapter 4 revealed a significant self-concept mean rank difference between citizenship statuses on the social, affection and family domains (see Table 8). At the individual domain level, the mean ranks for citizens on the social, affection and family domains were greater than those of non-citizens on the same domains. Some possible reasons for this can be drawn from Jackson and Bracken's (1998) study of the relationship between students' social status and global and domain-specific self-concepts, which found that at the social impact level children interact and share with each other, and their social acceptability is the foundation of their social acceptance among students (Coie, Dodge, & Coppotelli, 1982; Rubin, Hymel, LeMarc, & Rowden, 1989). The same can be suggested for the citizens group of adolescents, whose mean rank was greater than that of non-citizens on the social domain. Since citizens are used to their environment, and since Montserrat is a small society in which citizens are likely to be familiar with and readily accept one another, non-citizens, who must become familiar with their new environment and accustomed to its norms and behaviour, are more likely to encounter lower social acceptability from their peers.

The literature further supports these results. Ullman and Tatar (2001) conducted research on psychological adjustment among Israeli adolescent immigrants and examined the relationships between the process of adjusting to immigration and two psychological constructs particularly important during adolescence: self-concept and self-esteem. Results from their study showed that immigrant adolescents exhibited various direct and indirect expressions of distress and expressed less satisfaction with their lives relative to their classmates (Ullman & Tatar, 2001). In one study, two-thirds of immigrant youth reported fatigue, less involvement in school, withdrawal, and higher rates of absence from school, as well as fears of the future (Tatar, 1998). Non-citizen adolescents in the present study may likewise have shown a lower mean rank in self-concept compared to citizens, since the immigration process for non-citizens may include uprooting against the adolescent's wishes, loss of previous support systems, changes in the perception of parents, and the demands of absorption into the new culture (Tatar & Horenczyk, cited in Ullman & Tatar, 2001).

Null Hypothesis 3: There is no statistically significant behaviour mean difference between citizenship status.
The results in chapter 4 indicated that there is no statistically significant behaviour mean difference between citizenship statuses, since none of the behaviour categories was significant. The results in Table 10 (see chapter 4) showed citizens obtaining a higher mean compared to non-citizens in these categories. Here again, the literature supports possible reasons for the non-citizens' means to fall below those of citizens, since non-citizens are expected to make the adjustments to their behaviours necessary to fit into their new environment. Immigrant students interact with the dominant society and become aware of different ways of behaving and different expectations (Utley, Kozleski, & Smith, 2002); and they may change their thinking individually or collectively (Shade, 1997).

CONCLUSION
The findings that there is no relationship between self-concept and behaviour, and no behaviour mean difference by citizenship status, are pertinent at a time when immigration has become a major concern in many industrialized countries, such as the United States of America, Canada and the United Kingdom. The Caribbean is no exception, Montserrat among them: a small British Dependent Caribbean island still in the throes of a volcanic eruption spanning 12 years, during which two-thirds of its population emigrated to the United Kingdom, the United States of America and other parts of the Caribbean. Following this mass migration, other Caribbean nationals have immigrated to the island for differing reasons, and as a result the schools are filled with many immigrant (non-citizen) students. A recent review of the Montserrat Secondary School reported that parents believed teaching is frequently affected by students' acting-out behaviours, including skipping classes, interrupting classes in progress, and violent and aggressive behaviour. The review further noted several angry students who resorted readily to violence, and cited ethnic tension between Montserratians and immigrants as a contributory factor (Review of Montserrat Secondary School, 2007).

Given the alleged behaviour problems within the school context, the findings of this study are pertinent and relevant to the issues presented, and provide educators and managers in the education department, parents and other stakeholders with scientific information on students' acting-out behaviours, their self-concepts (how they perceive themselves), and citizenship status and its relationship to self-concept and behaviour. The findings show that both groups of students (citizens and non-citizens) exhibited good behaviour in school, and that their behaviour has no relationship with their self-concept. They further reveal that both groups of students exhibited negative self-concepts on the six domains, namely social, competence, affection, academic, family and physical. Finally, citizens were found to obtain a higher self-concept mean rank than non-citizens.

RECOMMENDATION
Based on the findings of this study, it is recommended that educators and managers in the education department and other stakeholders re-evaluate the parts of the school curriculum that deal with building students' self-concepts, and make adequate provision to rebuild students' self-concepts in all six domains. With regard to the family domain, it is also necessary for educators, managers in the education department and other stakeholders to educate parents and provide them with the training necessary to complement the schools' efforts in rebuilding students' self-concepts. It is further recommended that educators and other stakeholders ensure all schools keep records of their school population's distribution by students' citizenship status. With this information, educators and other stakeholders should be able to recognize the need for the following recommendation, which deals with sociocultural theory and multicultural perspectives. Since it is evident from the research population that there are many immigrant students in the schools, educators should take immigrant students' past experiences in their native countries into consideration and make the accommodations necessary to facilitate learning in their school environment. The implications of multicultural perspectives for educators are that social behaviours are influenced by culture, and that learning and social interactions are inextricably connected and inseparable from cognition. Based on the findings in relation to behaviour and self-concept, it is recommended that further research be conducted in this field, since no significant relationship was found in this study even though much existing literature supports a positive relationship between these two variables.

REFERENCES
Atkinson, D. R., Whiteley, S., & Gim, R. H. (1990). Asian-American acculturation and preferences for help providers. Journal of College Student Development, 31, 155-161.
Bear, G. G., Minke, K. M., Griffin, S. M., & Deemer, S. A. (1997). Self-concept. In G. G. Bear, K. M. Minke & A. Thomas (Eds.), Children's needs II: Development, problems, and alternatives, 257-270.
Birman, D., & Trickett, E. J. (2001). Cultural transitions in first-generation immigrants: Acculturation of Soviet Jewish refugee adolescents and parents. Journal of Cross-Cultural Psychology, 456-477.
Birman, D., Trickett, E. J., & Buchanan, R. M. (2005). A tale of two cities: Replication of a study on the acculturation and adaptation of immigrant adolescents from the former Soviet Union in a different community context. American Journal of Community Psychology, 83-90.
Bracken, B. A. (1992). Multidimensional Self-Concept Scale. Austin, TX: Pro-Ed.
Bracken, B. A. (1993). Assessment of interpersonal relations. Austin, TX: Pro-Ed.
Bracken, B. A. (1996). Clinical applications of a context-dependent, multidimensional model of self-concept. In B. A. Bracken (Ed.), Handbook of self-concept: Developmental, social, and clinical considerations (pp. 463-503). New York: Wiley.
Bracken, B. A., Bunch, S., Keith, T. Z., & Keith, P. B. (1992, August). Multidimensional self-concept: A five instrument factor analysis. Paper presented at the meeting of the American Psychological Association, Washington, DC.
Bracken, B. A., & Mills, B. C. (1994). School counselors' assessment of self-concept: A comprehensive review of 10 instruments. The School Counselor, 14-31.
Cartledge, G., & Feng, H. (1996). The relationship of culture and social behavior. In G. Cartledge & J. F. Milburn (Eds.), Cultural diversity and social skills instruction: Understanding ethnic and gender differences (pp. 13-44). Champaign, IL: Research Press.
Cheney, D., Blum, C., & Walker, B. (2004). An analysis of leadership teams' perceptions of positive behaviour support and outcomes of typically developing at-risk students in their schools: Assessment for effective intervention.
Chiu, Y.-W., & Ring, J. M. (1998). Chinese and Vietnamese immigrant adolescents under pressure: Identifying stressors and interventions. Professional Psychology: Research and Practice, 444-449.
Coie, J. D., & Dodge, K. A. (1988). Multiple sources of data on social behavior and social status in the school: A cross-age comparison. Child Development, 815-829.
Coie, J. D., Dodge, K. A., & Coppotelli, H. (1982). Dimensions and types of social status: A cross-age perspective. Developmental Psychology, 557-570.
Cook-Cottone, C., & Phelps, L. (2003). Body dissatisfaction in college women: Identification of risk and protective factors to guide college counseling. Journal of College Counselling, Spring 2003 issue.
Cooley, C. H. (1902). Human nature and the social order. New York: Scribner's.
Cooley, C. H. (1909). Social organization. New York: Scribner's.
Coughlan, R., & Owens-Manley, J. (2006). Displacement and transit: Traumatic stress in the lives of refugees. In Bosnian Refugees in America. Springer US.
Crain, R. M., & Bracken, B. A. (1994). Age, race and gender differences in child and adolescent self-concept: Evidence from a behavioral-acquisition, context-dependent model. Journal of School Psychology, 496-511.
Cross, S. E. (1995). Self-construals, coping, and stress in cross-cultural adaptation. Journal of Cross-Cultural Psychology, 673-697.
Current perspectives on learning disabilities. http://books.google.com/books?id=nJKwl0gVxhUC


Defining Globalisation. http://globalpolicy.igc.org/globaliz/define/index.htm
Dunnington, M. J. (1957). Behavioral differences of sociometric status groups in a nursery school. Child Development, 103-111.
Erikson, E. (1968). Identity: Youth and crisis. New York: Norton.
Examining the effects of environmental interchangeability with overseas students: A cross cultural comparison. http://marketing.byu.edu/htmlpages/ccrs/proceedings99/ryan.htm
Feitelson, E. (1989). The spatial effects of land use regulations: The Chesapeake Bay critical area case. Johns Hopkins University, Baltimore, MD.
Fukuhara, M. (1989). Counseling psychology in Japan. Applied Psychology: An International Review, 409-422.
Garcia, E. (1994). Understanding and meeting the challenge of student cultural diversity. Boston, MA: Houghton Mifflin.
Garcia-Coll, C., & Szalacha, A. L. (2004). The Future of Children: Educational Journal, 81-90.
Gay, G. (2000). Culturally responsive teaching: Theory, research, and practice. New York: Teachers Press.
Globalisation: Good or Bad? http://ifl.finec.ru/departments/kaf_2/kaf2_conf/kaf2_conf042008/TorlopovaZhdanova.pdf
Globalization, immigration and the welfare state: A cross-national comparison. http://findarticles.com/p/articles/mi_m0CYZ/is_/ai_n27265536
Globalisation, immigration and the welfare state: A cross-national comparison. Journal of Sociology & Social Welfare (2007).
http://blog.lib.umn.edu/cehd/insideout/TIPnewsrelease.pdf
http://en.wikipedia.org/wiki/Applied_behavior_analysis
http://www.futureofchildren.org/usr_doc/Executive_Summary.pdf
http://www.futureofchildren.org/usr_doc/Children_of_Immigrant_Families.pdf
http://goliath.ecnext.com/coms2/gi_0199-7221958/Globalization-immigration-and-the-welfare.html
http://normemma.com/armaslow.htm
http://www.penpages.psu.edu/penpages_reference/28507/285072990.HTML
Goodenow, C. (1993). The psychological sense of school membership among adolescents: Scale development and educational correlates. Psychology in the Schools, 79-90.
Hattie, J. A. (1992). Self-concept. Hillsdale, NJ: Erlbaum.
Hernandez, M., & McGoldrick, M. (1999). Migration and the family life cycle. In B. Carter & M. McGoldrick (Eds.), The expanded family life cycle: Individual, family, and social perspectives (3rd ed., pp. 169-184). Boston: Allyn & Bacon.
Hinds, B. J. (2005). The impact of soft drink consumption on student classroom behavior. Unpublished manuscript, University of the Virgin Islands.
Homma-True, R. (1997). Japanese American families. In E. Lee (Ed.), Working with Asian Americans: A guide for clinicians (pp. 114-124). New York: Guilford Press.
Huang, L. N. (1997). Asian American adolescents. In E. Lee (Ed.), Working with Asian Americans: A guide for clinicians (pp. 175-195). New York: Guilford Press.
Immigrant Families and U.S. Schools. Theory Into Practice, Winter 2008, Vol. 47, No. 1. http://blog.lib.umn.edu/cehd/insideout/TIPnewsrelease.pdf
Issacs, M. R. (1986). Developing mental health programs for minority youth and their families. Washington, DC: Georgetown University Child Development Center.
Jackson, L. D., & Bracken, B. A. (1998). Relationship between students' social status and global and domain-specific self-concepts. Journal of School Psychology, 233-246.
James, D. C. S. (1997). Coping with a new society: The unique psychosocial problems of immigrant youth. Journal of School Health, 98-102.
Johnston, J. M., & Pennypacker, H. S. (1993b). Readings for Strategies and tactics of behavioral research (2nd ed.). Hillsdale, NJ: Erlbaum.
Keith, L. C., & Bracken, B. A. (1995, March). Confirmatory factor analysis of a multidimensional model of self-concept: An examination of construct validity. Paper presented at the National Association of School Psychologists' annual conference, Chicago, IL.
Keith, L. K., & Bracken, B. A. (1994, March). Confirmatory factor analysis of a multidimensional model of self-concept: An examination of construct validity. Paper presented at the meeting of the National Association of School Psychologists' annual conference, Chicago, IL.


Kim, S. C. (1997). Korean American families. In E. Lee (Ed.), Working with Asian Americans: A guide for clinicians (pp. 125-135). New York: Guilford Press.
Kitayama, S., Matsumoto, H., Markus, H. R., & Norasakkunkit, V. (1997). Individual and collective processes in the construction of the self. Journal of Personality and Social Psychology, 1245-1267.
Kuhn, M. K., & McPartland, S. (1954). An empirical investigation of self attitudes. American Sociological Review, 68-76.
LaFromboise, T., Coleman, H., & Gerton, J. (1993). Non-instructional influences on high school student achievement: The contributions of parents, peers, extracurricular activities, and part-time work. Office of Educational Research and Improvement, Washington, DC.
Lee, L. C., & Zhan, G. (1998). Psychosocial status of children and youths. In L. C. Lee & N. W. S. Zane (Eds.), Handbook of Asian American psychology (pp. 137-163). Thousand Oaks, CA: Sage.
Lynch, E. W. (1992). From culture shock to cultural learning. In E. W. Lynch & M. J. Hanson (Eds.), Developing cross-cultural competence: A guide for working with young children and their families (pp. 19-34). Baltimore, MD: Brookes Publishing.
Marcia, J. (1980). Identity in adolescence. In J. Adelson (Ed.), Handbook of Adolescent Psychology (pp. 159-287). New York: Wiley.
Maslow, A. H. (1970). Motivation and personality (2nd ed.). New York: Harper & Row.
Montgomery, M. S. (1994). Self-concept and children with learning disabilities: Observer-child concordance across six context-dependent domains. Journal of Learning Disabilities, 254-262.
Munroe-Blum, H., Boyle, M. H., Offord, D. R., & Kates, N. (1989). Immigrant children: Psychiatric disorder, school performance, and service utilization. American Journal of Orthopsychiatry, 510-519.
Oberg, K. (1960). Culture shock: Adjustments to new cultural environments. Practical Anthropology, 177-182.
Ogbu, J. U. (1991). Low school performance as an adaptation: The case of Blacks in Stockton, CA. In M. A. Gibson & J. U. Ogbu (Eds.), Minority status and schooling: A comparative study of immigrant and involuntary minorities (pp. 249-286). New York: Garland.
Olah, A. (1995). Coping strategies among adolescents: A cross-cultural study. Journal of Adolescents, 491-512.
Ollendick, T. H., Weist, M. D., Borden, M. C., & Greene, R. W. (1992). Sociometric status and academic, behavioral and psychological adjustment: A five year longitudinal study. Journal of Consulting and Clinical Psychology, 80-87.
Perry, J. C. (1979). Popular, amiable, isolated, rejected: A reconceptualization of sociometric status in preschool children. Child Development, 1231-1234.
Phelan, P., Yu, H. C., & Davidson, A. (1994). Navigating the psychosocial pressure of adolescence: The voices and experiences of high school youth. American Educational Research Journal, 415-447.
Phinney, J. S. (1989). Stages of ethnic identity development in minority group adolescents. Journal of Early Adolescence, 34-39.
Phinney, J. S. (1990). Ethnic identity in adolescents and adults: Review of research. Psychological Bulletin, 499-514.
Rosenberg, M. (1979). Conceiving the self. New York: Basic Books.
Rotatori, A. F. (1994). Test review: Multidimensional Self-Concept Scale. Measurement and Evaluation in Counseling and Development, 265-268.
Rubin, K. H., Hymel, S., LeMare, L., & Rowden, L. (1989). Children experiencing social difficulties: Sociometric neglect reconsidered. Canadian Journal of Behavioural Science, 94-111.
Schicke, M. C., & Fagan, T. K. (1994). Contributions of self-concept and intelligence to the prediction of academic achievement among fourth, sixth and eighth grade students. Canadian Journal of School Psychology, 62-69.
Seidman, E., Aber, J., Allen, L., & French, S. E. (1996). The impact of the transition to high school on the self-esteem and perceived social context of poor urban youth. American Journal of Community Psychology, 445-461.
Shade, B. J. (1997). Culture, style and the educative process: Making schools work for racially diverse students. Springfield, IL: Charles C. Thomas.
Shields, M. K., & Behrman, R. E. (2004). Children of immigrant families: Analysis and recommendations. The Future of Children, 2004.
Singelis, T. M., Bond, M. H., Sharkey, W. F., & Lai, C. S. Y. (1999). Unpacking culture's influence on self-esteem and embarrassability. Journal of Cross-Cultural Psychology, 315-341.
Siris, K., & Osterman, K. (2004). Interrupting the cycle of bullying and victimization in the elementary classroom. Phi Delta Kappan, 86, 288.
Suarez-Orozco, C., & Suarez-Orozco, M. M. (1995). Transformations: Migration, family life and achievement motivation among Latino adolescents. Stanford, CA: Stanford University Press.
Suarez-Orozco, C., & Suarez-Orozco, M. M. (2001). Children of immigration. Cambridge, MA: Harvard University Press.
Sue, D. W., & Sue, D. (1999). Counseling the culturally different: Theory and practice (3rd ed.). New York: Wiley.
Sullivan, H. S. (1953). The interpersonal theory of psychiatry. New York: Norton.


Tatar, M. (1998). Counseling immigrants: School contexts and emerging strategies. British Journal of Guidance and Counselling, 337-352.
Turner, R. M. (1976). The real self: From institution to impulse. American Journal of Sociology, 989-1016.
Uba, L. (1994). Asian Americans: Personality patterns, identity, and mental health. New York: Guilford Press.
Ullman, C., & Tatar, M. (2001). Psychological adjustment among Israeli adolescent immigrants: A report on life satisfaction, self-concept, and self-esteem. Journal of Youth and Adolescence, p. 449.
UN Department of Economic and Social Affairs (2006). International Migration 2006. New York: United Nations Publication. Retrieved February 14, 2008, from http://www.un.org/esa/population/publications/2006Migration_Chart/Migration2006
Utley, C. A., Kozleski, E., Smith, A., & Draper, I. L. (2002). Positive behavior support: A proactive strategy for minimizing behavior problems in urban multicultural youth. Journal of Positive Behavior Interventions, 196-205.
Vinokurov, A., Trickett, E. J., & Birman, D. (2002). Acculturative hassles and immigrant adolescents: A life domain assessment for Soviet Jewish refugees. Journal of Social Psychology, 425-445.
Willis, W. G. (1995). Review of the Multidimensional Self-Concept Scale. In J. C. Conoley & J. C. Impara (Eds.), The twelfth mental measurements yearbook (pp. 649-650). Lincoln: University of Nebraska Press.
Wong-Reiger, D. (1984). Testing a model of emotional and coping responses to problems in adaptation: Foreign students at a Canadian university. Journal of International Relations, 153-184.
Wylie, R. C. (1979). The self-concept: Theory and research on selected topics, Vol. 2. Lincoln: University of Nebraska Press.
Wylie, R. C. (1989). Measures of self-concept. Lincoln: University of Nebraska Press.
Xu, Q. (2007). Globalization, immigration and the welfare state: A cross-national comparison. Journal of Sociology & Social Welfare, 89-95.
Yeh, C. J., & Hwang, M. (2000). Interdependence in ethnic identity and self: Implications for theory and practice. Journal of Counseling and Development, 420-429.
Yeh, C. J., & Wang, Y. W. (2000). Asian American coping attitudes, sources, and practices: Implications for indigenous counseling strategies. Journal of College Student Development, 94-103.
Yeh, C., & Inose, M. (2002). Difficulties and coping strategies of Chinese, Japanese, and Korean immigrant students. Adolescence, Spring 2002 issue.


UNDERGRADUATES' SELECTION TOWARDS ISLAMIC BANKING: HOW DOES GENDER AFFECT THEIR SELECTION

Norzamri bin Ishak1, Mohd Rizuan Abd Kadir2, Khairul Nizam Surbaini3 and Juliana Anis Bte. Ramli4 Multimedia University1, Malaysia and Universiti Tenaga Nasional2, 3, 4, Malaysia

ABSTRACT
Islamic banking is no longer regarded as banking that merely fulfils Islamic needs, or as secondary banking; rather, Islamic banks now compete to be customers' primary banks. Planning an appropriate marketing strategy to attract new customers is therefore very important, especially for Islamic banks in a dual banking environment, and it is vital for Islamic banks to identify the selection criteria through which they can meet their customers' needs. This study examines the bank selection criteria employed by undergraduates, since they are potential Islamic banking customers. The purpose of this paper is to examine the main factors that influence undergraduates in selecting their preferred bank, and how the gender and religion of these groups affect their selection criteria in a dual banking environment. The study presents primary data collected by self-administered questionnaires from a sample of 250 undergraduates at UNITEN. The criteria are analyzed using factor analysis with varimax rotation, to cluster the criteria into several variables. The results show that the religious factor and bank appearance played significant roles in the selection process for undergraduates. We also found the encouraging result that the religious factor is among the important criteria chosen by Muslim undergraduates in selecting their bank.

Keywords: Bank Selection Decision, Islamic Bank, Syariah Compliant Products and Services.

INTRODUCTION
In the Quran, Surah Al-Baqarah, verse 275 states in part that '…but Allah has permitted trade but forbidden riba (interest)….' In Islam, riba (interest) is strictly prohibited, and this principle must be applied in every aspect of Muslim life. Gerrard and Cunningham (1997) further state that Islamic financing is based upon the principle that the use of interest is prohibited. Because of this prohibition, Muslims cannot receive or pay interest, and thus are unable to conduct business with conventional banks. According to Jaffe (2002), to serve the Muslim market, Islamic financial institutions have developed a range of halal, interest-free financing instruments that conform to Shariah rulings and are therefore acceptable to their clients.

According to Saeed (1996), the emergence of Islamic financial institutions began with the establishment of the Islamic Development Bank (IDB) in 1973. The IDB aimed to foster the economic development and social progress of Muslim countries, and its founding was the first major collective step taken by Muslim countries to promote an Islamic financial system. As a Muslim country, and in fulfilment of religious needs, Malaysia also initiated Islamic financing to meet Malaysian needs. According to Aziz (2008), Malaysia has developed, over more than two decades, a comprehensive Islamic financial system that operates in parallel with the conventional financial system.

In Malaysia, the establishment of Bank Islam Malaysia Berhad (BIMB) as the first Islamic bank in 1983 opened a new path for the Malaysian financial system, especially in banking, towards the introduction of new products and services based on the al-Quran and Sunnah (Rizuan et al., 2008). Since then, the Malaysian Islamic financial system has grown tremendously. According to Zamani (2007), the Malaysian Islamic banking system showed strong performance in 2006, with higher profitability, and has remained well capitalized. In 2006, Islamic banking assets reached about USD34 billion, or 13 percent in terms of market share.

The same conclusion was drawn by Haron and Wan Azmi (2006), who claimed that the Islamic banking sector showed tremendous growth of 19% per annum from 2000 to 2004. This shows that Islamic products and services have made a strong impact on the Malaysian people. The awareness given by Muslims to Islamic banking makes it all the more significant. Today, Islamic banking is no longer regarded as a business entity striving only to fulfill the religious obligations of the Muslim community; rather, Islamic banking has become one of the most important players in the service industry (Wilson, 1995).


As an important player in the finance industry, it is very important for Islamic institutions to win over customers. Dusuki and Abdullah (2007) claimed that this necessitates Islamic banks really understanding their customers' perceptions of them in terms of service quality, in order to secure customers' allegiance. Islamic banking products and services should therefore continue to strengthen their position in the market and be able to compete with conventional banking instruments, especially in a dual banking environment.

Bank selection criteria have been given substantial attention by many researchers (for example: Evans (1979), Ross (1989), Hegazy (1995), Almossawi (2001), Dusuki and Abdullah (2007), Rizuan et al. (2008)). According to Almossawi (2001), with growing competitiveness in the banking industry and the services offered by banks, it has become increasingly important for banks to identify the factors that determine the basis upon which customers choose between providers of financial services.

This paper is designed to identify the bank selection criteria of undergraduates in a dual banking environment. Specifically, the study was designed to determine how gender and religion affect selection criteria in such an environment. The findings of this research can hopefully help determine the future direction banks take in providing their services.

Almossawi (2001) argued that undergraduates are crucial because they constitute a sizeable market segment, tend to be good savers, and are potential bank customers who will need a bank when they complete their education.

This study is divided into five sections. Section one is an introduction to the study. Section two discusses findings of previous research relevant to Islamic banking selection and perceptions. Section three describes the methodology used in our research. Section four presents the study findings, and section five presents our conclusions and suggestions for future research.

LITERATURE REVIEW
Haniffa and Hudaib (2007) describe Islamic banking as a system of banking which is consistent with the principles of Islamic law (Shari'ah Islami'iah). The Shari'ah governs every aspect of a Muslim's life, viz. spiritual, economic, political and social, and faithful execution of duties and obligations based on the Shari'ah is recognised as a form of worship. The Shari'ah is concerned with promoting justice and welfare in society (al-adl and al-ihsan) and seeking God's blessings (barakah), with the ultimate aim of achieving success in this world and the hereafter (al-falah).

According to Jaffe (2002), Islamic finance was designed to meet Muslim needs and strictly follows the Islamic principle of halal, interest-free financing. A number of Islamic financial products are available; the most widely recognized are profit-and-loss sharing agreements. Metwally (1994) identified three significant instruments distinguishing Islamic finance from conventional finance: Musharaka (partnership), Mudarabah or Quiradh (investment with no participation in management) and Murabaha (resale contract). The financier of a venture is known as the Rabb-ul-mal, and the entrepreneur responsible for the management and execution of the project is referred to as the Mudarib. The parties achieve their returns by sharing in the profits of the venture, which are divided on a proportional basis (Hussain et al., 2006).

Under a Mudaraba agreement, the parties must decide on a ratio for sharing the profits prior to the commencement of the business activity. After the business is completed, the financier receives the principal and the pre-agreed share of the profit (Metwally, 1994; Gafoor, 1996; Usmani, 1998). Musharaka is a joint partnership formed for conducting business in which all partners share the profit according to a specific ratio, while any loss is shared according to the ratio of capital contribution (Lewis and Algaoud, 2001; Metwally, 1993; Usmani, 1998; Haron et al., 1994). Murabaha is financing in which the bank purchases certain commodities for a client, and the client promises to buy the goods from the bank on a pre-agreed profit basis (Metwally, 1994).
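To make the two sharing rules concrete, here is a small worked example with invented figures (not drawn from the paper): profit is split by a pre-agreed ratio, while loss follows the ratio of capital contribution.

```python
# Worked example with invented figures: a pre-agreed 60/40 profit split
# (Mudaraba-style) and loss shared by capital contribution (Musharaka rule).
financier_capital = 80_000      # Rabb-ul-mal's contribution
partner_capital = 20_000        # partner's contribution (Musharaka case)
financier_profit_ratio = 0.6    # agreed before the venture starts

profit = 10_000
print("Financier profit share:", profit * financier_profit_ratio)        # 6000.0
print("Partner profit share:", profit * (1 - financier_profit_ratio))    # 4000.0

loss = 5_000
total_capital = financier_capital + partner_capital
print("Financier loss share:", loss * financier_capital / total_capital) # 4000.0
print("Partner loss share:", loss * partner_capital / total_capital)     # 1000.0
```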

There is a substantial literature on customers' preferences regarding financial products and services criteria. Metawa and Almossawi (1998) claimed that customers' preferences regarding banking criteria have been heavily investigated over the past two decades. Among the areas studied and attributes found are availability of credit, relatives' advice and recommendations, friends' advice and recommendations, convenient location, variety of bank services, quality of services, availability of ATMs, adequate banking hours, return on investment, friendliness of personnel, understanding of financial needs, special services for women, and bank name.

According to Erol et al. (1990), bank customers did not differentiate between the services offered by conventional banks and Islamic banks. However, Metawa and Almossawi (1998) revealed that adherence to Islamic tenets is the main motivating factor for customers to prefer Islamic banks in Bahrain. Their results also indicate that bank employees and bank equipment play an important role in customers' preferences. Naser et al. (1999), meanwhile, stressed that a large majority of customers were satisfied with the Islamic bank's name and image and with the bank's ability to provide confidentiality. Their findings also indicate that a large majority of respondents patronize Islamic finance because of its reputation. The study by Haron et al. (1994) showed that Muslims and non-Muslims who preferred commercial banks had a common perception in selecting their banks. This means that an Islamic bank should not rely on the religion factor as a strategy in its effort to attract more customers.

Erol and El-Bdour (1989) found that interpersonal contact and individual effort played an important role in attracting individuals to use financing services; religious motivation did not appear to be a primary criterion. Erol, Kaynak and El-Bdour (1990) point out that customers rely heavily on criteria such as the bank's reputation and image and the confidentiality of the bank when choosing a bank. Hegazy (1995) studied bank selection criteria for both Islamic banks and commercial banks, and concluded that the most important attribute for Islamic banks was the advice and recommendations made by relatives and friends. Dusuki and Abdullah (2007) found that customer satisfaction often depends on the quality of services provided by Islamic banks.

Rizuan et al. (2008) argued that the criteria used for bank selection should be based on the respondents' profile. Since our respondents are undergraduates, we believed we needed to detail the selected criteria. Based on the literature, we identified twenty-seven criteria for banking selection, including three criteria that specifically mention Islamic principles. By separating out the Islamic-principle criteria, we hope to get a true picture of undergraduates' selections, for both Muslims and non-Muslims, on these criteria.

According to the Al-Islam organization website (2009), there are psychologically significant differences between males and females. Males have a greater preference than females for physical exercise, hunting and tasks involving movement. The sentiments of men are challenging and war-like, while the sentiments of women are peaceable and convivial. Men are more aggressive and quarrelsome; women are quieter and calmer. A woman refrains from taking drastic action, both with regard to others and with regard to herself, and this is given as the reason for the smaller number of suicides among women than among men. In this study, we want to see how the differences stated affect selection criteria.

METHODOLOGY
A sample of 250 UNITEN undergraduates from the College of Business and Accounting (COBA) was chosen using a random sampling technique. We chose UNITEN and COBA because of their higher response rates. Moreover, we observed that there are fair populations of male and female undergraduates, and of Muslim and non-Muslim undergraduates, at this college, and that they come from all over Malaysia. In addition, these undergraduates are business students, so we expected them to have a general understanding of the questions asked. By choosing these undergraduates, we hoped to obtain results closer to those for the whole population. Of the 250 questionnaires distributed, 72 were returned by male undergraduates and 163 by female undergraduates.

Data were collected using questionnaires designed on the basis of the literature review. All of the criteria were adapted from previous studies, including three specific Islamic-principle criteria. We identified twenty-seven criteria to be used; in our judgment, based on the literature review, these were the most suitable criteria. Other researchers who studied undergraduates used similar numbers: Gerrard and Cunningham (2001) used 20 criteria in their study of Singapore's undergraduates, and Almossawi (2001) used 30 criteria in his study of college students in Bahrain.

We used exploratory factor analysis with varimax rotation to analyze the data. This method has been commonly used by other researchers, such as Rizuan et al. (2008), Haron et al. (1994), Gerrard and Cunningham (1997, 2001), Almossawi (2001), Dusuki and Abdullah (2007), and Mohd Dali et al. (2008). Factor analysis was used to cluster the criteria and consequently group the independent variables into a smaller number of factors. Mohd Dali et al. (2008) claimed that the main applications of factor analytic techniques are (1) to reduce the number of variables and (2) to detect structure in the relationships between variables, that is, to classify variables. Factor analysis is therefore applied as a data reduction or structure detection method.
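The paper does not say which software was used; a minimal sketch of the same pipeline with the open-source factor_analyzer Python package is shown below. The file name responses.csv and its column layout (one column per Likert item) are assumptions for illustration.

```python
# Sketch of exploratory factor analysis with varimax rotation, assuming
# the 27 five-point Likert items sit one-per-column in responses.csv.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("responses.csv")          # hypothetical data file

fa = FactorAnalyzer(n_factors=8, rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(3))                      # criteria-by-factor loadings
print(fa.get_factor_variance())               # variance explained per factor
```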

The questionnaire was divided into two sections. Section A covered the demographic profile: a set of items designed to gather information about the respondents' personal, demographic and educational backgrounds. Section B comprised the twenty-seven bank selection criteria, including the three specific criteria on Islamic principles. A five-point Likert scale ranging from very important (scale 1) to not important at all (scale 5) was used to measure the Muslim and non-Muslim undergraduates' selection criteria.


FINDINGS AND DISCUSSION
The findings are based on 235 undergraduate respondents: 72 male and 163 female. Table 1 shows that the largest age group is below 22 for both genders. Fifty percent of the male respondents and 59% of the female respondents are Muslim. In terms of CGPA, 58 percent of male respondents obtained below 3.00, whereas 59% of female respondents obtained above 3.00. Fifty-seven percent of male respondents come from East Malaysia, while 48% of female respondents come from West Malaysia. The majority of male respondents hold both types of bank account (Islamic and conventional), while the majority of female respondents hold only a conventional account.

Table 1: Profile of Respondents

                              Male (N = 72)        Female (N = 163)
Age
  Below 22                    52 (72%)             125 (77%)
  Above 22                    22 (28%)             38 (23%)
Race
  Malay (Muslim)              36 (50%)             95 (59%)
  Non-Malay                   36 (50%)             67 (41%)
CGPA
  2.00 – 2.49                 24 (33%)             39 (24%)
  2.50 – 2.99                 18 (25%)             28 (17%)
  3.00 – 3.49                 13 (18%)             55 (34%)
  3.50 and above              17 (24%)             41 (25%)
Origin
  North                       5 (7%)               30 (18%)
  South                       13 (18%)             25 (15%)
  East                        41 (57%)             29 (18%)
  West                        13 (18%)             79 (48%)
Account type
  Islamic                     5 (7%)               17 (10%)
  Conventional                16 (22%)             106 (65%)
  Both                        51 (71%)             33 (20%)

We ran the exploratory factor analysis with varimax rotation on the data. Table 2 shows that the KMO Measure of Sampling Adequacy (MSA) is 0.714, which is higher than 0.6. This indicates that the data are adequate for factor analysis. Furthermore, Bartlett's Test of Sphericity is significant at the 0.00 level, which means that there are intercorrelations among the variables.

Table 2: KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy       .714
Bartlett's Test of Sphericity   Approx. Chi-Square    3502.746
                                df                    351
                                Sig.                  .000
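Both adequacy checks in Table 2 can be reproduced with the same factor_analyzer package; a sketch follows, again assuming the hypothetical responses.csv layout used above.

```python
# Sketch of the two adequacy checks reported in Table 2, assuming the
# 27 Likert items sit one-per-column in the hypothetical responses.csv.
import pandas as pd
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

items = pd.read_csv("responses.csv")            # hypothetical data file

chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_model = calculate_kmo(items)

print(f"Bartlett chi-square = {chi_square:.3f}, p = {p_value:.3f}")
print(f"Overall KMO (MSA) = {kmo_model:.3f}")   # > 0.6 supports factoring
```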

The results of the factor analysis with varimax rotation appear in Table 3. The analysis produced eight factor groups, which we titled as described below. The items grouped under Factor 1 could be called 'transaction'. Examples of criteria under this group are several bank branches, availability of ATMs in several locations, 24-hour availability of internet services, employer uses the same bank, and ease of opening an account. We named Factor 2 'religious factors'; the criteria under this grouping are providing Islamic facilities, providing a wide range of Islamic facilities, and providing Islamic bank accounts.

Factor 3 is seen as 'convenience', with the criteria adequate number of tellers, convenient ATM locations, and convenient location of the main branch. Factor 4 we named 'bank appearance', which relates to the external appearance of the bank, bank reputation, and staff appearance and attire. Factor 5 is a 'financial benefits' grouping, consisting of providing a credit card with no annual fees and high interest rates paid on savings accounts. Factor 6 relates to 'charges and confidentiality', particularly low service charges and the confidentiality of the bank. Factor 7 is named 'people influences' and relates to recommendation by relatives and recommendation by friends. Finally, Factor 8 can be titled 'personnel and product pleasant'; it relates to convenience in dealing with the bank manager and ease of obtaining loans.

The criteria listed in Table 3 include only those with factor loadings of 0.50 and above and for which the Cronbach alpha for the grouping was 0.60 and above. The eight factor groups accounted for 79.56 per cent of total variance. A few criteria were removed because they had factor loadings below 0.50: available parking space nearby, savings guaranteed by the government, a wide range of facilities, and friendliness of bank personnel.

Table 3: Factor Groups of the Bank Selection Criteria

Factor 1 (Transaction) – Cronbach alpha (0.8453)
  Several bank branches                          0.747
  Availability of ATM in several locations       0.715
  24-hour availability of internet services      0.658
  Employer uses the same bank                    0.622
  Ease of opening an account                     0.595
  Percentage of variance: 35.930
Factor 2 (Religious factor) – Cronbach alpha (0.8835)
  Providing Islamic facilities                   0.921
  Providing wide range of Islamic facilities     0.835
  Providing Islamic bank account                 0.758
  Percentage of variance: 9.990
Factor 3 (Convenience) – Cronbach alpha (0.7644)
  Adequate number of tellers                     0.724
  Convenient ATM locations                       0.536
  Convenient location of the main branch         0.526
  Percentage of variance: 8.440
Factor 4 (Bank appearances) – Cronbach alpha (0.7747)
  External appearance of the bank                0.790
  Bank reputation                                0.595
  Staff appearance and attire                    0.594
  Percentage of variance: 6.680
Factor 5 (Financial benefits) – Cronbach alpha (0.6606)
  Providing credit card with no annual fees      0.744
  High interest rates paid on savings accounts   0.579
  Percentage of variance: 5.470
Factor 6 (Charges and confidentiality) – Cronbach alpha (0.8140)
  Low service charges                            0.814
  Confidentiality of bank                        0.540
  Low interest rates on loans                    0.512
  Percentage of variance: 4.970
Factor 7 (People influences) – Cronbach alpha (0.7639)
  Recommendation by relatives                    0.931
  Recommendation by friends                      0.565
  Percentage of variance: 4.160
Factor 8 (Personnel and product pleasant) – Cronbach alpha (0.6888)
  Convenience in bank manager                    0.610
  Ease of obtaining loans                        0.576
  Percentage of variance: 3.910
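The reliability screen behind Table 3 can be reproduced with Cronbach's standard formula; a minimal sketch with invented responses for the three 'religious factor' items (not study data) is shown below.

```python
# Cronbach's alpha for one factor grouping, computed directly from the
# standard formula. Respondent scores here are invented for illustration.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix of Likert responses."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

# Five illustrative respondents on three hypothetical items
scores = np.array([[5, 5, 4],
                   [4, 5, 5],
                   [2, 3, 2],
                   [5, 4, 4],
                   [3, 3, 3]])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```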


We further analysed the data by ranking the various bank selection criteria in relation to the eight factor groups. The results are shown in Table 4, and raise a few interesting issues. First, male undergraduates preferred charges and convenient banking, whereas female undergraduates preferred bank appearance and bank personnel. This result might be due to psychological differences between these groups: male undergraduates may tend to seek out the bank criteria that best suit them, and thus chose bank charges and banking convenience as their most preferred factors, while female undergraduates, who according to the view cited above prefer to be protected, chose bank appearance and personnel as their main factors.

Another interesting issue was that people influences ranked last for both groups. This means that, for undergraduates, recommendations by others are not important: people might recommend a bank they consider suitable, but undergraduates make their own choices, possibly owing to the knowledge they gain at university. This result is consistent with Gerrard and Cunningham (2001) but contradicts Hegazy (1995), who found that the most important attribute for Islamic banks was the advice and recommendations of relatives and friends.

Table 4: Bank Selection Criteria: A Comparison of Male and Female Undergraduates

                                   Male (n = 72)      Female (n = 163)
                                   Rank   Mean        Rank   Mean       Sig.
Charges and confidentiality        1      4.0238      3      3.6802     .185
Convenience                        2      3.9810      4      3.5684     .240
Religious factors                  3      3.6024      5      3.2597     .024*
Financial benefits                 4      3.5076      7      3.0289     .271
Transaction                        5      3.2683      6      3.1646     .001*
Bank appearances                   6      3.5000      1      3.9363     .398
Personnel and product pleasant     7      3.1667      2      3.8582     .001*
People influences                  8      2.7619      8      2.8994     .002*
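Rankings of this kind can be derived by averaging each factor score per respondent and then ranking the factor means within each gender; the following pandas sketch, with invented mini-sample data and hypothetical column names, illustrates the computation.

```python
# Sketch: rank factor-group means within each gender, as in Table 4.
# The DataFrame layout and all scores below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender": ["M", "M", "F", "F", "F"],
    "charges": [4.3, 3.7, 3.9, 3.5, 3.6],        # per-respondent factor means
    "appearance": [3.2, 3.8, 4.1, 3.9, 3.8],
})

means = df.groupby("gender")[["charges", "appearance"]].mean()
ranks = means.rank(axis=1, ascending=False)      # rank factors within gender
print(pd.concat({"mean": means, "rank": ranks}, axis=1))
```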

Tables 5 and 6 detail how religion affects each gender's selection criteria. The most interesting finding was the difference between the top preference of Muslim males and that of non-Muslim males. Muslim males chose the religious factor as their main preference; this might help explain the tremendous growth of Islamic banking. Non-Muslim males, in contrast, ranked the religious factor second to last. This suggests that Islamic banking is not totally rejected by non-Muslim males; the lower rank they gave it might be due to misunderstanding of Islamic concepts.

Non-Muslim males perceived convenience as their first preference, whereas it ranked third for Muslim males. Charges and confidentiality were among the important factors for both groups. People influences ranked as the least important for both groups, meaning that both groups like to make decisions on their own: people might advise them about bank products, but in the end these groups make their own decisions.

Muslim females ranked bank appearance as their first factor and the religious factor second. The individual means also show that bank appearance plays an important role in Muslim females' preferred criteria. In Malaysia, most Muslim females still strongly uphold religious values, and many still prefer to stay at home, seeing this as the best way to protect themselves. Their banking choices are therefore more limited than Muslim males'. This might be why they chose bank appearance as their main preference factor; despite their limited choices, Muslim females still chose the religious factor as their second most preferred factor.

Non-Muslim females ranked personnel and product pleasant as their first preference. Based on the individual means, non-Muslim females chose convenience in dealing with the bank manager as their most important criterion. This might be due to female psychology and the preference for being protected: knowing someone inside the bank, especially the manager, makes them feel secure and able to transact with the bank easily. However, they ranked the religious factor as least important, which might indicate the failure of Islamic banks to disseminate information about Islamic banking to this group. As for the people influences factor, both female groups ranked it as least important or among the least important factors.


Table 5: Bank Selection Criteria: A Comparison of Muslim and non-Muslim Male Undergraduates

                                   Muslim (n = 36)    non-Muslim (n = 36)
                                   Rank   Mean        Rank   Mean       Sig.
Religious factors                  1      4.2808      7      2.9240     .004*
Charges and confidentiality        2      4.1964      2      3.8511     .263
Convenience                        3      4.0329      1      3.9291     .018*
Financial benefits                 4      3.6518      4      3.3634     .223
Bank appearances                   5      3.5595      3      3.4405     .858
Transaction                        6      3.4226      6      3.1141     .737
Personnel and product pleasant     7      3.0179      5      3.3154     .260
People influences                  8      2.8304      8      2.6938     .227

Table 6: Bank Selection Criteria: A Comparison of Muslim and non-Muslim Female Undergraduates

                                   Muslim (n = 95)    non-Muslim (n = 67)
                                   Rank   Mean        Rank   Mean
Bank appearances                   1      3.9123      3      3.4060
Religious factors                  2      3.8247      8      2.5072
Convenience                        3      3.7274      4      3.3962
Personnel and product pleasant     4      3.7095      1      4.1267
Charges and confidentiality        5      3.6228      2      3.8079
Transaction                        6      3.2807      5      3.0472
Financial benefits                 7      3.1474      6      2.9061
People influences                  8      2.9553      7      2.8633

LIMITATIONS OF FINDINGS
All the information gathered for this study was obtained from UNITEN undergraduates at the Muadzam Shah Campus, Pahang; the sample is therefore limited to a single university. The number of respondents is also small, at only 235 (72 male and 163 female). It is suggested that future work involve more universities and more respondents in both groups.

CONCLUSION
Islamic banking was originally established to fulfil the needs of the Islamic religion for an interest-free banking system. However, with the merits of the system and its acceptance by Muslim society, it has become a significant form of banking. Today, the era in which Islamic banking was regarded merely as a financial instrument for fulfilling Islamic obligations has passed: Islamic banking was once considered secondary banking, but it now competes to be the primary banking in Malaysia. Research on selection criteria in the banking sector is therefore very important, especially for Islamic banks seeking to position themselves as primary banks in Malaysia.

The purpose of this study was to determine how undergraduates' gender and religion affect their preferred selections in the dual banking environment of Malaysia. Reflecting on the findings, we draw three main conclusions. First, given the differences between males and females, they choose selection criteria differently to suit their needs: male undergraduates preferred charges and convenient banking, whereas female undergraduates preferred bank appearance and bank personnel.


Secondly, the religious factor was ranked the most important factor by Muslim males and the second most important by Muslim females. However, non-Muslim males ranked it seventh, and non-Muslim females ranked it least important. This indicates that the future of Islamic banking is bright insofar as Muslims perceive it as an important factor, but for non-Muslims it was among the least important. As Muslims, we believe the Islamic system is the best system for every aspect of our lives, but non-Muslim respondents evidently do not share this view. Thus the rejection of the religious factor by non-Muslims, although not total, is a substantial loss: in one respect, Islamic banking has failed to attract young non-Muslim customers, and in a wider respect, we have failed to disseminate information about the merits of the Islamic system to non-Muslims.

Lastly, we found that people influences ranked least important, or among the least important, for all groups. This means that young consumers make their own banking decisions. Even though banks have set up their own consultants to influence consumers, this finding suggests that banks can only advise consumers about their services: people will listen, but will decide based on their own judgment.

REFERENCES
Al-Quran. Surah Al-Baqarah, part of verse 275.
Aziz, Z. A. (2005). 'Metamorphosis into an International Islamic Banking and Financial Hub', Special Address at ASLI's World Islamic Economic Forum.
Aziz, Z. A. (2006). 'Islamic Banking and Finance Progress and Prospect', Collected Speeches 2000-2006, Bank Negara Malaysia, Kuala Lumpur.
Bank Negara (2007). 'Overview of Islamic Banking in Malaysia'. Available: http://www.bnm.gov.my/index.php?ch=174&pg=467&ac=367
Day, P. (2003). 'Sticking to (Islamic) Law reaps rich rewards', The Australian Financial Review (Sydney), 13 March, p. 17.
Dusuki, A. W., & Abdullah, N. (2007). 'Why do Malaysian customers patronize Islamic banks?', International Journal of Bank Marketing, Vol. 25, No. 3, pp. 142-160.
Erol, C., & El-Bdour, R. (1989). 'Attitude, behaviour and patronage factors of bank customers towards Islamic banks', International Journal of Bank Marketing, Vol. 7, No. 6, pp. 31-37.
Erol, C., Kaynak, E., & El-Bdour, R. (1990). 'Conventional and Islamic bank: Patronage behaviour of Jordanian customers', International Journal of Bank Marketing, Vol. 8, No. 5, pp. 25-35.
Gafoor, A. L. M. (1996). 'Interest-Free Commercial Banking', A.S. Noordeen, Malaysia.
Gafoor, A. L. M. (2001). 'Mudaraba-based investment and finance'. Available: http://www.islamicbanking.nl/article2.html
Gerrard, P., & Cunningham, J. B. (1997). 'Islamic banking: A study in Singapore', International Journal of Bank Marketing, Vol. 15, No. 6, pp. 204-216.
Ghannadian, F. F., & Goswami, G. (2004). 'Developing economy banking: The case of Islamic banks', International Journal of Social Economics, Vol. 31, No. 8, pp. 740-752.
Haron, S., Ahmad, N., & Planisek, S. L. (1994). 'Bank patronage factors of Muslim and non-Muslim customers', International Journal of Bank Marketing, Vol. 12, No. 1, pp. 32-40.
Hegazy, I. A. (1995). 'An empirical comparative study between Islamic and commercial banks' selection criteria in Egypt', International Journal of Contemporary Management, Vol. 5, No. 3, pp. 46-61.
http://www.al-islam.org/rightsofwomeninislam/
Hussain, G. R., & Zurbruegg, R. (2006). 'Awareness of Islamic banking products among Muslims', Journal of Financial Services Marketing, Vol. 12, No. 1, pp. 65-74.
Iqbal, M., & Molyneux, P. (2005). 'Thirty Years of Islamic Banking: History, Performance and Prospects', Palgrave Macmillan, New York, NY.
Jaffe, C. A. (2002). 'Financial firms tailor products to lure Muslims', Boston Globe, 20 January.
Lewis, M. K., & Algaoud, L. M. (2001). 'Islamic Banking', Edward Elgar, Cheltenham, UK.
Mansur, M., et al. (2007). 'Persepsi pengguna terhadap produk dan perkhidmatan perbankan Islam' [Consumer perceptions of Islamic banking products and services], Seminar Dasar Ekonomi Negara Pasca 50 Tahun Kemerdekaan: Cabaran dan Halatuju, Universiti Kebangsaan Malaysia, Melaka.
Metawa, S. A., & Almossawi, M. (1998). 'Banking behaviour of Islamic bank customers: Perspectives and implications', International Journal of Bank Marketing, Vol. 16, No. 7, pp. 299-313.
Metwally, M. M. (1994). 'Interest-free (Islamic) banking? A new concept in finance', Journal of Banking and Finance, Vol. 5, No. 2, pp. 119-127.
Mohd Dali, N. R., Hanifah, A. H., & Izlawanie, M. (2008). 'Banking Selection Factors For Islamic Banking Users', Islamic Banking, Accounting and Finance Conference, Universiti Sains Malaysia, Kuala Lumpur.
Naser, K., Jamal, A., & Al-Khatib, L. (1999). 'Islamic banking: A study of customer satisfaction and preferences in Jordan', International Journal of Bank Marketing, Vol. 17, No. 3, pp. 135-150.


Naser, K., & Moutinho, L. (1997). 'Strategic marketing management: The case of Islamic banks', International Journal of Bank Marketing, Vol. 15, No. 6, pp. 187-203.
Nunnally, J. C. (1978). Psychometric Theory (2nd ed.). New York: McGraw-Hill.
Rammal, H. G. (2003). 'Mudaraba in Islamic finance: Principles and application', Business Journal For Entrepreneurs, Vol. 4, pp. 105-112.
Rizuan, A. K., Suzaida, B., & Mazuin, N. S. (2008). 'Customers Preferences towards Syariah Compliant Products and Services in Banking Sectors: Empirical Study among Academician and Non-Academician', Islamic Banking, Accounting and Finance Conference, Universiti Sains Malaysia, Kuala Lumpur.
Haniffa, R., & Hudaib, M. (2007). 'Exploring the Ethical Identity of Islamic Banks via Communication in Annual Reports', Journal of Business Ethics, 76, 97-116.
Saeed, A. (1996). 'Islamic Banking and Interest: A Study of the Prohibition of Riba and Its Contemporary Interpretation', E. J. Brill, Leiden, The Netherlands.
Usmani, M. T. (1998). 'An Introduction to Islamic Finance', Idaratul Ma'arif, Karachi, Pakistan.
Wilson, R. (1995). 'Marketing strategies for Islamic financial products', New Horizon.
Zamani, A. G. (2007). 'Mainstreaming Islamic Finance: Malaysia as International Islamic Financial Centre', Deputy Governor's Keynote Address at the International Takaful Summit 2007.


THE CORRELATION BETWEEN CONFLICT AND JOB SATISFACTION WITHIN NURSE UNITS

Tina Y. Cardenas Paine College, USA

ABSTRACT
The purpose of this study was to determine whether a relationship existed between conflict types (task and relationship) and job satisfaction, anticipated turnover, and performance among nurses. Interest in conflict has increased because leaders spend a significant amount of time addressing conflict within the workplace, and because conflict is thought to have both positive and negative effects on the organization. Healthcare leaders should therefore be interested in how much conflict is occurring and how it may impact other important job factors. The challenge for leaders in all industries appears to be managing conflicts so that the negative effects are minimized and the positive effects are maximized. Four hundred and thirty-one staff nurses employed at a Veterans medical facility in the southeast were surveyed about their perceptions of overall conflict, which type of conflict (task or relationship) was occurring most, and their perceptions of their job satisfaction, anticipated turnover intentions and performance. Of the 194 surveys collected, 181 were used in the actual study (a response rate of 45%). The population consisted mostly of older, highly trained nurses (registered nurses); most had been with the medical facility over eight years, and most worked in geriatric and acute care settings. The population mirrored national trends of nurses soon to be leaving the field, with many retiring all at once. This article focuses only on the levels of conflict (task and relationship) and job satisfaction and the relationship between these job factors. The results showed slightly moderate levels of total conflict, with higher levels of relationship conflict, and a moderate level of job satisfaction. A significant negative relationship was also found between both types of conflict and job satisfaction. Additional research is needed to better understand the dynamics of conflict in healthcare organizations and to assess the effects that conflict may have on healthcare institutions and, ultimately, on patient care. Practical implications are outlined for nurse managers as they relate to conflict, in particular monitoring it and managing it constructively, and future research ideas are outlined as well.

Keywords: Organizational Conflict, Conflict Types, Job Satisfaction, Interpersonal Conflict, Intragroup Conflict

INTRODUCTION
Organization leaders in all industries make every effort to manage organizational factors thought to impact or impede effectiveness and efficiency. Management of these factors is also important because of the need to remain competitive and to achieve identified goals and objectives (Klunk, 1997). Conflict, however, is a factor thought to have multiple effects on other organizational factors, yet it is often not discussed openly even though it is considered an inevitable facet of all work environments (Berstene, 2004; Fernberg, 1999; Janssen, van de Vliert, & Veenstra, 1999; Kolb & Putnam, 1992). Research has shown that conflict may have both positive and negative effects on the organization (Amason, 1996; Bacal, 2004; Barclay, 1991; Baron, 1985; Jehn, 1994). Generally, the type of conflict determines this effect: Jehn (1995) proposes that task conflict is generally thought to have positive effects on organizations, while relationship conflict is generally thought to have negative effects. Despite the possibility of conflict having a negative impact on other organizational factors, Bodtker and Jameson (2001) propose that conflict is healthy for organizations, with Berstene (2004) suggesting that conflict is necessary for organizational development. Other authors have expressed an even stronger position: "The absence of conflict is not harmony, it's apathy" (Eisenhardt, Kahwajy, and Bourgeois, 1997, p. 1). Research on this factor has increased because of a desire to better understand and manage conflict. Interest in conflict has also increased because managers report spending a significant amount of time managing employee conflicts (Caudron, 1998; Cox, 2001; Kolb & Putnam, 1992; Moberg, 2003). More specifically, Cochran and White (1981) note that conflict has increased significantly in healthcare as a result of its complex structure. Therefore, healthcare managers are also interested in how conflicts may impact other organizational factors (Cox, 2001; Gardner, 1992).


Leaders within the healthcare industry are concerned about managing organizational factors believed to be linked to the delivery of quality patient care, which may in some instances have life or death implications (Aiken et al., 2002). In addition to conflict, other organizational factors thought to impact effectiveness, efficiency, and patient care within the healthcare industry include job satisfaction, turnover, and performance (Cox, 2001; Gardner, 1992; Kunaviktikul et al., 2000). More specifically, job satisfaction and turnover are key factors to healthcare leaders because of their association with quality and with achieving quality accreditation, known as Magnet status (Bliss-Holtz, Winter, & Scherer, 2004; Buchan, 1999; Lopopolo, 2002). Additionally, conflict and all of these factors (job satisfaction, anticipated turnover, and performance) appear to be influenced by the environment, indicating possible correlation among the factors. Since conflict is believed to have both positive and negative effects within the organization, the challenge for leaders within the healthcare industry appears to be determining how to manage conflict so that the negative effects are minimized and the positive effects are maximized (DeChurch & Marks, 2001; Friedman, Tidd, Currall, & Tsai, 2000). Proper management of conflict will also allow leaders to avoid costs associated with unresolved conflicts (Forte, 1997). It appears important for healthcare managers to be able to identify what impact conflicts have on other organizational factors of interest in the healthcare industry, such as job satisfaction, and how to effectively manage these factors in order to produce desired organizational outcomes. When this is accomplished, leaders can hopefully achieve organizational goals and objectives, which include retaining satisfied, productive nursing staff (DiMeglio, et al., 2005). Therefore, organizational conflict, and how it may be related to job satisfaction, appears important to examine further.

The objective of this study is to identify the level (amount) and type (task or relationship) of conflict, along with job satisfaction, anticipated turnover, and performance, within the units of the medical center, and to investigate whether a relationship exists between conflict and these organizational factors. However, this article will only discuss the levels of organizational conflict and job satisfaction among hospital nurses, the correlation between these two variables, and the significance of this relationship.

LITERATURE REVIEW
Organizational Conflict
Organizational conflict can be defined as a recognizable disagreement that occurs because of personal or work issues existing between supervisors and subordinates, colleagues, or other individuals who are interdependent regarding resources, functions, or daily operations. Organizational conflict has always been a part of work environments, mainly because conflict is a product of continuous employee interaction (Kolb & Putnam, 1992). Pondy (1967) noted that early conflict literature focused on understanding conflict and its role in organizations. Decades later, Wall and Callister (1995) agreed, noting that conflict has been a topic of interest in the literature for an extensive period of time in an attempt to better understand its complexity.

Several researchers have created labels to identify the various types of conflict. Some of these labels are task and relationship (Jehn, 1994, 1995), cognitive and affective (Rahim, 2002), and functional and dysfunctional (Amason, 1996). These types of conflict may occur at different levels of employee interaction and take place in the work environment within or between individuals or groups (Rahim, 2001). Based on this foundation, the four general levels of conflict discussed in the conflict literature are intrapersonal, interpersonal, intragroup, and intergroup. Conflict at all of these levels may have different causes, which vary based on the workplace situation or work environment. Specific causes of conflict in healthcare organizations include stress (Klunk, 1997), scarce resources (Redman & Fry, 2000), incompatible goals (Sportsman, 2005), interdependency (Cochran, Schnake, & Earl, 1983), and miscommunication (Barney, 2002).

Conflict Within Nurse Units
The effect that conflicts have on organizations may vary depending on the organization. Healthcare organizations have undergone significant changes over recent decades in order to respond to industry changes and to compete (Curtright, Stolp-Smith, & Edell, 2000; Hart, 2005). Heinz (2004) proposes that these changes have affected organizational factors and patient outcomes. Some of these changes include: a) an increased focus on improving quality (Curtright, Stolp-Smith, & Edell, 2000), b) an increased focus on cutting costs (Rotarius & Liberman, 2000), c) an aging workforce (Heinz, 2004), and d) restructuring (Baker, 1995; Jones, et al., 1993). Some authors have even highlighted that these changes may have contributed to increased levels (amounts) of conflict within healthcare facilities (Baker, 1995; Gardner, 1992; Jones, Bushardt, & Cadenhead, 1990; Kunaviktikul, et al., 1996; Nelson, 1995). Mostly, it is thought that interpersonal conflict is more prevalent in healthcare facilities because of the level of interdependency, specialization, and levels of authority (Cochran, Schnake, & Earl, 1983). However, the utilization of teams has also led to an increase in nurses’ roles in patient care decision making, known as shared governance (Kennerly, 1996). Several authors are also of the opinion that shared governance leads to increased conflict because of increased levels of interaction and disagreement (Baker, 1995; Prescott & Bowen, 1985). Gerardi and Morrison (2005) propose that hospitals’ complex clinical work cultures may produce conflicts as well. Earlier researchers believed that physician-nurse relationships were the main source of conflict (Prescott & Bowen, 1985). Other researchers support this claim and further state that, “The reason for the prevalence of conflict in hospitals relates to power, authority, and status of organizational members” (Cochran, Schnake, & Earl, 1983, p. 442). This type of conflict is believed to be dysfunctional, leading to negative outcomes if not properly managed, with some of these conflict situations being irreconcilable (Deutsch & Coleman, 2000; Jameson, 2003). When not managed, these conflicts are often believed to be costly to healthcare organizations (Forte, 1997; Jones, Bushardt, & Cadenhead, 1990). These costs may include staff replacement (Curtin, 2003), a decrease in the quality of patient care (Forte, 1997), patient mortality (Adams & Bond, 2000; Aiken, et al., 2002), and legal fees (Mitchell, 2001).

Based on this research, it appears clear that managers within healthcare facilities will need to assess conflict within their workforce (Jones, 1993; Siders & Aschenbrener, 1999) and make every attempt to manage conflicts (Rahim, 2002) so that the consequences of conflict are not detrimental to the performance of employees, which is directly related to patient care. Additionally, since interdependency is thought to increase potential occurrences of conflict, and harmonious work teams are often needed to provide patient care, it appears important for healthcare employees and managers to have the ability to manage expected conflicts (disagreements) within these work teams (Nelson, 1995).

Managing Organizational Conflict
Researchers in this area have differing views regarding which strategies should be used to assess and/or manage conflict. Some researchers are of the opinion that managers should confront workers to identify the problem and the preferred outcomes and include the employees in this process (Jones, 1993). Other researchers suggest that a practical diagnostic tool for assessing conflict be used to identify critical conflict information, issues, and interests of participants, and that managers then act on the information received from these participants. Some researchers have utilized conflict scales to measure the amount and type of conflict within work units (Jehn, 1995; Amason, 1996). However, as noted earlier, even though conflict is a serious issue, in some cases conflict is not addressed even when it is reported, especially if it involves employees at different levels, such as conflict between physicians and nurses (Nelson, 1995). Aschenbrener and Siders (1999) further state, “All too frequently busy physicians and physician executives avoid such conflicts in hopes that they will go away; indeed conflict avoidance may be part of the culture of the healthcare organization” (p. 44). A study conducted by Redman and Fry (2000) showed that only 25% of ethical conflict occurrences regarding decisions related to patient care were favorably resolved. Only a limited number of empirical studies have directly linked conflict to patient care, but where such links have been found the effects were generally negative.

Assessment and management of conflict also appear to be necessary, regardless of the work environment, because of the multiple consequences and/or outcomes (positive and negative) associated with conflict. It is also believed that the management of conflict will have an impact on these consequences. Sportsman (2005) states, “A healthcare organization’s success may depend on effective conflict management” (p. 34). Other researchers believe the management of conflict is important in healthcare facilities, adding that it is essential to effective operations and necessary in order to achieve favorable conflict outcomes (Kunaviktikul, et al., 2000). The skills organization leaders need to achieve successful conflict management include strong communication and interpersonal skills (Jameson, 2003). The focus of conflict management should also be on how to achieve positive organizational outcomes, which may include organization development and addressing stakeholder needs (Rahim, 2002).

As a result of the nature of conflict and the impact (both positive and negative) that it may have on the organization, it also appears important to understand how conflict may influence or affect other principal job factors. More specifically for this article: how does conflict impact job factors of interest to healthcare leaders, such as job satisfaction (Cox, 2001; Gardner, 1992; Lopopolo, 2002)?

Job Satisfaction
Employees’ levels of satisfaction are of interest to healthcare leaders and managers because of the impact satisfaction may have on employee behavior and patient care (AbuAlRub, 2004). Satisfaction is also of interest because it is a common belief that happy workers are better workers. Spector (1985) further states, “The attitudinal nature of satisfaction implies that an individual would tend to approach (or stay with) a satisfying job and avoid (or quit) a dissatisfying job” (p. 695).


Job satisfaction refers to how employees perceive that their needs are being met by an organization, based on their overall opinion of their jobs. George and Jones (1996) describe satisfaction as a type of attitude that involves feelings and thoughts about actual work experiences related to a specific job. Similarly, Adams and Bond (2000) define job satisfaction as “the degree of positive affect towards a job or its components” (p. 538). This definition is also used here because it presents the meaning most concisely.

Spector (1985) created a job satisfaction scale used to measure levels of job satisfaction based on nine facets of satisfaction. Blau (1999) utilized this instrument to measure influences on the professional commitment of medical technologists, with job satisfaction as a control variable. The results showed that professional organization memberships and routine tasks had positive effects on professional commitment after controlling for other variables thought to affect commitment, including job satisfaction.

Aiken et al. (2002) note that nurse dissatisfaction is four times higher than in other professions, with 25% of nurses expressing intent to leave their current jobs. In their study of approximately 200 hospitals, these researchers found that high patient-to-nurse ratios were strongly associated with higher levels of job dissatisfaction, again supporting the influence of work environments on satisfaction levels. Rowe, de Savigny, Lanata, and Victora (2005) also found in their review of the performance literature that job satisfaction was considered a determinant of healthcare worker performance because of its link to motivation, which is believed to have a major influence on worker performance. Murphy (2004) conducted a study of nursing home administrators which supported the finding that job satisfaction can be an indicator of many work-related behaviors. McNeese-Smith and van Servellen (2000) also note that satisfaction is thought to affect productivity, with satisfied employees usually being more productive.

Additional factors believed to affect satisfaction include interpersonal relationships (DiMeglio, et al., 2005) and stress (Parikh, Taukari, & Bhattacharya, 2004). Interpersonal relationships and stress are also considered causes and sources of organizational conflict. Moreover, a previous study showed that healthcare employees experience high levels of stress and have intense interpersonal relationships because of the higher levels of interdependency required for quality patient care (Kunaviktikul, et al., 2000). Thus, does conflict within healthcare settings influence levels of job satisfaction? The answer to this question appears important in expanding managers’ knowledge regarding the factors that may be related to improved hospital unit and organizational efficiency and effectiveness.

Conflict and Job Satisfaction
Possessing knowledge of major job factors, along with understanding how complex this work environment can be, are important issues for healthcare leaders and managers to consider. Of similar importance is knowledge regarding how these variables relate to each other, and to conflict in particular because of its inevitability within work environments. Few studies have examined conflict and its correlation to job satisfaction, turnover, and performance, with results consistently indicating a correlation between conflict and satisfaction. However, results vary regarding the correlation between conflict and the other two job factors (turnover and performance).

Gardner (1992) conducted a study to examine the relationship of conflict to job satisfaction, performance, and turnover of new graduate nurses. The results of this study did show moderate levels of conflict, with no indication of whether the conflict was positive or negative; conflict was correlated with job satisfaction, but it was not directly correlated with performance or turnover. Cox (2001) also conducted a study of nurses that examined all of these factors (conflict, job satisfaction, anticipated turnover, and performance effectiveness) as they related to employee morale. Some of the results of this study supported Gardner’s (1992) findings, showing no correlation between conflict and anticipated turnover but a correlation between conflict and satisfaction. However, different measurements were used in this study.

Kunaviktikul, et al. (2000) also conducted a study examining conflict, job satisfaction, and intentions to stay (anticipated turnover), along with conflict management styles and actual turnover. The findings included moderate levels of conflict within hospital units and a negative correlation between conflict and satisfaction.

Therefore, based on the limited number of studies in this area, more research is needed to expand the healthcare literature regarding the correlation of these job factors. The implications of knowing more about nurses’ perceptions of conflict and how they relate to job satisfaction levels (as well as other job factors of interest) are evident.


THE STUDY
The purpose of this study was to determine whether a relationship exists between conflict and job satisfaction, anticipated turnover, and performance among hospital nurses, with this article focusing on the results of the correlation between conflict and job satisfaction. This researcher hopes to gain a better understanding of the level (amount) and type of conflict (task or relationship) among nursing personnel and how this conflict relates to these principal job factors. Ultimately, it is thought that if hospital administrators are more aware of how high (or low) conflict levels might be in complex healthcare work situations, and of the variables that might relate to these conflict levels, remedial efforts might be directed toward these issues if necessary.

A descriptive survey research design using correlational statistics was utilized to examine conflict types, job satisfaction, anticipated turnover, and performance within the healthcare setting for nurses. A cross-sectional approach was also used. The Intragroup Conflict Scale (Jehn, 1995) was used to measure the level (amount) of conflict and the type (task or relationship) of conflict within hospital units. The Job Satisfaction Survey (Spector, 1985) was used to measure general job satisfaction. The psychometric properties of these scales are included in the instrumentation section below. A demographic questionnaire was also used to obtain information about the participants’ ages, educational backgrounds, tenure, and clinical specialty areas (acute care, geriatric, mental health, rehabilitation, and spinal cord).

Nurses from all educational backgrounds (certified nursing assistants, licensed practical nurses, and registered nurses) were the population for this study. A Veterans medical center in the southeast with two divisions, employing approximately 480 nurses, was used to obtain the sample for this study. A nonprobability convenience sampling approach was utilized. Special clearance was required to conduct research at this medical facility, with part of that clearance requiring a nurse manager employed at the medical facility to serve as the mentor for the study in order to ensure that guidelines were followed throughout the study.

The survey packets were administered to every unit, in both divisions, after brief “in-service” sessions (training) regarding the variables and purpose of the study. Additionally, emails were forwarded to the nurse managers of each unit, and flyers soliciting participation in the study were posted on key information boards. The survey packets (and sealable envelopes) were administered at the meeting, with additional packets placed in nurses’ unit mailboxes and collected at a later date. The approximate time to complete the survey packet was 15 minutes. This procedure was thought not to cause unit disruptions or interfere with patient care. Additionally, participants were informed that the results would be presented in an aggregated, summary format at the end of the study.

Instrumentation
A demographic questionnaire was used to assess the following subgroup variables: age, educational background, tenure, and clinical area. Age was divided into three levels: 18-28 years of age, 29-39 years of age, and 40 years of age or more. Educational/training background was divided into three levels: certified nursing assistant or health technician, licensed practical nurse, and registered nurse (categorical). Tenure was also divided into three levels: 0-3 years of employment, 4-7 years of employment, and 8+ years of employment. The clinical areas were divided into five levels: acute care, geriatric, mental health, rehabilitation, and spinal cord (Appendix B). The researcher hopes this additional information will be useful in making further determinations concerning occurrences and types of conflict as they relate to satisfaction levels.

As mentioned above, the Intragroup Conflict Scale was developed by Dr. K. A. Jehn (1995) and is based on eight items measuring the amount of conflict (low, moderate, and high) and the type of conflict (task or relationship). Task conflict is conflict related to aspects of the job and employee responsibilities, and relationship conflict is conflict related to interpersonal issues only. These items are measured on a 5-point Likert scale ranging from “1” (none) to “5” (a lot). Items 1 through 4 identify the amount of relationship conflict and items 5 through 8 identify the amount of task conflict. The total scores for this scale can range from a low of 8 to a maximum of 40. Overall, higher scores represent “a lot” (high amounts) of conflict and low scores represent “low” amounts of conflict, with the specific sections (items 1-4 measuring relationship conflict and items 5-8 measuring task conflict) indicating perceptions of the amount of each type of conflict present.
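As a minimal illustration of this scoring scheme (a sketch written for this discussion, not the study's code; the function name and example responses are hypothetical), the two subscales and the overall score can be totaled as follows:

    # Minimal sketch of the scoring rules described above; not the authors' code.
    def score_conflict(responses):
        """responses: eight Likert ratings (1 = none ... 5 = a lot), items 1-8 in order."""
        assert len(responses) == 8 and all(1 <= r <= 5 for r in responses)
        relationship = sum(responses[0:4])  # items 1-4: relationship conflict (range 4-20)
        task = sum(responses[4:8])          # items 5-8: task conflict (range 4-20)
        return {"relationship": relationship, "task": task, "total": relationship + task}

    # Example: a respondent reporting mostly moderate conflict.
    print(score_conflict([3, 4, 3, 3, 3, 3, 4, 3]))
    # -> {'relationship': 13, 'task': 13, 'total': 26}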

The Intragroup Conflict Scale (Jehn, 1995) has been used by several researchers in various work environments (DeChurch & Marks, 2001; Pearson, Ensley, & Amason, 2002), including healthcare (Friedman, et al., 2000), and is considered a reliable and valid measure of intragroup conflict (Jehn, 1995; Amason, 1996). Jehn (1995) conducted a study of approximately 580 individuals comprising approximately 105 management teams in the freight and transportation industry, utilizing this instrument to measure conflict. Factor analysis of the scale showed eigenvalues above 1.0 and a scree plot that suggested a two-factor solution (relationship and task conflict). Cronbach’s alpha analysis showed a coefficient alpha of .92 for relationship conflict and .87 for task conflict. Amason (1996) also utilized a seven-item version of Jehn’s (1994) Intragroup Conflict Scale to conduct a study to determine the effects of functional (task) and dysfunctional (relationship) conflict on top team decision making among manufacturing workers. This author assessed the scale using exploratory factor analysis, which also produced a two-factor solution (Amason, 1996). Affective (relationship) conflict produced a subscale reliability coefficient of .86, and cognitive (task) conflict produced a subscale reliability coefficient of .79 (Amason, 1996). De Dreu and Weingart (2003) also conducted a meta-analysis of over thirty conflict studies measuring task and relationship conflict. These authors state, “Task and relationship conflict in these studies was most often assessed with a scale developed by Jehn” (De Dreu & Weingart, 2003, p. 743).
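For readers unfamiliar with the reliability statistic reported in these studies, Cronbach's coefficient alpha can be computed from a respondents-by-items matrix as in this sketch (the data shown are fabricated for illustration, not drawn from any study cited here):

    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array, rows = respondents, columns = scale items."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
        total_variance = items.sum(axis=1).var(ddof=1)     # variance of total scores
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Hypothetical responses: 5 nurses x 4 relationship-conflict items.
    data = [[3, 4, 3, 3], [2, 2, 3, 2], [4, 4, 5, 4], [3, 3, 3, 4], [1, 2, 1, 2]]
    print(round(cronbach_alpha(data), 2))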

The Job Satisfaction Survey (Spector, 1985) was developed to measure general job satisfaction and is based on 36 items. Scores for this survey can range from 36 to 216; the higher the score, the higher the level of job satisfaction. The instrument’s 36 items form nine facets and are measured on a 6-point Likert scale ranging from “1” (disagree very much) to “6” (agree very much). The facets are pay, promotion, supervision, benefits, contingent rewards, operating conditions, coworkers, the work itself, and communication. For the purposes of the present study, only overall job satisfaction was examined, which was computed by totaling all item scores (after reversing negatively worded items). Although originally developed to measure social service providers’ levels of satisfaction, this survey is commonly used to measure job satisfaction in various work environments, including healthcare (Blau, 1999), and among various groups (Cote & Morgan, 2002). The instrument is considered a reliable and valid measure of job satisfaction, with Cronbach’s alpha analysis showing a high internal consistency reliability of .91 for the overall scale (Spector, 1985). The subscale internal consistency reliability scores were .75 for pay, .73 for promotion, .82 for supervision, .73 for benefits, .76 for contingent rewards, .62 for operating procedures, .60 for coworkers, .78 for the nature of work, and .71 for communication. Individual items on the scale were also analyzed using principal components with varimax rotation, and the results showed nine eigenvalues greater than 1.0, supporting the existence of the nine subscales.
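A sketch of the total-score computation described here follows. The set of reverse-scored item numbers shown is illustrative only; Spector's actual scoring key is not reproduced in this article:

    # Illustrative only: totaling 36 six-point items after reverse-scoring
    # negatively worded ones. REVERSED is a placeholder, not Spector's key.
    REVERSED = {2, 4, 8}  # hypothetical item numbers

    def jss_total(responses):
        """responses: dict mapping item number (1-36) to rating (1-6)."""
        assert len(responses) == 36
        return sum((7 - r) if item in REVERSED else r
                   for item, r in responses.items())  # possible range 36-216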

Spector (1985) assessed convergent validity through comparison of the Job Satisfaction Survey with the Job Descriptive Index (another instrument that may be used to measure satisfaction and other aspects of jobs). The correlations between corresponding subscales of the two instruments ranged from .61 to .80, suggesting convergent validity. Additionally, lower correlations among differing subscales, ranging from .11 to .59, suggested that distinct facets of job satisfaction were being measured (discriminant validity). The Job Satisfaction Survey subscales were also analyzed for discriminant validity through correlations with other variables, including employee characteristics, turnover, and absenteeism (Spector, 1985). Age was correlated with the nature of work subscale (r = .24) and the pay subscale (r = .21). Turnover was correlated with the benefits subscale (r = -.16) and the contingent rewards subscale (r = -.36). Absenteeism was correlated with total satisfaction (r = -.12). These results also suggest the validity of the scale.

There are several limitations and assumptions for this study. The convenience sample was limited to nurses employed at this medical center, which may limit the ability to generalize the findings to the total nursing population. There may also be some self-serving biases (organizational pressures, self-protection biases, etc.) and/or social desirability issues in the self-reported responses given by the participants (Nauta & Kluwer, 2004). Additional response issues may include recency effects (which occur when individuals assess only on the basis of recent experiences or incidents) and/or halo effects (which occur when positive information in one category distorts ratings in multiple categories) (Kreitner & Kinicki, 2004). General apprehension may also be an issue, one specifically related to conflict studies (Nauta & Kluwer, 2004). Apprehension may occur because conflict is often a sensitive issue, and some individuals may be hesitant to admit the presence of conflict because of the negative connotation sometimes associated with conflict in general (Nauta & Kluwer, 2004). The assumptions of this study include the following: a) the participants will understand the questionnaires and particular items, b) the participants will give honest responses to questions, and c) the participants have work experiences related to the variables included in the study. Additionally, it is assumed that the variables are normally distributed, which is required to produce valid statistical test results.

Research Questions and Hypotheses
The research questions addressed in this study were the following:
a) What is the level (amount) of conflict within the units of the medical center?
b) Is task conflict (thought to be positive by Amason, 1996; De Dreu & van Vianen, 2001; Jehn, 1995) more prevalent than relationship conflict (thought to be negative by Amason, 1996; De Dreu & van Vianen, 2001; Jehn, 1995) within units of the medical center?
c) Is there a relationship between conflict types (task and relationship) and satisfaction?


From these research questions the following hypotheses were generated:

H10: There is a low level (amount) of total conflict within the units of the medical center.
H1a: There is a moderate level (amount) of total conflict within the units of the medical center.
H20: There is no difference in the level (amount) of task conflict and relationship conflict within the units of the medical center.
H2a: There is more task conflict prevalent than relationship conflict within the units of the medical center.
H30: There is no relationship between task conflict scores and general job satisfaction scores.
H3a: There is a significant negative relationship between task conflict scores and general job satisfaction scores.
H40: There is no relationship between general job satisfaction scores and relationship conflict scores.
H4a: There is a significant negative relationship between general job satisfaction scores and relationship conflict scores.

Data Processing and Analysis
The purpose of this study was to determine the level (amount) and type of conflict (relationship and task) within the nursing groups and to examine whether relationships exist between these conflict measures and the self-reported job satisfaction, anticipated turnover intentions, and performance levels of these nurses. The SPSS software package, version 12.0, was used to conduct the statistical analyses of the data. These analyses included descriptive statistics (frequency distributions and measures of central tendency) and comparisons of the major variables (t-tests). Correlational statistics were also used to examine the relationships between the independent variables (task and relationship conflict) and the dependent variables (job satisfaction, anticipated turnover, and performance). All of these analyses were conducted using a .05 level of statistical significance.
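The study ran these analyses in SPSS 12.0. As an illustration only, an equivalent sequence of analyses could be scripted as below (the file and column names are hypothetical, not from the study):

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("survey_scores.csv")  # hypothetical file of per-nurse scale scores

    # Descriptive statistics for the major variables
    print(df[["task_conflict", "rel_conflict", "job_satisfaction"]].describe())

    # Paired t-test comparing mean relationship vs. task conflict (H2)
    t, p = stats.ttest_rel(df["rel_conflict"], df["task_conflict"])
    print(f"paired t = {t:.3f}, p = {p:.3f}")

    # Pearson correlations between each conflict type and satisfaction (H3, H4)
    for col in ("task_conflict", "rel_conflict"):
        r, p = stats.pearsonr(df[col], df["job_satisfaction"])
        print(f"{col} vs. satisfaction: r = {r:.3f}, p = {p:.3f}")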

Findings
The sample for this study was comprised of in-patient unit nurses employed at a Veterans medical center in the southeast. The total population reported for the study was approximately 480 nurses, including full-time and part-time in-patient nursing staff as well as on-call nurses. Nurses included in the study were certified nursing assistants/health technicians, licensed practical nurses (LPNs), and registered nurses (RNs). Of the 480 nurses, only 431 were surveyed, as the researcher only had access to those nurses with mailboxes on the units, which eliminated some part-time and on-call nurses. A total of 194 nurses responded to the survey. Of these 194 surveys, only 181 were usable, due to missing data in some cases. Therefore, the overall response rate was 45% of the 431 nurses actually surveyed. Descriptive statistics were calculated for each job factor examined in the present study (total conflict, task and relationship conflict, and general job satisfaction).

Descriptive statistics and frequency distributions showed that the nurses perceived slightly moderate levels of total conflict within the medical units, with a mean score of 3.18 (on a 5-point scale, with 3 being slightly moderate); they perceived a slightly moderate level of task conflict, with a mean score of 3.13, and a slightly moderate level of relationship conflict, with a mean score of 3.24. In general, the nurses were also moderately satisfied, with a total mean overall job satisfaction score of 128.85 (maximum 216 points and a norm mean score of 133.1, as reported by Spector, 1985). Frequency distributions were also computed for the sample demographic profiles: age, education (training level), tenure, and clinical area of work.

The ages of the nurses who returned usable survey packets showed that the majority were 40 years of age or older (77.9%), which supports analyses conducted by the United States Department of Health and Human Services (2004). The education levels reported included certified nursing assistant/health technician, licensed practical nurse (LPN), and registered nurse (RN). The majority of the nurses in this study were registered nurses (48.6%), with licensed practical nurses being the next largest group. The number of years the nurses were employed at the medical center, or tenure, was also examined. The findings showed that 47 (26%) were employed 0-3 years, 35 (19.3%) were employed 4-7 years, and 99 (54.7%) were employed eight or more years with the medical center. Finally, the majority of the nurses in the present study worked on the acute care and geriatric units: 49 (27.1%) nurses on the acute care units and 50 (27.6%) nurses on the geriatric units.

Hypotheses

H10: There is a low level (amount) of total conflict within the units of the medical center.
Descriptive statistics were used to determine the level (amount) of total conflict within the units of the medical center. The results showed a slightly moderate level of total conflict, with a mean score of 3.18 (on a 5-point scale); however, this level is lower than the moderate conflict level of 4.0 that was predicted. The null hypothesis was rejected. These findings appear consistent with the literature review finding that conflict does exist within healthcare organizations (Cox, 2001; Gardner, 1992; Kunaviktikul et al., 2000). However, the level (amount) of conflict found in previous studies varied, with Cox (2001) reporting low levels and others reporting moderate levels (Gardner, 1992; Kunaviktikul, et al., 2000).

H20: There is no difference in the level (amount) of task conflict and relationship conflict within the units of the medical center.
A paired-sample t-test was conducted to compare the mean levels (amounts) of each type of conflict examined in the present study (task and relationship). The data suggest that relationship conflict is slightly higher than task conflict, with a mean score of 3.25 for relationship conflict and a mean score of 3.13 for task conflict (t(180) = 1.645, p < .05, one-tailed). The null hypothesis was therefore rejected; however, the results are in the opposite direction than anticipated. The findings here are inconsistent with the majority of the literature review, which suggests that conflict within healthcare organizations generally relates to tasks and responsibilities (Baker, 1995; Prescott & Bowen, 1985; Bell, 2003) and roles (Adams & Bond, 2000). However, Kunaviktikul et al. (2000) did find that in healthcare the most frequent causes of conflict were characteristics of co-workers.

H30: There is no relationship between task conflict scores and general job satisfaction scores.
A Pearson bivariate correlational statistic was used to determine whether a relationship existed between task conflict scores and general job satisfaction scores. A significant moderate negative correlation was found between task conflict scores and general job satisfaction scores, with a correlation coefficient of r = -.546, p < .001 (see Table 6 below). Therefore, the null hypothesis was rejected. These findings appear consistent with the literature review finding that conflict regarding tasks will reduce the job satisfaction levels of nurses (Cox, 2001; Gardner, 1992). These findings also appear consistent with the literature regarding employees of other industries (Jehn, 1995).

H40: There is no relationship between general job satisfaction scores and relationship conflict scores.
A Pearson bivariate correlational statistic was used to determine whether a relationship existed between relationship conflict scores and general job satisfaction scores. A significant moderate negative correlation was found, with a correlation coefficient of r = -.488, p < .001 (see Table 7 below). Therefore, the null hypothesis was rejected. These findings appear consistent with the literature review finding that conflict regarding personal issues is thought to reduce satisfaction levels (Cox, 2001; Gardner, 1992).

The Pearson bivariate correlation coefficients varied by conflict type. A significant moderate negative correlation (r = -.546) was found between task conflict and general job satisfaction, and a significant moderate negative correlation (r = -.488) was found between relationship conflict and general job satisfaction. These findings support the idea that higher levels of either type of conflict (task or relationship) are associated with lower job satisfaction levels for nurses (Jameson, 2003; Nelson, 1995). In general, the nurses were of the opinion that slightly moderate levels of conflict existed within their setting, that more relationship conflict existed than task conflict, and that as conflict of either type increased, satisfaction levels tended to decline.

CONCLUSION
Conflict is a part of all organizations today. Therefore, it is important that healthcare managers not overlook or downplay conflict, and its importance, when assessing nurse work environments and how conflict may be correlated with other job factors. The findings of this study suggest that conflict did exist within this setting (at slightly moderate levels), that more relationship conflict (believed to be negative in nature) existed in this hospital setting than task conflict, and that both types of conflict were correlated with job satisfaction. Hopefully, these results will lead to a better understanding of the dynamics of conflict situations and of the relationships between conflict and job satisfaction. The ultimate goal, of course, should be better management of conflict situations. It is thought that if nurse managers and nursing staff are aware of how conflict may influence other aspects of their work, they may be more motivated to manage conflict situations when they occur and to minimize the negative effects of this conflict. As was demonstrated here, the negative effects may include less satisfied nurses, which may impact performance and thus patient care. Additionally, this information may be used to determine what actions, if any, need to be taken to address issues that affect overall patient care (Albaugh, 2005) and thus customer satisfaction (Koys, 2001). Improved customer satisfaction may also lead to an improved company image, which may have positive impacts in the community. A better understanding of the levels and types of conflict within an organization may lead to fewer dysfunctional conflicts, leaving conflicts that are ultimately more functional.


It is hoped that the information gained from this research project will motivate healthcare leaders to create cultures that do not label conflict as always negative or bad. It is also hoped that these leaders will promote cultures that foster good interpersonal relationships and effective communication, both of which appear important to better employee collaboration, efficiency, and, eventually, improved patient care. However, it is evident that more research is needed to better understand conflict and manage its effects on employees and the organization in general. More specifically, future research should also include a qualitative approach in an attempt to present a more complete set of findings in studies investigating these factors.

In conclusion, providing quality health care is a serious issue in this country. The present study is a good first step toward understanding the dynamics of conflict and its effects on nurses in providing quality patient care.

REFERENCES
AbuAlRub, R. F. (2004). Job stress, job performance, and social support among hospital nurses. Journal of Nursing Scholarship, 36(1), 73-78.
Adams, A. & Bond, S. (2000). Hospital nurses’ job satisfaction, individual and organization characteristics. Journal of Advanced Nursing, 32(3), 536-543.
Aiken, L. H., Clarke, S. P., Sloane, D. M., Sochalski, J., & Silber, J. H. (2002). Hospital nurse staffing and patient mortality, nurse burnout, and job satisfaction. Journal of the American Medical Association, 288(16), 1987-1993.
Albaugh, J. A. (2005). Resolving the nursing shortage: Nursing job satisfaction on the rise. Urologic Nursing, 25(4), 293, 284.
Amason, A. C. (1996). Distinguishing the effects of functional and dysfunctional conflict on strategic decision making: Resolving a paradox for top management teams. Academy of Management Journal, 39(1), 123-158.
Aschenbrener, C. A. & Siders, C. T. (1999). Managing low-to-mid intensity conflict in health care settings. Physician Executive, 25(4), 44-50.
Bacal, R. (2004). Organizational conflict – The good, the bad, and the ugly. The Journal for Quality and Participation, 27(2), 21-22.
Baker, K. M. (1995). Improving staff nurse conflict resolution skills. Nursing Economics, 13(5), 295-317.
Barclay, D. W. (1991, May). Interdepartmental conflict in organizational buying: The impact of the organizational context. Journal of Marketing Research, 28, 145-159.
Barney, S. M. (2002). Radical change: One solution to the nursing shortage. Journal of Healthcare Management, 47(4), 220-223.
Bell, S. E. (2003). Nurses’ ethical conflicts in performance of utilization reviews. Nursing Ethics, 10(5), 541-554.
Berstene, T. (2004). The inexorable link between conflict and change. The Journal for Quality and Participation, 27(2), 4-9.
Blau, G. (1999). Early-career job factors influencing the professional commitment of medical technologists. Academy of Management Journal, 42(6), 687-695.
Bliss-Holtz, J., Winter, N. & Scherer, E. M. (2004). An invitation to magnet accreditation. Nursing Management, 35(9), 36-42.
Bodtker, A. M., & Jameson, J. K. (2001). Emotion in conflict formation and its transformation: Application to organizational conflict management. International Journal of Conflict Management, 12, 259-275.
Buchan, J. (1999). Still attractive after all these years? Magnet hospitals in a changing health care environment. Journal of Advanced Nursing, 30(1), 100-108.
Caudron, S. (1998, September). Keeping team conflict alive. Training & Development, 52(9), 48-52.
Cochran, D. S., Schnake, M. & Earl, R. (1983). Effects of organizational size on conflict frequency and location in hospitals. Journal of Management Studies, 20(4), 441-451.
Cochran, D. S. & White, D. (1981). Intraorganizational conflict in the hospital purchasing decision making process. Academy of Management Journal, 24(2), 324-332.
Cote, S. & Morgan, L. (2002). A longitudinal analysis of the association between emotion regulation, job satisfaction, and intentions to quit. Journal of Organizational Behavior, 23(8), 947-962.
Cox, K. B. (2001). The effects of unit morale and interpersonal relations on conflict in the nursing unit. Journal of Advanced Nursing, 35(1), 17-25.
Curtin, L. L. (2003). An integrated analysis of nurse staffing and related variables: Effects on patient outcomes. Journal of Issues in Nursing, 8(3), 118-129.
Curtright, J. W., Stolp-Smith, S., & Edell, E. S. (2000). Strategic performance management: Development of a performance measurement system at the Mayo Clinic. Journal of Healthcare Management, 45(1), 58-68.
DeChurch, L. A. & Marks, M. A. (2001). Maximizing the benefits of task conflict: The role of conflict management. International Journal of Conflict Management, 12(1), 4-22.
De Dreu, C. K. W. & van Vianen, A. E. M. (2001, May). Managing relationship conflict and the effectiveness of organizational teams. Journal of Organizational Behavior, 22(3), 309-328.


De Dreu, C. K. W., & Weingart, L. R. (2003). Task versus relationship conflict, team performance, and team member satisfaction: A meta-analysis. Journal of Applied Psychology, 88(4), 741-749.
Deutsch, M. (1973). The Resolution of Conflict. New Haven, CT: Yale University Press.
DiMeglio, K., et al. (2005). Group cohesion and nurse satisfaction: Examination of a team-building approach. The Journal of Nursing Administration, 35(3), 110-120.
Eisenhardt, K. M., Kahwajy, J. L. & Bourgeois, L. J., III (1997). How management teams can have a good fight. Harvard Business Review, 75(4), 1-8.
Fernberg, P. M. (1999). Pulling together can resolve conflict. Occupational Hazards, 61(3), 65-67.
Forte, P. S. (1997). The high cost of conflict. Nursing Economics, 15(3), 119-123.
Friedman, R. A., Tidd, S. T., Currall, S. C., & Tsai, J. C. (2000). What goes around comes around: The impact of personal conflict style on work conflicts and stress. The International Journal of Conflict Management, 11(1), 32-55.
Gardner, D. L. (1992). Conflict and retention of new graduate nurses. Western Journal of Nursing Research, 14(1), 76-85.
George, J. M. & Jones, G. R. (1996). The experience of work and turnover intentions: Interactive effects of value attainment, job satisfaction and positive mood. Journal of Applied Psychology, 81(3), 318-325.
Gerardi, D. S. & Morrison, V. (2005). Managing conflict creatively. Critical Care Nurse, 25(2), 31-32.
Hart, S. E. (2005). Hospital ethical climates and registered nurses’ turnover intentions. Journal of Nursing Scholarship, 37(2), 173-177.
Heinz, D. (2004). Hospital nurse staffing and patient outcomes. Dimensions of Critical Care Nursing, 23(1), 44-50.
Jameson, J. K. (2003). Transcending intractable conflict in health care: An exploratory study of communication and conflict management among anesthesia providers. Journal of Health Communication, 8(6), 563-581.
Janssen, O., van de Vliert, E. & Veenstra, C. (1999). How task and person conflict shape the role of positive interdependence in management teams. Journal of Management, 25(2), 117-135.
Jehn, K. A. (1994). Enhancing effects: An investigation of advantages and disadvantages of value-based intragroup conflict. The International Journal of Conflict Management, 5(3), 223-238.
Jehn, K. A. (1995). A multimethod examination of the benefits and detriments of intragroup conflict. Administrative Science Quarterly, 40(2), 256-282.
Jones, C. B., Stasiowski, S., Simons, B. J., Boyd, N. J. & Lucas, M. D. (1993). Shared governance and the nursing practice environment. Nursing Economics, 11(4), 208-214.
Jones, K. (1993). Confrontation: Methods and skills. Nursing Management, 24(5), 68-70.
Jones, M. A., Bushardt, S. C., & Cadenhead, G. (1990). A paradigm for effective resolution of interpersonal conflict. Nursing Management, 21(2), 64B-64L.
Kennerly, S. M. (1996, March-April). Effects of shared governance on perceptions of work and work environment. Nursing Economics, 14(2), 111-116.
Klunk, S. W. (1997). Conflict and the dynamic organization. Hospital Materiel Quarterly, 19(2), 37-44.
Kolb, D. M. & Putnam, L. L. (1992). The multiple faces of conflict in organizations. Journal of Organizational Behavior, 13, 311-324.
Koys, D. J. (2001). The effects of employee satisfaction, organizational citizenship behavior, and turnover on organizational effectiveness: A unit-level, longitudinal study. Personnel Psychology, 54, 101-114.
Kunaviktikul, W., Nuntasupawat, R., Srisuphan, W., & Booth, R. Z. (2000). Relationships among conflict, conflict management, job satisfaction, intent to stay, and turnover of professional nurses in Thailand. Nursing and Health Sciences, 2(1), 9-16.
Lopopolo, R. B. (2002). The relationship of role-related variables to job satisfaction and commitment to the organization in a restructured hospital environment. Physical Therapy, 82(10), 984-999.
McNeese-Smith, D. K., & van Servellen, G. (2000). Age, developmental, and job stage influences on nurse outcomes. Outcomes Management for Nursing Practice, 4(2), 97-104.
Mitchell, G. J. (2001). A qualitative study exploring how qualified mental health nurses deal with incidents that conflict with their accountability. Journal of Psychiatric and Mental Health Nursing, 8(3), 241-248.
Moberg, D. J. (2003). Managers as judges in employee disputes: An occasion for moral imagination. Business Ethics Quarterly, 13(4), 453-477.
Murphy, B. (2004). Nursing home administrators’ level of job satisfaction. Journal of Healthcare Management, 49(5), 336-345.
Nauta, A. & Kluwer, E. (2004). The use of questionnaires in conflict research. International Negotiation, 9, 457-470.
Nelson, B. (1995). Dealing with inappropriate behavior on a multidisciplinary level: A policy is formed. JONA, 25(6), 58-61.
Pearson, A. W., Ensley, M. D., & Amason, A. C. (2002). An assessment and refinement of Jehn’s Intragroup Conflict Scale. The International Journal of Conflict Management, 13(2), 110-126.


Parikh, P., Taukari, A., & Bhattacharya, T. (2004). Occupational stress and coping among nurses. Journal of Health Management, 6(2), 115-126.
Pondy, L. R. (1967). Organizational conflict: Concepts and models. Administrative Science Quarterly, 12(2), 296-320.
Prescott, P. A. & Bowen, S. A. (1985). Physician-nurse relationships. Annals of Internal Medicine, 103(1), 127-133.
Rahim, M. A. (2001). Managing conflict in organizations (3rd ed.). Westport, CT: Quorum Books.
Rahim, M. A. (2002). Toward a theory of managing organizational conflict. The International Journal of Conflict Management, 13(3), 206-235.
Redman, B. K. & Fry, S. T. (2000). Nurses’ ethical conflicts: What is really known about them? Nursing Ethics, 7(4), 360-366.
Rotarius, T. & Liberman, A. (2000). Health care alliances and alternative dispute resolution: Managing trust and conflict. Health Care Manager, 18(3), 25-31.
Rowe, A. K., de Savigny, D., Lanata, C. F., & Victora, C. G. (2005). How can we achieve and maintain high-quality performance of health workers in low-resource settings? The Lancet, 366(9490), 1026-1035.
Siders, C. T. & Aschenbrener, C. A. (1999). Conflict management checklist: A diagnostic tool for assessing conflict in organizations. Physician Executive, 25(4), 32-37.
Spector, P. E. (1985). Measurement of human service staff satisfaction: Development of a job satisfaction survey. American Journal of Community Psychology, 13(6), 693-713.
Sportsman, S. (2005). Build a framework for conflict assessment. Nursing Management.
United States Department of Health and Human Services (2004). The Registered Nurse Population: National Sample Survey of Registered Nurses March 2004. Retrieved February 3, 2007, from http://bhpr.hrsa.gov/healthworkforce/reports/rnpopulation/preliminaryfindings.htm
Wall, J. A. & Callister, R. R. (1995). Conflict and its management. Journal of Management, 21(3), 515-558.


PRE AND POST WRITING TEST ASSESSMENT: DETERMINING RATER RELIABILITY

Betsy B. Alderman1, Stephynie C. Perkins2, S. Camille Broadway3 and Lucas Milrod4 University of Tennessee at Chattanooga1, 4, University of North Florida2 and University of Texas at Arlington3, USA

ABSTRACT
This study sought to determine if the scoring of a pre- and post-writing test, used as part of an accredited department’s overall assessment plan, is consistent among raters in other journalism programs. The study found strong relationships among raters on the overall scores and various subscale scores, and thus great consistency among raters on this writing test. The post-test scores appear to be a reliable assessment of a student’s writing ability because of this consistency among graders.

Keywords: Writing Assessment, Journalism, Accreditation

INTRODUCTION
The teaching of writing is the core of journalism and mass communication programs. Whether students plan to pursue careers in informative or persuasive writing, jobs that require writing have in common the need for copy that clearly communicates the message and is grammatically appropriate, mechanically sound, and stylistically accurate.

Businesses across the spectrum of media and industry recognize the importance of employees who can write clearly. For example, a 2004 study by the National Commission on Writing determined that businesses spent almost $3 billion to help employees clarify e-mails, letters and reports.31

How much more important then is it for public relations practitioners, copywriters and reporters, whose business is writing, to master punctuation, spelling, grammar and style? It follows then that journalism educators must be able to assess how well they are teaching students the craft of writing.

In recent years, the Accrediting Council on Education in Journalism and Mass Communications (ACEJMC), which evaluates how well journalism programs prepare young professionals, developed a list of values and competencies that graduates of accredited programs are expected to have mastered. These values and competencies include:

• Write correctly and clearly in forms and styles appropriate for the communications professions, audiences and purposes they serve.
• Critically evaluate their own work and that of others for grammatical correctness, appropriate style, clarity, accuracy and fairness.32

In addition, assessing how well accredited schools are meeting those values and competencies has received increased emphasis from the accrediting council and is one of the nine standards by which accredited programs are evaluated.33 Accredited programs have developed methods to measure their ability to instill in their graduates the competencies set forth by ACEJMC. Thus, it is important to have reliable assessment methods.

RESEARCH QUESTIONS
This study seeks to begin to address an overarching question:

(1) Is the grading of writing consistent across journalism programs?

31 National Commission on Writing for America’s Families, Schools and Colleges, The College Board, “Writing: A ticket to work … or a ticket out?” (September 2004).
32 Accrediting Council on Education in Journalism and Mass Communications, Journalism and Mass Communications Accreditation 2006-2007, ACEJMC 2006: 15.
33 Accrediting Council on Education in Journalism and Mass Communications, Journalism and Mass Communications Accreditation 2006-2007, ACEJMC 2006: 15.


Further, the question that is central to this study is:

(2) To what extent, and in what ways, would the same story be graded or scored the same way by another faculty member teaching in journalism at another institution?

This study seeks to answer these questions by scoring a pre- and post-writing test, which could be used as part of a program’s overall assessment plan. Determining the statistical consistency among raters, or scorers, on an assessment of writing is important to the overall reliability of the assessment program itself. Consistent grading methods are important to student learning outcomes and are especially important in programs with multiple sections of basic journalism writing classes, such as the course included in this study.

LITERATURE OVERVIEW
A review of the literature related to the evaluation of various types of writing suggests that statistical analysis has been used less frequently to quantify writing for the media, and this study seeks to help provide one tool for more uniform measurement.

Importance of Reliable Measures
Reliability is the consistency of judgment across evaluations. Consistency among raters and rater reliability have been studied in other disciplines, specifically in essay writing and in English composition. It should be noted, however, that the writing instruction and styles in these areas, compared to those in journalism and mass communication, could differ dramatically, and thus the methods of evaluating the writing could also differ.
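As a simple illustration of one common way to quantify such consistency between two raters, a Pearson correlation can be computed over their scores on the same set of stories (the data below are fabricated for illustration, not drawn from any study cited here):

    from scipy import stats

    # Hypothetical scores two raters assigned to the same ten stories (0-100).
    rater_a = [78, 85, 62, 90, 71, 88, 55, 67, 93, 80]
    rater_b = [75, 88, 60, 86, 74, 85, 58, 70, 90, 77]

    r, p = stats.pearsonr(rater_a, rater_b)
    print(f"inter-rater correlation: r = {r:.2f} (p = {p:.3f})")
    # A high r indicates the raters rank the stories consistently.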

One such study explored the scoring of the essay portion of the Test of English as a Foreign Language. The study found that rater reliability should be examined at the levels of “relative ratings” and “absolute ratings,” two categories used to score the essays.34

Another study examined how instructors at two colleges scored individual student performance on sets of essays. Hayes, Hatch and Silk found an extremely low consistency of holistically scored student performance from essay to essay. The study found that drawing conclusions from one or even a few writing samples from a particular student is problematic35.

Examining the reliability of scores assigned to essays in a university writing program was the subject of another study. The study looked at rater reliability over time and found that the reliability of scores was weaker over an 11-year period than over a three-year period. The study concluded that the reliability of scores declined over time, which prompted a change in the university’s scoring rubric.36

Looking at assessment in a slightly different way, Popp, Ryan, Thompson and Behrens investigated the role of benchmark writing samples in the direct assessment of writing. Results showed that the assessed quality of the writing depends on the benchmarks chosen to define the rubric or scoring method37.

Need for Writing Measurement in Journalism Programs In one of the studies in this area in journalism and mass communication, Popovich and Massé investigated the Mass Communication Writing Apprehension Measure, looking at individual assessment of student attitudes in media writing. Using a pre- and post-test approach, the study explored students' confidence levels about their own writing abilities38. Results showed that students who were initially optimistic about writing and writing skills remained very positive after the sixteen-week semester, while the pessimistic students' attitudes grew even more negative toward the writing experience.

34 Yong-Won Lee, "The Essay Scoring and Scorer Reliability in TOEFL CBT," a paper presented at the annual meeting of the National Council on Measurement in Education, Seattle, Wash. (April 2001): Clearinghouse TM033055.
35 John R. Hayes, Jill A. Hatch, and Christine M. Silk, "Does Holistic Assessment Predict Writing Performance: Estimating the Consistency of Student Performance on Holistically Scored Writing Assignments," Written Communication 17 (1, January 2000): 3-26.
36 Qaisar Sultana, "The University Writing Requirement: A Study of the Reliability of Scores," a paper presented at the annual meeting of the Mid-South Educational Research Association, Little Rock, Ark. (November 2001): ERIC ED 460147.
37 Sharon E. Osborn Popp, Joseph M. Ryan, Marilyn S. Thompson, John T. Behrens, "Operationalizing the Rubric: The Effect of Benchmark Selection on the Assessed Quality of Writing," a paper presented at the annual meeting of the American Educational Research Association, Chicago, Ill. (April 2003): Clearinghouse TM035359.
38 Mark N. Popovich, Mark H. Massé, "Individual Assessment of Media Writing Student Attitudes: Recasting the Mass Communication Writing Apprehension Measure," Journalism and Mass Communication Quarterly 82 (2, summer 2005): 339-355.


Dodd, Mays and Tipton noted that instructors of beginning media writing courses must provide consistent instruction, particularly as students learn to write in an objective, journalistic style, which varies from traditional English composition39. The study used an open­ended questionnaire to evaluate 260 “enterprise” stories for which the students gathered their own information and interviewed their own sources. Each person who served as a primary source received a copy of the story for which he or she had been interviewed. Each source judged the story’s accuracy in six areas critical to media writing: quotes, paraphrases, facts, omissions, emphasis and overall fairness. The students later reviewed and responded to the critiques. Although many of the young writers noted that they had taken care to prepare error­free copy, the sources identified numerous mistakes. Most of the inaccuracies were related to basic facts, such as misspelled names, incorrect titles, etc.

In one of the few statistical analyses of mass communication writing, Ruffner used stepwise multiple regression analysis to predict students' grades on a timed writing assignment. The study defined three variables: demographics (specifically, age, sex, major, typing speed, media writing experience, grade point average and attitude toward writing), psychological state and the copy's syntax. Ruffner suggested that students' attitudes toward writing had a significant overall impact on the quality of their copy and that the combination of thinking ability, creativity, writing style and age offered the best predictors of writing success40.

Turning to literature in the tests and measurements area, many studies on inter­rater reliability can be found. Perhaps the foundation for many of these is “Intraclass Correlations: Uses in Assessing Rater Reliability.” The 1979 article provides guidelines for choosing among six different forms of the intraclass correlation for reliability studies among targets that are rated by judges41. This article provided the foundation for the statistical methods employed in the current study by serving as a template for construction of the data set.
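Shrout and Fleiss's framework can be made concrete with a short sketch. The following computes one of their six forms, ICC(2,1) (two-way random effects, single rater), from the ANOVA mean squares; the data matrix here is invented for demonstration and is not from this study.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1) per Shrout & Fleiss (1979): n targets (rows) each rated
    by the same k judges (columns), judges treated as random effects."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between targets
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between judges
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical scores: five stories rated by three judges on a 100-point scale.
print(round(icc_2_1([[85, 80, 82],
                     [70, 68, 75],
                     [90, 88, 86],
                     [60, 65, 58],
                     [78, 74, 80]]), 3))
```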

METHODOLOGY Students in a program accredited by ACEJMC were given a pre­ and post­writing test in a sophomore­level beginning media writing course during one academic year in the fall 2005 and spring 2006 semesters. The test was given to three sections each semester (six total sections), with a total of 70 students participating during the school year. The tests are part of the program’s overall assessment plan.

During the second week of the course, students were given a fact sheet and asked to write a news story. This was prior to any specific instruction on journalistic writing technique or style. Students were then given the same set of facts to write the story during the final week of the 15-week semester, following weeks of instruction and practice writing basic, inverted-pyramid-style print news stories.

One full-time faculty member and an adjunct faculty member, who have both taught beginning media writing (one for 20 years, the other for 10), devised a scoring sheet to rate these stories. The categories are based on common areas of emphasis in a beginning media writing course as cited in many basic media writing texts42. Categories on the score sheet include: Accuracy, Objectivity and Fairness, Lead Writing, Story Organization, Attribution, Associated Press Style, Preciseness and Wordiness, and Grammar and Punctuation. Each of these categories was further defined to include a sentence or two of description of specific areas to rate. For example, the "Story Organization" category was defined as "Structuring the story in the inverted pyramid style with logical progressions or links. Maintaining short paragraphs throughout." Each of the eight categories was assigned a numeric value (10 or 15 points) for a total possible score of 100. (See "COMM. 230 Media Writing I Pre/Post Test Evaluation" at the end of this study.)
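For illustration only, the score sheet's weights can be captured as a small data structure and checked against the stated 100-point total. This sketch is not part of the study's materials; the category names and point values come from the text above.

```python
# Hypothetical encoding of the study's eight-category score sheet.
RUBRIC_POINTS = {
    "Accuracy": 15,
    "Objectivity and Fairness": 15,
    "Lead Writing": 15,
    "Story Organization": 15,
    "Attribution": 10,
    "AP Style": 10,
    "Preciseness and Wordiness": 10,
    "Grammar and Punctuation": 10,
}

# The categories sum to the 100-point total reported in the study.
assert sum(RUBRIC_POINTS.values()) == 100
```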

The full-time faculty member rated these stories, based on the scoring sheet's criteria, in the week immediately following the administration of the pre- or post-tests respectively. Data from these tests were used to adjust the writing instruction and course curriculum to improve the results of the post-test.

39 Julie E. Dodd, Roy P. Mays, and Judy H. Tipton, "The Use of an Accuracy Survey to Improve Student Writing," Journalism & Mass Communication Educator 52 (spring 1997): 45-51. Copy prepared by advertising copy writers, journalists and public relations practitioners is intended to be read by specific audiences, which makes the process more public than traditional composition. Dodd et al. noted that the feedback element of the exercise reminded students of the potential impact of their copy on audiences and themselves, regardless of medium or type of media-related profession.
40 Michael Ruffner, "An Empirical Approach for the Assessment of Journalistic Writing," Journalism Quarterly 58 (spring 1981): 77-82.
41 Patrick E. Shrout, Joseph L. Fleiss, "Intraclass Correlations: Uses in Assessing Rater Reliability," Psychological Bulletin 86 (2, 1979): 420-428.
42 See, for example, the Missouri Group, Telling the Story, third edition (Boston: Bedford/St. Martin's, 2007) and Jan Johnson Yopp and Katherine C. McAdams, Reaching Audiences, fourth edition (Boston: Pearson, 2007).


However, in order to determine whether the scoring of these tests by the original rater was reliable, it was decided to take the assessment method a step further. Thus, this study seeks to determine to what extent the same ratings would occur if different teachers rated the same stories.

A random sample of ten students was drawn from the total population of 70, and both the pre- and post-writing tests of those ten students were used in the study. Individual students were not identified in any way.

Four faculty members at institutions in the U.S. were asked to participate by scoring the 10 sets of pre- and post-tests. The raters each received the same 20 stories arranged in the same order with scoring sheets attached to each. They were also given the original fact sheet that the students had used when writing the pre- and post-tests. Due to the nature of this study, it was essential that each participant rate the same stories. Since it is unlikely that any participant would volunteer to rate the entire population of 140 stories, a random sample of 10 sets was selected.
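The sampling step is straightforward to reproduce. A minimal sketch follows, assuming simple random sampling without replacement; the study does not report the exact mechanism it used, so the seed and student numbering here are illustrative.

```python
import random

random.seed(2006)  # illustrative seed; not reported by the study

# Draw 10 of the 70 students; each contributes a pre- and a post-test,
# yielding the 20 stories sent to every rater.
sampled_students = random.sample(range(1, 71), 10)
print(sorted(sampled_students))
```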

Scorers were not told if the story was a pre­ or post­test. The faculty members were asked to score each test based on their own criteria for a sophomore­level media writing course and to use the score sheet. They were also asked to write a separate brief description of the method they used in assigning numeric scores on the grade sheets. But the faculty members, who were in essence the subjects of this study, were not given any other information about the study in order not to prejudice their attitudes about the actual grading or scoring of the writing.

The four participating faculty members have between seven and 38 years of experience teaching a beginning media or news writing course. Among the group of raters was a retired professor emeritus who is the author of a popular media writing handbook, a former television producer and reporter, a former newspaper reporter and copy editor, and a former reporter with experience in newspapers, radio and television news. All four have either taught or are now teaching in programs accredited by ACEJMC. Three have doctorates from accredited programs; the fourth has a master's degree.

These credentials are comparable to those of the original rater, noted as Rater A in this study, who has a doctoral degree, 23 years of full-time teaching experience in accredited programs, and professional experience in public relations, newspapers and television news.

The four raters' scores were combined and compared to the original rater's scores. Because each rater was asked to score 20 tests, this yields an independent-variable sample of 80 scores. The scores were analyzed in SPSS using multiple regression analysis.

Multiple regression analyses were conducted for the scores of Rater A compared to the linear combination of the other raters' scores for the overall ratings, as well as for each of the eight subscales. This analysis included both pre-test and post-test scores for each rater. Multiple regression analysis was chosen in order to evaluate how well the scores from Raters B, C, D and F predicted the scores of Rater A, and thus to assess the strength of the relationship between the scores of Rater A and the other raters. (Rater E did not return the scoring sheets and thus is not included in this analysis.)

This is a non-traditional use of regression analysis because it is conventionally used to predict the value of a dimension in the future (e.g., SAT scores as a predictor of college performance), whereas this study examines the predictive relationship between Rater A's scores and the other scores as a unit in order to assess the overall strength of the relationship. This study is operating under the assumption that if the combination of Raters B, C, D and F's scores can predict the scores of Rater A, a strong relationship exists between the ratings. In other words, this analysis will offer insight into the level of consistency among raters by illustrating levels of predictability.

The predictor variables were the overall and subscale scores of Raters B, C, D and F, while the criterion variable was the same scores of Rater A. The scales that make up the overall score include these subscales: Accuracy, Objectivity and Fairness, Lead Writing, Story Organization, Attribution, AP Style, Precision and Wordiness, and Grammar and Punctuation. Two sets of analyses were done: one observing Rater A's individual subscale scores as predicted by the subscale scores of the other raters and one examining Rater A's overall scores as predicted by the overall scores of the other raters.
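The study reports only the SPSS output, but the analysis itself can be reproduced in other tools. Below is a minimal sketch in Python with statsmodels, fitted to synthetic stand-in scores because the raw data are not published here; all variable names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in data: 20 overall scores (10 pre-tests + 10 post-tests)
# per rater, generated around a shared latent story quality.
quality = rng.uniform(40, 95, 20)
scores = {r: quality + rng.normal(0, 5, 20) for r in "ABCDF"}

# Criterion: Rater A; predictors: the linear combination of B, C, D and F.
X = sm.add_constant(np.column_stack([scores[r] for r in "BCDF"]))
fit = sm.OLS(scores["A"], X).fit()

print(fit.rsquared)              # R^2: variance in A explained by B, C, D, F
print(fit.fvalue, fit.f_pvalue)  # overall F test and its p-value
```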

In order to demonstrate the relationships found in this study and their strength, this study reports the sample multiple correlation coefficient R2, which is the proportion of variance in the criterion variable linearly related to the combination of multiple predictor variables. This coefficient varies from 0 to 1.0, with 0 signifying no relationship between the criterion variable and the combination of predictor variables, and 1 indicating a perfect linear relationship between the two. In addition, the study reports standard tests of the significance of the relationships. Finally, the study provides scatter plots of each relationship as a visual demonstration of the linear strength of the relationships between the numerical variables. The Y-axis of each plot represents the score of Rater A on the dimension observed and the X-axis represents the combined score of the other raters. The plotted points represent actual data points. The regression line shows the predicted values of the scores of Rater A from the combined score of the other raters. The strength of a linear relationship can be seen in how closely the plotted points aggregate around the regression line.
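In standard regression notation (a textbook identity, not specific to this study), with $p$ predictors and $n$ observations:

$$R^{2} = \frac{SS_{\text{regression}}}{SS_{\text{total}}} = 1 - \frac{SS_{\text{residual}}}{SS_{\text{total}}}, \qquad F = \frac{R^{2}/p}{(1 - R^{2})/(n - p - 1)}.$$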

FINDINGS For the overall score, the linear combination of the scores of Raters B, C, D and F was significantly related to the scores of Rater A. The sample multiple correlation coefficient was R2 = .82, indicating that approximately 82% of the variance of Rater A's overall scores in the sample can be described in terms of the linear combination of the overall scores of the other raters. The probability of a relationship this strong arising by chance alone is less than .001 (analysis of variance F = 17.071), far exceeding the customary .05 or .01 levels of statistical significance. Thus, the overall scores of Rater A can generally be predicted with considerable accuracy from the scores of Raters B, C, D and F. This relationship is illustrated in Figure 1.

Figure 1
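As a check on the reported statistics, assuming the overall regression was fit over the 20 scored stories with four predictors ($p = 4$, $n = 20$):

$$F = \frac{.82/4}{(1 - .82)/(20 - 4 - 1)} = \frac{.205}{.012} \approx 17.1,$$

which agrees with the reported F = 17.071.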

As could be expected, due to the inherent gap in reliability between any whole and the parts that compose it, the correlations between the scores of Rater A and those of Raters B, C, D and F for the eight subscales are not as strong as that of the overall score, though they are still significant in most cases. Thus, Rater A's scores on the individual subscales of the rating scale are generally less predictable from the corresponding dimensions of Raters B, C, D and F.

Table 1 and Figures 2 through 5 illustrate the relationships between the scores of Raters B, C, D and F on the various subsections and those of Rater A, and how well the scores of Rater A can be predicted from the linear combination of the others on each dimension.

• The combination of the Lead Writing scores of the four other raters was significantly related to those of Rater A, with R2 = .724 and F = 9.845, p < .001. See Figure 2.


Figure 2

• As shown in Figure 3, a statistically significant relationship existed between the linear combination of the Story Organization scores from the four raters and the Story Organization scores of Rater A, with R2 = .651 and F = 6.998, p < .01.

Figure 3

• There was a significant link between the Precision and Wordiness scores of Raters B, C, D and F and those of Rater A, with R2 = .650 and F = 6.959, p < .01, as shown in Figure 4.


Figure 4

• The combination of Grammar and Punctuation scores from Raters B, C, D and F, shown in Figure 5, was significantly related to the Grammar and Punctuation scores of Rater A, with R2 = .721 and F = 9.700, p < .001.

Figure 5

• The combination of the Accuracy scores of the four raters was not significantly related to the Accuracy scores of Rater A, with R2 = .278 and F = 1.447, p = .267.
• The Objectivity and Fairness scores of Raters B, C, D and F were also not significantly related to those of Rater A, with R2 = .279 and F = 1.453, p = .265.
• The linear combination of Attribution scores from Raters B, C, D and F was not significantly related to the scores of Rater A, with R2 = .418 and F = 2.695, p = .071.
• AP Style scores from Raters B, C, D and F were also not significantly related to the AP Style scores of Rater A, with R2 = .434 and F = 2.874, p = .060.

The overall post-test scores were more predictable and thus more consistent across raters than the overall pre-test scores. For the post-test, shown in Figure 6, the overall scores of Rater A were significantly correlated with those of the other raters, R2 = .943 and F = 20.534, p < .01. However, the overall scores of Rater A on the pre-test were not significantly correlated with the scores of Raters B, C, D and F, R2 = .751 and F = 3.779, p = .089. These findings make sense, since the pre-tests would have varied widely in format and style, whereas the post-tests would have been much more consistent.

Figure 6

These findings indicate a consistency in the overall scoring of a writing test among raters in different journalism programs. This consistency was greater in the scoring of post­tests than in pre­tests. In addition, this consistency was significant for the dimensions of Lead Writing, Story Organization, Precision and Wordiness, and Grammar and Punctuation, and was not statistically significant for Accuracy, Objectivity and Fairness, Attribution, and AP Style as seen in Table 1.

Table 1: Rater A predicted by Raters B, C, D and F

Subscale                     R2      ANOVA F   p
Accuracy                     0.278   1.447     0.267
Objectivity and Fairness     0.279   1.453     0.265
Lead Writing                 0.724   9.845     0.000
Story Organization           0.651   6.998     0.002
Attribution                  0.418   2.695     0.071
Associated Press Style       0.434   2.874     0.060
Precision and Wordiness      0.650   6.959     0.002
Grammar and Punctuation      0.721   9.700     0.000

The relationship of the scores of Rater A with those of the other four raters can be explained to some extent by the individual descriptions the raters were asked to write concerning how they assigned numeric scores or deducted points in the eight categories on the score sheet.

For example, in the Lead Writing category, one of the four categories that were found to be consistently scored among the raters, all four raters indicated they deducted points or assigned a zero for leads that were longer than one sentence. Two of the four raters said they also deducted points if the lead missed the major news point.

Three of the four raters said that in the Story Organization category, they reduced the score or deducted points if the story needed to be re­ordered to reflect the news emphasis or if the copy did not flow logically from the lead.


In another category in which raters were found to be statistically consistent, Precision and Wordiness, three raters said they reduced the score for clichés, redundancies or rhetorical remarks.

In the Grammar and Punctuation category, two raters said they reduced the score for use of first or second person. Grader B explained he “took off quite a bit” for editorializing and first­ and second­person writing. Grader C said she deducted points for passive voice.

In the categories that were not found to be consistent among raters, however, the explanations of how scores were assigned were similar. For example, in the Accuracy category, all four raters indicated they assigned a zero or a very low score for factual errors. Specific factual errors noted by three of the four raters included incorrect or misspelled names or titles.

In the Objectivity and Fairness category, three raters indicated they gave low scores or a zero if important quotes were left out of the story or if there was editorializing.

And all four raters said that in the Attribution category, they deducted points from stories that had unattributed opinions or facts.

In the AP Style category, two raters said they deducted points for style errors. But in their written descriptions of how they assigned points in the eight categories, all four raters said they had difficulty deciding just how many points to deduct for specific areas such as style. Point values deducted ranged from one or two up to all 15 points in some categories. And although raters expressed frustration at not knowing exactly how many points to deduct for specific errors, overall their comments point to consistency in how they assigned numerical values.

CONCLUSIONS Teachers from three institutions and one professor emeritus all assigned scores similar to those of Rater A. The scores of Rater A can be predicted from the linear combination of the scores of the other four raters. Because there are strong relationships among raters on the overall scores and four of the subscale scores, it appears that there is considerable consistency among raters on this writing test. Thus, the post-test scores appear to be a reliable assessment of a student's writing ability because of the consistency among the five raters.

The strength of the reliability among the raters implies that the writing measure used in this study is a reliable gauge of student work. Indeed, the raters’ descriptions of the grading process suggest that the coders penalized students for similar mistakes in most of the subscales. And although the ratings of B, C, D, and F did not predict the scores of Rater A in four of the subscales, the raters’ descriptions of how they assigned points suggest that the lack of consistency in these subscale scores may be a result of differences in numeric evaluations (how many points to take off for what errors) rather than differences in substantive evaluations (what errors led to score reductions).

The pre-test scores were not as consistent among the raters. This may be due in part to the lack of specificity the raters were given in how to score the assignments. Raters were instructed to use their own criteria for sophomore-level writing classes. Different raters may have had divergent expectations for sophomore-level students. For example, one rater may have judged the pre-test stories based on an expectation of how sophomore students should be able to write with no prior instruction in journalistic writing, while another rater may have used professional-level journalistic writing standards to grade the work. Raters using a sophomore-level standard would grade the pre-test stories more generously than raters looking at these stories from a professional-standards perspective.

The lack of consistency may also be due to a relatively small sample size, an admitted weakness of this study and an avenue for further research. Only four raters scored the tests. Though each graded a total of twenty tests to compensate, more raters would strengthen the results of this study and possibly shed light on the lack of consistency between pre- and post-test ratings.

Because only four other raters were subjects in this study, more research with additional raters is necessary. However, in statistical terms, using multiple regression analysis, this study went beyond simple correlation to show not only that there is a strong relationship between Rater A and the other scores as a unit, but also that one can predict the other.

While these consistent ratings may be a result of the grading system developed for the story writing exercise, they may also be a result of similar training and professional experience among the coders. Further testing is needed to determine whether other factors such as demographics, program size, teaching experience and professional media experience would influence the level of grading consistency among raters. Additionally, other grading methods and rubrics, or variations on the method used here, could determine whether the grading method itself influenced consistency. A more extensive explanation of the grading method used in this study also may have increased rater reliability. Additional research into different amounts and types of grading explanations could suggest ways to improve consistency in grading journalistic writing samples.

COMM 230 Media Writing I Pre/Post Test Evaluation

Name: Test: Test Date: Notes:

Category (Possible Points): Description
Accuracy (15): All information, including numbers, spelling of names, etc., is accurately communicated in a clear manner. No information was added that was not provided.
Objectivity and Fairness (15): Reporting all sides; eliminating the writer's bias from writing. Telling a complete story.
Lead Writing (15): Crafting a newsworthy, interesting and concise lead sentence consisting of no more than 25 to 30 words.
Story Organization (15): Structuring the story in the inverted pyramid style with logical progressions or links. Maintaining short paragraphs throughout.
Attribution (10): Using appropriate sources, properly attributing information to sources and using indirect/direct quotes correctly.
AP Style (10): Adhering to AP Style for all elements including titles, numbers, addresses, states, etc.
Preciseness and Wordiness (10): Article contains specific details that tell the story. Story is to-the-point without extraneous words or phrases. Clichés are eliminated.
Grammar and Punctuation (10): Correctly punctuating sentences, dates, titles, quotations, etc. Story is written in the third person, without the use of I, me and you unless contained in direct quotes.

Total Score


THE EFFECT OF DISTANCE EDUCATION LECTURE FORMAT ON STUDENT APPLICATION

Wendy Cowan, Yvette Bolen, Prentice Chandler, Bruce Thomas, Kathy Buck and Lisa Hyde Athens State University, USA

ABSTRACT The philosophical underpinning of this study begins with and follows the outline of Albert Bandura's Social Cognitive Theory (SCT). In SCT, social actors' attitudes toward tasks and their environment in general are shaped by external forces and, as a result, new behaviors are learned through "modeling" or simply observing an action. Observational or "social learning" is a particularly useful way of thinking about online education because so much of what students do and how they interact with technology are skills that can be taught by modeling or showing students how to perform a particular task. The relationship between a student's performance and his own reinforcement of ability is an important indicator in learning a new task or idea. With the increasing popularity of online and distance education across the country, researchers have begun to look at ways in which online delivery of content and related skills can be improved. The purpose of this study is to determine the extent to which the instructional context through which a distance education lecture is transmitted affects student ability to successfully apply lecture content. The researchers hypothesize that participants receiving instruction through online modeling (Tegrity) will score higher on the LiveText assessment task than participants who receive instruction through text and illustration only (PowerPoint slide presentation). An independent t-test (t = -1.173, df = 56, p > .05) determined that there was no significant difference between the group receiving instruction through online modeling and the group receiving text and illustration only instruction (Tegrity session and PowerPoint presentation, respectively). Although the LiveText assessment score was not significantly different between the online modeling group and the text and illustration only group, it was higher in the online modeling group (19.48 ± 4.45 vs. 17.97 ± 5.36, respectively). Results of this ongoing research found that a positive trend seemed to exist for those who received online modeling. It is possible that employing a larger sample size would have produced statistical significance. Although current research findings are promising, the greatest limitation of this project is the small sample size (n = 58). The final study population will include a significantly larger number of participants. In addition, the analyses conducted at the completion of this research project will include all factors measured.

Keywords: Distance Education, Online Learning, Modeling

INTRODUCTION Social Cognitive Theory The philosophical underpinning of this study begins with and follows the outline of Albert Bandura's Social Cognitive Theory (SCT). In SCT, social actors' attitudes toward tasks and their environment in general are shaped by external forces and, as a result, new behaviors are learned through "modeling" or simply observing an action (Bandura, 1977). Observational or "social learning" (Aronson, 2004) is a particularly useful way of thinking about online education because so much of what students do and how they interact with technology are skills that can be taught by modeling or showing students how to perform a particular task. The relationship between a student's performance and his own reinforcement of ability is an important indicator in learning a new task or idea (Krasner & Ullmann, 1965). Perhaps the most salient aspect of SCT is the overarching theme of reciprocal determinism. In this construct:

Environmental events, personal factors, and behavior all operate as interacting determinants of one another. Reciprocal causation provides people with opportunities to exercise some control over their destinies as well sets limits of self­direction…Human thought is a powerful instrument for comprehending the environment and dealing with it (Bandura, 1986, xi).

Bandura's theory of social learning is directed at the relationship between interacting factors—cognitive, affective, biological—that meet to determine the effort required to make a change of behavior manifest, that is, to learn a new behavior (Bandura, 1998). This intersecting matrix of factors is important for the implementation of online education, and for teaching students to use an online delivery format, because SCT has shown that once subjects (i.e., students) have observed a model, "the best predictor of how well they can perform a similar task is the extent to which they now expect that they will be able to do it themselves" (Mook, 2004).

Cognitive Science and Distance Education With the increasing popularity of online and distance education across the country (Hooper & Hokanson, 2000; McKay, 2007), researchers have begun to look at ways to improve the delivery of content and related skills. The multifaceted goals of distance education are to increase access to educational services, disseminate information to students in the most cost-effective way possible, and enable teachers of classes to handle a larger number of students (Schlager, 2004). In their study of web-based learning systems, Wang and Lin (2007) found that the organization and implementation of such platforms in education parallel the theoretical underpinnings of SCT. Their findings suggest that motivation, environment, and demands made upon students interact to give self-regulated learners a positive learning experience with distance education. Their research mirrors other research (Leong, 2008) suggesting that internal (i.e., cognitive) behaviors interact with social factors, such as interest and learner agency.

Learning or cognitive styles within the context of this study can be thought of as one of the intersecting matrices that determine the type and quality of a student's experience with online education. Chen and Macredie (2004) found that the learning styles of the participants impacted the ways in which students reacted to the learning environment and the ways in which they dealt with the problems they faced with online delivery. Furthermore, the success of online education is closely linked to the ways in which the initial context (i.e., first contact with delivery systems) is constructed for the online learner (Bossard, Kermarrec, Buche, & Tisseau, 2008). SCT, when applied to learners' interactions with the material, gives the learner a sense of immediacy that is greater than in the learning that takes place in traditional lecture settings, particularly if modeling is present. In this way (and through SCT) the distance education format gives the learner a social and status incentive to learn (LaRose & Whitten, 2000). As instructors continue to grapple with the demands and needs of students in online courses, several themes emerge from the research: attention to the learning styles of students, making the learning e-environment comfortable, and disseminating knowledge based on the learner's needs—all increase the value of the educational experience (Sargeant, Curran, Allen, Jarvis, & Ho, 2006).

Distance Education Research Studies of distance learning have concluded that technologies used in online learning are not significantly different from regular classroom learning in terms of effectiveness (Means, Toyama, Murphy, Bakia, & Jones, 2009). In response to the large number of comparative effectiveness studies, metastudies have emerged to summarize findings. Olson and Wisher (2002) concluded that Web-based instruction is at least as effective as traditional classroom instruction, based upon a review of 47 evaluations of Web-based courses in higher education published between 1996 and 2002. Members of the U.S. Department of Education Center for Technology conducted a meta-analysis of over 1,000 empirical online learning studies ranging from 1996 through July 2008, finding that students in online learning conditions performed better than students receiving face-to-face instruction (Means et al., 2009).

Studies have also examined the extent to which distance learning students perceive specific teaching strategies or instructional media to be effective. Maki, Maki, Patterson and Whittaker (2000) tracked and compared the achievement of university students learning through Web-based distance learning courses versus traditional land-based courses with different instructors and over a number of semesters. While students communicated less satisfaction with distance learning courses, distance learners' pre-test and post-test scores were twice as high as those of students enrolled in traditional courses.

A major determinant of student success in a distance-learning course is the type of technology utilized for the delivery of the course material and the method by which students and teacher interact (Moore & Kearsley, 2005). Hiltz, Coppola, Rotter, Turoff, and Benbunan-Fich (2000) studied the effect of collaborative learning strategies on online teaching success. Findings showed that students actively involved in collaborative learning performed as well as or better than students in traditional learning classes. Online students scored lower when they only received posted material and were required to send back individual assignments with little or no collaborative involvement.

Online learning is vastly different from its early beginnings, which included the use of correspondence courses, teleconferencing, and televised broadcasts. A diverse range of technology is now available to facilitate online instruction. In light of the wide choice of online learning applications, an examination of the efficacy of the specific technology being used is warranted.


Purpose of the Study The purpose of this study is to determine the extent to which the instructional context through which a distance-education lecture is transmitted affects student ability to successfully apply lecture content. The researchers hypothesize that participants receiving instruction through online modeling (Tegrity) will score higher on the LiveText assessment task than participants who receive instruction through text and illustration only (PowerPoint slide presentation).

METHODS Approach to the Problem The research project utilized a true experimental design in order to examine the hypothesis and research question. The specific approach was a randomized, posttest-only, control-group design. The research being reported is part of an ongoing research project. The final subject pool will consist of considerably more participants.

Subjects The subjects being reported upon in this paper consisted of 71 junior­ and senior­level university students enrolled in teacher preparation courses at a university located in north Alabama. Subjects volunteered for the study and were randomly assigned to treatment or control groups. Of the original 71 subjects, 58 completed the study requirements. The treatment and control groups each consisted of 29 subjects, including 46 females and 12 males. These subjects represented four teacher education programs, including 35 elementary education majors, 12 physical education majors, 9 secondary education majors, and 2 collaborative education majors. Study approval was granted by the university’s Human Subjects Committee.

Procedure The instructors who participated in this study volunteered to proctor the study condition for their respective classes. They agreed to adhere to protocol as designed by the researcher, which included following the provided scripted study procedures.

Following random assignment to a control or treatment group, each participant was seated at a desk containing a personal computer and monitor. All participants completed a demographic questionnaire and learning style inventory prior to participating in the treatment portion of the study.

Participants in the control group viewed a PowerPoint session, utilizing illustration and text, consisting of step-by-step procedures for creating an assessment in LiveText (LiveText Solutions, 1997-2009), a web-based accreditation management system. In addition, they were given a hard copy of the PowerPoint session. The treatment group viewed a Tegrity session (Tegrity, Inc., 1995-2009), an asynchronous video streaming service used for lecture capture. The Tegrity session consisted of the presenter modeling and explaining the step-by-step procedures for creating an assessment in LiveText. Participants in the Tegrity group were able to watch and listen to the procedures while participants in the control group were only able to view the illustrations and read the text for each of the steps. Both groups were provided with blank paper on which they could take notes and were provided with 27 minutes to view the respective sessions.

Upon completion of the time allotted for viewing the PowerPoint or Tegrity session, students returned the note paper and printed PowerPoint materials. Immediately afterward, the students were asked to create an assessment in LiveText following the specific requirements provided on an 18-item task sheet.

The completed assessment was shared with the researcher and a final score was given based on the extent to which the participant was able to create the assessment according to the given requirements. An outcome score of 23 on the assessment task was considered 100% correct.

Data Analysis Demographic data were collected on age, gender, ethnicity, academic major, preferred learning style, number of semesters of university attendance, and number of semesters of technology use. An independent t­test was utilized to determine whether a significant difference existed on LiveText assessment scores between the treatment group, receiving instruction through online modeling (Tegrity), and control group, receiving instruction through text and illustration only (PowerPoint Slide Show). Significance was set at p < 0.05.


RESULTS The analysis to be performed at the conclusion of this research project will include analyses of all factors measured. Current analysis consists of hypothesis testing only.

Hypothesis Testing The research question for this project is: to what extent does the instructional context through which a distance education lecture is transmitted affect student ability to apply the lecture content? The researchers hypothesize that participants receiving instruction through online modeling (Tegrity) will score higher on the LiveText assessment task than participants who receive instruction through text and illustration only (PowerPoint slide presentation).

An independent t-test (t = -1.173, df = 56, p > .05) determined that there was no significant difference between the group receiving instruction through online modeling and the group receiving text and illustration only instruction (Tegrity session and PowerPoint presentation, respectively). Although the LiveText assessment score was not significantly different between the online modeling group and the text and illustration only group, it was higher in the online modeling group (19.48 ± 4.45 vs. 17.97 ± 5.36, respectively).
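The reported t can be recovered from the published summary statistics alone. A quick check with SciPy follows, assuming an equal-variance independent t-test with 29 subjects per group (consistent with df = 56); small rounding differences from the reported t = -1.173 are expected.

```python
from scipy import stats

# Group summaries as reported: mean, standard deviation, n.
t, p = stats.ttest_ind_from_stats(
    mean1=17.97, std1=5.36, nobs1=29,  # control: PowerPoint (text/illustration)
    mean2=19.48, std2=4.45, nobs2=29,  # treatment: Tegrity (online modeling)
)
print(t, p)  # roughly t = -1.17 on df = 56, p > .05, matching the paper
```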

DISCUSSION The research question for this project is: to what extent does the instructional context through which a distance education lecture is transmitted affect student ability to apply the lecture content? The researchers hypothesize that participants receiving instruction through online modeling (Tegrity) will score higher on the LiveText assessment task than participants who receive instruction through text and illustration only (PowerPoint slide presentation). Although there were no significant findings in regard to hypothesis testing, the treatment group did score higher on the LiveText outcome assessment than the control group (19.48 ± 4.45 vs. 17.97 ± 5.36, respectively).

Social cognitive theory posits that social actors' attitudes toward tasks and their environment in general are shaped by external forces and, as a result, new behaviors are learned through "modeling" or simply observing an action (Bandura, 1977). Observational or "social learning" (Aronson, 2004) is a particularly useful way of thinking about online education because so much of what students do and how they interact with technology are skills that can be taught by modeling or showing students how to perform a particular task. Results of this ongoing research found that a positive trend seemed to exist for those who received online modeling. It is possible that employing a larger sample size would have produced statistical significance.

Limitations Although current research findings are promising, the greatest limitation of this project is the small sample size (n = 58). The final study population will include a significantly larger number of participants. In addition, the analyses conducted at the completion of this research project will include all factors measured.

Future Research Based on the initial results of this ongoing research project, future endeavors should include specific comparisons between the following three instructional methods: (a) instructional delivery through online modeling (Tegrity), (b) instructional delivery through text and illustration only, and (c) instructional delivery through traditional (land-based) methods.

REFERENCES
Aronson, E. (2004). The social animal. New York: Worth.
Bandura, A. (1977). Social learning theory. Englewood: Prentice-Hall.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood: Prentice-Hall.
Bandura, A. (1998). Self-efficacy: The exercise of control. New York: W.H. Freeman & Company.
Bossard, C., Kermarrec, G., Buche, C., & Tisseau, J. (2008). Transfer of learning in virtual environments: A new challenge. Virtual Reality, 12(3), 151-162.
Chen, S. & Macredie, R. (2004). Cognitive modeling of student learning in web-based instructional programs. International Journal of Human-Computer Interaction, 17(3), 375-402.
Hiltz, S. R., Coppola, N., Rotter, N., Turoff, M., & Benbunan-Fich, R. (2000). Measuring the importance of collaborative learning for the effectiveness of ALN: A multi-measure, multi-method approach. Journal of Asynchronous Learning Networks, 4(2).
Hooper, S. & Hokanson, B. (2000). The changing face of knowledge. Social Education, 64(1), 28-31.
Krasner, L. & Ullmann, L. (1965). Research in behavior modification: New developments and implications. New York: Holt, Rinehart, & Winston.
LaRose, R. & Whitten, P. (2000). Re-thinking instructional immediacy for web courses: A social cognitive exploration. Communication Education, 49(4), 320-339.
Leong, P. (2008). Understanding interactivity in online learning environments: The role of social presence and cognitive absorption in student satisfaction with online courses. ProQuest Information & Learning, 68(11-A), 4675.
LiveText Solutions. (1997-2009). Retrieved September 25, 2009, from www.college.livetext.com
Maki, R. H., Maki, W. S., Patterson, M., & Whittaker, P. D. (2000). Evaluation of a Web-based introductory psychology course. Behavior Research Methods, Instruments & Computers, 32, 230-239.
McKay, E. (2007). Enhancing learning through human computer interaction. Hershey: Idea Group.
Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. U.S. Department of Education Office of Planning, Evaluation, and Policy Development – Center for Technology in Learning.
Mook, D. (2004). Classic experiments in psychology. Westport: Greenwood.
Moore, M. & Kearsley, G. (2005). Distance education – A systems view. Belmont: Thomson Wadsworth.
Olson, T. M., & Wisher, R. A. (2002). The effectiveness of web-based instruction: An initial inquiry. International Review of Research in Open and Distance Learning, 3(2).
Sargeant, J., Curran, V., Allen, M., Jarvis, S., & Ho, K. (2006). Facilitating interpersonal interaction and learning online: Linking theory and practice. The Journal of Continuing Education in the Health Professions, 26, 128-136.
Schlager, M. (2004). Enabling new forms of online management: Challenges for e-learning design and research. In T. Duffy & J. Kirkley (Eds.), Learner-centered theory and practice in distance education: Cases from higher education (pp. 91-104). Mahwah: Lawrence Erlbaum.
Tegrity 2.0. (1996-2009). [Computer software as service]. Santa Clara, CA: Tegrity, Inc.
Wang, S. & Lin, S. (2007). The application of social cognitive theory to web-based learning through NetPorts. British Journal of Educational Technology, 38(4), 600-612.


IMPROVING THE DELIVERY OF ONLINE BUSINESS COURSES: A CONTINUOUS IMPROVEMENT PROCESS

Robert W. Robertson Saint Leo University, USA

ABSTRACT The continued growth of online education demands a new model of academic quality assurance. In particular, the online academic environment includes a variety of variables that make it different from the traditional classroom model of learning. As a result, quality control must reflect the unique variables of the online environment.

Some of the unique variables include the increased use of adjunct faculty; the dispersed student body accessing the course on their own schedule twenty-four hours a day, seven days a week; and, of course, the use of a computer to structure the lessons.

This paper will track the growth of online learning; document the key differences between the online and in-class experience; and offer suggestions, through the use of cases, to assist in developing "best practices" and a continuous improvement framework within the online environment.


THE ARISTOLESLIAN MODEL FOR ETHICAL DECISION MAKING: PROPOSING A MODEL FOR TEACHING ETHICAL DECISION MAKING IN COMMUNICATION

Kingsley O. Harbor Jacksonville State University, USA

EXTENDED ABSTRACT It is an uphill—if not an impossible—task to argue against the necessity of ethics, which is perhaps greater now than at any other time in recent history. Evidence of moral decadence in contemporary society is overwhelming—crimes and their nature, drugs, corporate malfeasance, corrupt or incompetent politicians, the proliferation of ethics courses in colleges and universities, and, among others, the rise of government-established ethics commissions. All attest to the urgency of restoring morality to a society that needs it desperately.

The high demand for ethics in contemporary society consequently dictates the need for more effective ways of teaching ethics, part of which is how to make ethical decisions. On its face, ethical decision making appears quite simple, but in practice, it is more challenging than it appears, especially when one is faced with an ethical dilemma.

Ethics scholars and educators make use of ethics models in order to simplify and teach the difficult task of making ethical decisions. Ethics models are not designed to provide specific answers to specific ethical problems because there is no single answer that can fit all ethical situations. These models, therefore, provide a road map that can assist a moral agent in coming up with a possible solution to a problem. While these models reduce the level of abstraction involved in ethical decision making by suggesting a process by which a decision can be reached, some of them do a better job than others in terms of closing the abstraction gap between an ethical issue and its solution. Two such models are found in Aristotle's famous theory—the Aristotelian Golden Mean—and in Leslie's Ethical Decision Making Model.

To develop a model that further bridges the gap between an ethical issue and its solution, this study proposes an amalgamation of Aristotle's Golden Mean with Leslie's Ethical Decision Making Model to produce what it (this study) terms the AristoLeslian Model for Ethical Decision Making. The AristoLeslian Model for Ethical Decision Making takes the strengths of Aristotle's theory and Leslie's model and improves on each one's weaknesses, thus further closing the abstraction gap between a problem and its solution and, consequently, advancing the effectiveness of ethical decision making in communication.


THE EFFECT OF SUMMER LANGUAGE INTERVENTION PROGRAM ON VOCABULARY DEVELOPMENT OF ESL THIRD GRADE STUDENTS IN AN URBAN MISSISSIPPI SCHOOL DISTRICT

Vickie Latham*, Jianjun Yin¥, Vivian Taylor¥ and Linda Channell¥ Jackson Public Schools* and Jackson State University¥, USA

ABSTRACT This study was undertaken to address a critical issue in an urban Mississippi school district where English Language Learners are being served. The purpose of this study was to document the effects of a teacher­designed intervention session on the vocabulary development of ESL learners. The participants of this research were rising third graders receiving English as a Second Language services. The sample population for the study consisted of students who had been in a U.S. public school classroom for the last two years and whose parents agreed for them to participate in the summer intervention session/research group.

In the course of the study sessions, activities were planned incorporating the use of instructional resources that provided the multi-sensory intervention approaches necessary for students with limited background knowledge in English. These interventions allowed students to process information through their senses in order to understand it, recall it, and later use it in future learning situations. Activities were designed to provide multiple paths for students to develop vocabulary skills and word choice while continuously reinforcing enhanced word usage.

The instruments used to document the impact of the intervention sessions consisted of pretest and posttest assessment measures. The posttest measurement, a repeated activity of the pretest, was conducted at the end of the session. This provided two data sets for the same sample and, when analyzed, provided statistical information on the significance level of the intervention sessions. Both pre- and posttest measures were assessed on a rubric scale of 1 to 5 in collaboration with the teacher of the ESL summer language program.

Although the analyzed data show no significant effect of a summer language intervention program on the vocabulary development of third grade students, it is noteworthy that these findings do not mean the session was unimportant. It is believed that variables other than the design of the intervention session impacted the level of significance.

It is recommended that some or all of these variables be explored in future research regarding intervention sessions and their effect on the vocabulary development of English Language Learners. It is also recommended that a larger group of participants be studied in order to generalize more effectively about the specific level of effect that intervention sessions of this nature have on English Language Learners.

ABOUT THE AUTHORS: Vickie Latham, MS, is a third grade teacher in Jackson Public Schools (MS). She is currently pursuing her Ed.S. degree at Jackson State University. Jianjun Yin, Ph.D., is an associate professor of education at Jackson State University. Vivian Taylor, Ed.D., is a professor of education at Jackson State University. Linda Channell, Ed.D., is an associate professor of education at Jackson State University.


THE FIRST YEAR OF COLLEGE: BEGINNING THE TRANSITION FROM ADOLESCENT TO ADULT

Brian A. Griffith Vanderbilt University, USA

ABSTRACT Students often enter college in moratorium, that state of psychological flux where individuals begin to question and explore options regarding personal and interpersonal definitions. According to Erik Erikson (1959), the transition between adolescence and adulthood is the time when a mature identity is formed, upon which future developmental milestones of intimacy, generativity and integrity are dependent. Adolescents are constructing a “personality within a social reality which one understands… that his individual way of mastering experience is a successful variant of the way other people around him master experience” (p. 89). These are formative years as students gain physical and emotional distance from parents and families and attempt to make sense of life and find their way in the world.

Erikson's (1959) theory of identity formation, further elaborated by Marcia (1966), describes the process of critical examination (exploration) and adoption (commitment) of a cohesive identity and coherent worldview. This presentation describes a comprehensive first year curriculum that facilitates such a developmental process. The First Year Experience in the Human and Organizational Development Program at Vanderbilt University facilitates psychosocial development through knowledge acquisition and personal application. These foundations help students acquire the knowledge, skills and attitudes that will guide the transition from adolescent to adult. Learning objectives include the development of (a) an accurate yet complex understanding of self, others, and the world, (b) morally sound, cooperative strategies for managing life experiences, (c) meaningful life goals that value relationships and the common good, and (d) practical skills that prepare them for the workforce.

Keywords: School to Work Transition, Adult Development, Higher Education, Self Learning, Talent Development


A COMPARISON OF TWO TYPES OF SOCIAL SUPPORT FOR MOTHERS OF MENTALLY ILL CHILDREN

Kathleen Scharer1, Eileen J. Colon2, Linda Moneyham3, Jim Hussey4, Abbas Tavakoli5 and Margaret Shugart6 University of South Carolina1, 4, 5, Western Carolina University2 University of Alabama at Birmingham3 and Emory University6, USA

ABSTRACT
Problem: The purpose of this analysis was to compare the social support offered by two telehealth nursing interventions for mothers of children with serious mental illnesses.
Methods: A randomized, controlled, quantitative investigation is underway to test two support interventions, using the telephone (TSS) or the Internet (WEB). Qualitative description was used to analyze data generated during the telehealth interventions.
Findings: The behaviors and attitudes of the children were challenging for the mothers to manage. Mothers' emotional reactions included fear, frustration, concern and guilt. They sought to be advocates for their children. The nurses provided emotional, informational and appraisal support. TSS mothers were passive recipients, while WEB mothers had to choose to participate.
Conclusions: Mothers in both interventions shared similar concerns and sought support related to their child's problems.

Keywords: Mothers, Mental Illness in Children, Social Support, Telehealth

“DISCIPLINE AND CONFINEMENT: CRIME AND PUNISHMENT IN COLONIAL SIERRA LEONE”

Ibrahim Kargbo Coppin State University, USA

ABSTRACT The colonial criminal justice system in Sierra Leone was set up primarily as a social control mechanism and a reflection of the coercive power of the colonial governments, intended to prevent crime and punish those who deviated from what colonial officials deemed appropriate and acceptable. Thus, the colonial criminal justice system in Sierra Leone was essentially an instrument of political and social control designed to maintain law and order and punish law violators. Criminal acts were viewed as wrongs against the colonial state. Therefore, the introduction of colonial legal systems in Sierra Leone was accompanied not only by a redefinition and reclassification of crimes but also by changes in the social perception of crime and punishment. Moreover, the procedures for arresting, processing, adjudicating, and punishing offenders also changed.

This paper will examine the nature of crime and punishment in colonial Sierra Leone, highlighting the various ordinances enacted to prevent crime, statistics on those incarcerated, the types of crimes committed, juvenile crime, and the rate of repeat offenders.

BURNOUT AMONG FEMALE CLUB VOLLEYBALL PLAYERS

Jeff Eyanson, Malia S. Lawrence and Joseph K. Mintah Azusa Pacific University, USA

ABSTRACT Burnout has been a buzzword since the early 1970s, beginning in the health services industry and advancing to the educational and athletic fields. Since then, there has been extensive research (Coakley, 1992; Lai & Wiggins, 2003; Raedeke, Lunney, & Venables, 1990) on athletic burnout. However, few studies have focused on the three burnout indicators: emotional exhaustion, depersonalization, and personal accomplishment. The dual purposes of this study were to: (1) investigate the frequency and intensity of burnout among female volleyball players, and (2) examine the three burnout indicators of emotional exhaustion, depersonalization, and personal accomplishment. Southern California female club volleyball players (N = 30), ages 17-18 years, responded to a modified Maslach Burnout Inventory (Maslach & Jackson, 1996). One-sample t-tests showed statistically significant differences in the frequency of emotional exhaustion, t(29) = 5.65, frequency of personal accomplishment, t(29) = 6.39, and frequency of depersonalization, t(29) = -19.57. Similar statistically significant differences were found in the intensity of emotional exhaustion, t(29) = 7.64, personal accomplishment, t(29) = 6.30, and depersonalization, t(29) = -3.37. Independent-samples t-test results showed statistically significant differences in participants’ perceptions of community for both frequency of depersonalization, t(28) = -4.28, and intensity of depersonalization, t(28) = -3.37. Separate one-way ANOVAs showed statistically significant differences across race in frequency of emotional exhaustion, F(2, 27) = 9.11, and intensity of emotional exhaustion, F(2, 27) = 8.33. In general, participants reported being burned out; more specifically, they reported feeling emotionally exhausted. Findings are discussed, and implications and recommendations for sport science researchers and athletic coaches are offered.
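As a point of reference (an editorial note, not part of the authors’ abstract), the degrees of freedom reported above follow directly from the sample sizes: a one-sample t-test on N = 30 players carries N - 1 = 29 degrees of freedom, an independent-samples comparison of the same 30 participants split into two groups carries N - 2 = 28, and a one-way ANOVA across k = 3 racial groups carries (k - 1, N - k) = (2, 27):

\[
t = \frac{\bar{x} - \mu_0}{s/\sqrt{N}}, \qquad
df_{\text{one-sample}} = N - 1 = 29, \qquad
df_{\text{independent}} = N - 2 = 28, \qquad
df_{\text{ANOVA}} = (k-1,\ N-k) = (2,\ 27)
\]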

Keywords: Burnout, Female, Club Volleyball, Emotional Exhaustion, Depersonalization, Personal Accomplishment

“SODA CONSUMPTION IN OVERWEIGHT AND AT­RISK ELEMENTARY CHILDREN”

Yvette Bolen1, Bruce Thomas2, Ben Heatherly3 and James Reid4 Athens State University1,2, Brookhill Elementary School3, Huntingdon College4, USA

ABSTRACT Increasingly, researchers have investigated the factors associated with the rise in childhood obesity. Children identified as obese and having a higher body mass index (BMI) have increased risk for elevated cholesterol, hypertension, heart disease, orthopedic issues, depression, asthma, and type 2 diabetes (Centers for Disease Control and Prevention, 2008). Body Mass Index (BMI) is a measure of the relationship between a person’s weight and height, determined by dividing body weight in kilograms by height in meters squared (see the worked example following this abstract). Instead of using the traditional height/weight charts, the National Institutes of Health (NIH) currently uses the BMI to define normal weight, overweight, and obesity (in MedTerms Dictionary, 2008). When BMI scores are utilized, it should be noted that some individuals have a greater amount of muscle, which weighs more, and therefore have a higher BMI without the associated risks. The World Health Organization (2002) indicated that there is a growing body of evidence that child obesity is a global epidemic. Studies have indicated that sweetened soda consumption plays a contributing role in the excessive weight gain experienced by elementary-aged students (Ludwig, Peterson, & Gortmaker, 2001). The purpose of this study was to investigate the connection between soda consumption and childhood obesity. Of the 221 third-grade subjects, 87 were identified as overweight, 58 as at-risk, and 76 as healthy, based on BMI scores. A one-way analysis of variance (ANOVA) was utilized to determine differences among these three groups in soda consumption per day. Findings indicated a significant difference in soda consumption, F(2, 218) = 23.127, p < .001. Results revealed that the mean daily soda consumption score of overweight participants was 2.16, while the mean daily soda consumption scores of at-risk and healthy participants were 1.22 and .5132, respectively. A crosstabs statistical technique was utilized to further investigate soda consumption (diet, sweetened, or no soda consumed). Of the students identified as overweight, nine consumed sweetened soda, nine consumed diet soda, and eight consumed no soda. Twenty-four at-risk students consumed sweetened soda, 13 consumed diet soda, and 21 consumed no soda. Of the 76 healthy subjects, nine consumed sweetened soda, 29 consumed diet soda, and 38 consumed no soda. Therefore, 88% of healthy subjects consumed either diet soda or no soda, while 80% of the overweight subjects consumed sweetened soda. Though the obesity epidemic is due to a variety of factors, this study identifies soda consumption in elementary-aged students, particularly sweetened soda, as a key contributor to childhood weight gain. Studies have shown that a diet that reduces children’s intake of beverages low in nutrients and high in calories may help lessen or prevent childhood obesity. It is strongly recommended that organizations providing services to children, including school systems, implement changes that effectively reduce children’s consumption of sweetened sodas. It is imperative for adults to recognize that nutritional choices can determine the health status of the children for whom they are responsible, and all adults and caregivers responsible for the nutritional habits of children must take a proactive approach when making critical beverage choices.
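As an illustrative aside (not part of the original abstract), the BMI definition above reduces to a simple ratio, and the 88% figure can be verified from the reported crosstab counts. The 30 kg and 1.35 m values in the first line are assumed example figures chosen only for illustration, not data from the study; the second line uses the study’s own counts for the healthy group:

\[
\mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^2}, \qquad
\text{e.g., } \frac{30\ \text{kg}}{(1.35\ \text{m})^2} \approx 16.5\ \text{kg/m}^2
\]

\[
\frac{29 + 38}{76} = \frac{67}{76} \approx 88\% \quad \text{(healthy subjects consuming diet soda or no soda)}
\]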

Keywords: Childhood Obesity, Body Mass Index (BMI), Childhood Soda Consumption

REFERENCES
Body Mass Index. (n.d.). In MedTerms Dictionary. Retrieved September 6, 2008, from http://www.medterms.com/
Centers for Disease Control and Prevention. (n.d.). Retrieved September 1, 2009, from http://www.cdc.gov
Ludwig, D. S., Peterson, K. E., & Gortmaker, S. L. (2001). Relation between consumption of sugar-sweetened drinks and childhood obesity: a prospective, observational analysis. The Lancet. Retrieved September 11, 2009, from http://epsl.asu.edu/ceru/Documents/lancet.pdf
World Health Organization. (2002). Controlling the global obesity epidemic. Retrieved September 2, 2009, from http://www.who.int/nut/obs.html

For information about Intellectbase International Consortium and associated journals of the conference, please visit www.intellectbase.org

JAGR-Journal of Applied Global Research
IJAISL-International Journal of Accounting Information Science and Leadership
RMIC-Review of Management Innovation and Creativity
IJSHIM-International Journal of Social Health Information Systems Management
JGIP-Journal of Global Intelligence and Policy
RHESL-Review of Higher Education and Self-Learning
JISTP-Journal of Information Systems Technology and Planning
IJEDAS-International Journal of Electronic Data Administration and Security
JIBMR-Journal of International Business Management & Research
JOIM-Journal of Organizational Information Management

[BEST MAPS logo: Business, Education, Science, Technology; Management, Administration, Political, Social; multi-disciplinary foundations and intellectual perspectives]

JKHRM-Journal of Knowledge and Human Resource Management
JGISM-Journal of Global Health Information Systems in Medicine
IHPPL-Intellectbase Handbook of Professional Practice and Learning
JWBSTE-Journal of Web-Based Socio-Technical Engineering