American Journal of Management

North American Business Press Atlanta - Seattle - South Florida - Toronto

American Journal of Management

Editor Dr. Howard Miller

Editor-In-Chief Dr. David Smith

NABP EDITORIAL ADVISORY BOARD

Dr. Andy Bertsch - MINOT STATE UNIVERSITY
Dr. Jacob Bikker - UTRECHT UNIVERSITY, NETHERLANDS
Dr. Bill Bommer - CALIFORNIA STATE UNIVERSITY, FRESNO
Dr. Michael Bond - UNIVERSITY OF ARIZONA
Dr. Charles Butler - COLORADO STATE UNIVERSITY
Dr. Jon Carrick - STETSON UNIVERSITY
Dr. Mondher Cherif - REIMS, FRANCE
Dr. Daniel Condon - DOMINICAN UNIVERSITY, CHICAGO
Dr. Bahram Dadgostar - LAKEHEAD UNIVERSITY, CANADA
Dr. Deborah Erdos-Knapp - KENT STATE UNIVERSITY
Dr. Bruce Forster - UNIVERSITY OF NEBRASKA, KEARNEY
Dr. Nancy Furlow - MARYMOUNT UNIVERSITY
Dr. Mark Gershon - TEMPLE UNIVERSITY
Dr. Philippe Gregoire - UNIVERSITY OF LAVAL, CANADA
Dr. Donald Grunewald - IONA COLLEGE
Dr. Samanthala Hettihewa - UNIVERSITY OF BALLARAT, AUSTRALIA
Dr. Russell Kashian - UNIVERSITY OF WISCONSIN, WHITEWATER
Dr. Jeffrey Kennedy - PALM BEACH ATLANTIC UNIVERSITY
Dr. Jerry Knutson - AG EDWARDS
Dr. Dean Koutramanis - UNIVERSITY OF TAMPA
Dr. Malek Lashgari - UNIVERSITY OF HARTFORD
Dr. Priscilla Liang - CALIFORNIA STATE UNIVERSITY, CHANNEL ISLANDS
Dr. Tony Matias - MATIAS AND ASSOCIATES
Dr. Patti Meglich - UNIVERSITY OF NEBRASKA, OMAHA
Dr. Robert Metts - UNIVERSITY OF NEVADA, RENO
Dr. Adil Mouhammed - UNIVERSITY OF ILLINOIS, SPRINGFIELD
Dr. Roy Pearson - COLLEGE OF WILLIAM AND MARY
Dr. Sergiy Rakhmayil - RYERSON UNIVERSITY, CANADA
Dr. Robert Scherer - CLEVELAND STATE UNIVERSITY
Dr. Ira Sohn - MONTCLAIR STATE UNIVERSITY
Dr. Reginal Sheppard - UNIVERSITY OF NEW BRUNSWICK, CANADA
Dr. Carlos Spaht - LOUISIANA STATE UNIVERSITY, SHREVEPORT
Dr. Ken Thorpe - EMORY UNIVERSITY
Dr. Robert Tian - MEDAILLE COLLEGE
Dr. Calin Valsan - BISHOP'S UNIVERSITY, CANADA
Dr. Anne Walsh - LA SALLE UNIVERSITY
Dr. Thomas Verney - SHIPPENSBURG STATE UNIVERSITY
Dr. Christopher Wright - UNIVERSITY OF ADELAIDE, AUSTRALIA

Volume 13(2)   ISSN 2165-7998

Authors have granted copyright consent to allow that copies of their article may be made for personal or internal use. This does not extend to other kinds of copying, such as copying for general distribution, for advertising or promotional purposes, for creating new collective works, or for resale. Any consent for republication, other than noted, must be granted through the publisher:

North American Business Press, Inc. Atlanta - Seattle - South Florida - Toronto

©American Journal of Management 2013

For submission, subscription or copyright information, contact the editor at: [email protected]

Subscription Price: US$ 325/yr

Our journals are indexed by UMI-Proquest-ABI Inform, EBSCOHost, and Google Scholar, and are listed with Cabell's Directory of Periodicals, Ulrich's Listing of Periodicals, Bowker's Publishing Resources, the Library of Congress, and the National Library of Canada. Our journals have been used to support the Academically Qualified (AQ) faculty classification by all recognized business school accrediting bodies.

This Issue

The Impact of Reputational Resources on Event Performance in International Film Festivals ...... 9 Joseph Lampel, Shiva Nadavulakere, Anushri Rawat

Our research addresses the question: How do international film festivals acquire and confer reputational resources? Elsaesser (2005) suggests that festivals function as an “ad-hoc stock exchange of reputations” and as “arbiters and taste-makers”. Drawing upon his work and the resource-based view of strategy, we propose that the most valuable intangible resource of international film festivals is their reputation. Further, using Dierickx & Cool’s (1989) intangible asset stock accumulation model, we propose that the competitive advantage of an international film festival depends on its stocks of reputation, while renewing that advantage depends on flows of reputation. The stocks of reputation are captured by the film festival’s jury profile, and the flows of reputation are represented by the profile of the directors of films included in the competition section of the film festival. Findings suggest that the stock variable (number of feature film credits of a jury member) and the flow variable (number of award nominations of a director) are significantly related to international film festival performance. One of the key contributions of our study is the operationalization of international film festival reputational resources in terms of stocks and flows, and the demonstration of their effect on event performance.

Computer Simulation: What’s the Story? ...... 22 Brent H. Kinghorn

Computer simulation appears often in management theory development. However, computer simulation poses unique difficulties for many researchers and practitioners to understand. The author suggests that qualitative foundations can illuminate this type of quantitative research. Storytelling facilitates the use of one such qualitative program, ethnostatistics. Storytelling elements explicate the patterns developed, found, and reported within the computer simulation methodology. An example of the framework developed is given.

Fourteen More Points: Successful Applications of Deming’s System Theory ...... 36 Thomas F. Kelly

This paper presents 14 additional application strategies for implementing the systems concepts of W. Edwards Deming. These strategies can be generalized to virtually all organizations to enable them to do more, and better, with less. In these economically stressed times many organizations are struggling to survive. A key challenge is to adapt their operations in such a way as to make them both financially sustainable and capable of succeeding in an ever more competitive environment through ongoing self-improvement. This paper presents change strategies that are both significantly more effective and less costly.

Project Scope, Market Size Prospects, and Launch Outcomes in Cooperative New Product Development ...... 41 Kimberly M. Green

This study investigates the relationship between the number of partners in cooperative new product development and the scope of the development project, the projected market size for the product, and the likelihood the product will be launched. With drug development in the pharmaceutical industry as the setting, the hypotheses are tested using hierarchical modeling and a dataset of 7,167 drugs across 86 firms during the period 1995 – 2006. Results suggest that the number of development partners is positively related to the scope of knowledge categories underlying the development effort, while the scope of product applications is associated with market size.

Networking: A Critical Success Factor for Entrepreneurship ...... 55 Moraima De Hoyos-Ruperto, José M. Romaguera, Bo Carlsson, Kalle Lyytinen

This study explores how individual and inter-organizational networking, as mediators, may foster entrepreneurial success. A quantitative study using Partial Least Squares (PLS) was conducted to determine how, and to what extent, systemic and individual factors, mediated by inter-organizational and individual social networking activities, impact the likelihood of entrepreneurial success. To illustrate this, we investigate Puerto Rico’s (P.R.) unexplained stagnant entrepreneurial environment. Our findings reveal that Puerto Rican entrepreneurs are not using their networks efficiently to overcome the inadequate institutional structure. Therefore, a better-interconnected entrepreneurial ecosystem must be designed, while entrepreneurs must use their networks more effectively.

The Need to Practice What We Teach: Succession Management in Higher Education ...... 73 Jamye Long, Cooper Johnson, Sam Faught, Jonathan Street

“Practice what you preach” is a phrase often used to emphasize the importance of maintaining one’s integrity by performing as one advises others to perform. In the case of succession management, this phrase can be used to emphasize the differences between educators and practitioners. Educators instill in students the understanding that a succession plan is a business necessity. However, within the confines of higher education, succession management plans are rare. This brings into question whether institutions are aware of the ethical implications of teaching a concept they themselves are unwilling to implement.

Meeting the Challenge of Assurance of Learning: Perspectives from Four Business Schools ...... 79 Jane Whitney Gibson, Regina A. Greenwood, Bahaudin G. Mujtaba, Shelley R. Robbins, Julia A. Teahen, Dana Tesone

Six professors from four different universities discuss the strategies their business schools are currently using to capture and utilize assurance of learning data. The schools represent public and private as well as not-for-profit and for-profit institutions, and they uniformly document the rigor and deliberateness with which assessment of learning is now being conducted. General recommendations are extrapolated to help other business schools that might be at an earlier stage of developing their assurance of learning protocols.

GUIDELINES FOR SUBMISSION

American Journal of Management (AJM)

Domain Statement

The American Journal of Management (AJM) is a peer-reviewed multidisciplinary journal dedicated to publishing scholarly empirical and theoretical research articles focused on improving organizational management theory, practice and behavior. AJM encourages research that impacts the management field as a whole and introduces new ideas or new perspectives on existing research. Accepted manuscripts will focus on bridging the gap between academic theory and practice as it applies to improving the broad spectrum of the management discipline. Manuscripts that are suitable for publication in AJM cover domains such as business strategy and policy, entrepreneurship, human resource management, operations management, organizational behavior, organizational theory, and research methods.

Submission Format

Articles should be submitted following the American Psychological Association format. Articles should not be more than 30 double-spaced, typed pages in length, including all figures, graphs, references, and appendices. Submit two hard copies of the manuscript along with a disk containing the manuscript in MS Word format.

Make main sections and subsections easily identifiable by inserting appropriate headings and sub-headings. Type all first-level headings flush with the left margin, bold and capitalized. Second-level headings are also typed flush with the left margin but should only be bold. Third-level headings, if any, should also be flush with the left margin and italicized.

Include a title page with the manuscript that includes the full names, affiliations, addresses, phone and fax numbers, and e-mail addresses of all authors and identifies one person as the Primary Contact. Put the submission date at the bottom of the title page. On a separate sheet, include the title and an abstract of 200 words or less. Do not include authors’ names on this sheet. A final page, “About the authors,” should include a brief biographical sketch of 100 words or less on each author. Include current place of employment and degrees held.

References must be written in APA style. It is the responsibility of the author(s) to ensure that the paper is thoroughly and accurately reviewed for spelling, grammar and referencing.

Review Procedure

Authors will receive an acknowledgement by e-mail including a reference number shortly after receipt of the manuscript. All manuscripts within the general domain of the journal will be sent for at least two reviews, using a double blind format, from members of our Editorial Board or their designated reviewers. In the majority of cases, authors will be notified within 60 days of the result of the review. If reviewers recommend changes, authors will receive a copy of the reviews and a timetable for submitting revisions. Papers and disks will not be returned to authors.

Accepted Manuscripts

When a manuscript is accepted for publication, author(s) must provide format-ready copy of the manuscripts including all graphs, charts, and tables. Specific formatting instructions will be provided to accepted authors along with copyright information. Each author will receive two copies of the issue in which his or her article is published without charge. All articles printed by AJM are copyrighted by the Journal. Permission requests for reprints should be addressed to the Editor. Questions and submissions should be addressed to:

North American Business Press 301 Clematis Street, #3000 West Palm Beach, FL USA 33401 [email protected] 866-624-2458

The Impact of Reputational Resources on Event Performance in International Film Festivals

Joseph Lampel City University London

Shiva Nadavulakere Saginaw Valley State University

Anushri Rawat Nicholls State University

Our research addresses the question: How do international film festivals acquire and confer reputational resources? Elsaesser (2005) suggests that festivals function as an “ad-hoc stock exchange of reputations” and as “arbiters and taste-makers”. Drawing upon his work and the resource-based view of strategy, we propose that the most valuable intangible resource of international film festivals is their reputation. Further, using Dierickx & Cool’s (1989) intangible asset stock accumulation model, we propose that the competitive advantage of an international film festival depends on its stocks of reputation, while renewing that advantage depends on flows of reputation. The stocks of reputation are captured by the film festival’s jury profile, and the flows of reputation are represented by the profile of the directors of films included in the competition section of the film festival. Findings suggest that the stock variable (number of feature film credits of a jury member) and the flow variable (number of award nominations of a director) are significantly related to international film festival performance. One of the key contributions of our study is the operationalization of international film festival reputational resources in terms of stocks and flows, and the demonstration of their effect on event performance.

INTRODUCTION

The resource-based view of strategy argues that the competitive advantage of a firm rests primarily with idiosyncratic organizational resources and capabilities (Barney, 1991; Penrose, 1959). Intangible resources, in particular, provide sustainable competitive advantage because they are firm specific and are “accumulated” in the form of “stocks and flows” over time (Dierickx & Cool, 1989). Reputation can be a key intangible resource, and several studies have demonstrated a link between reputation and superior financial and social performance (Podolny, 2005; Rindova, Williamson, Petkova & Sever, 2005). However, while some studies have discussed reputation as a source of competitive advantage in cultural industries (Anand & Watson, 2004; Lampel, Shamsie & Lant, 2006), we know relatively little about the relationship between how reputation is accumulated and renewed, on the one hand, and organizational performance, on the other. This lack of attention is surprising in light of the importance of reputation as a key competitive factor in cultural industries, not to mention the central role that developing reputation
increasingly plays in industries where product quality is hard to ascertain both before and after consumption (Lampel, Lant & Shamsie, 2000). In developing our research, we draw on the resource-based view (Barney, 1991; Dierickx & Cool, 1989) to argue that reputation is a resource that can acquire the idiosyncratic properties that underpin competitive advantage. We apply Dierickx & Cool’s (1989) model to reputation accumulation within cultural industries. Specifically, we propose that stocks of reputation are accumulated reputational assets, and that flows of reputation occur from both internal and external sources to be absorbed and further developed into stocks of reputation. Our research tests the relationship between stocks and flows of organizational reputation and organizational performance in the international film festival organizational field.

The international film festival field provides an appropriate context to examine this relationship for two reasons. First, the most valuable intangible resources of international film festivals are twofold: the capabilities involved in accessing, programming, and showcasing the best and latest international films; and an accumulated reputation for possessing those capabilities. In other words, the competitive advantage of international film festivals depends primarily on their stocks of reputation, while renewing this advantage depends on their access to flows of reputation. Second, international film festivals are temporary organizations (Bechky, 2006; Orlikowski and Yates, 2002). This feature presents an advantage when it comes to testing the relationship between stocks and flows of resources and performance. Whereas most organizations are structures with a multitude of resource flows that are continuously accumulating, film festivals as temporary organizations have relatively few resource flows, and all of them occur at a single point in time when the event is organized. This aspect of film festivals, we argue, provides a parsimonious empirical context in which to delineate levels of stocks and flows of intangible resources such as reputation.

The paper is organized as follows: First, we provide an overview of Dierickx & Cool’s (1989) intangible asset stock accumulation model. Second, we articulate the measure of stocks of international film festival reputation: jury prestige. Third, we suggest that reputation flows may be captured by the prestige of the film-makers participating in an international film festival. Fourth, we propose a performance measure for an international film festival: the number of countries in which a festival film gets released. Fifth, we present the research design and data analysis, and discuss the results. Finally, we conclude by offering some directions for future research, particularly the prospects for convergence between the resource-based view and institutional analysis of organizations.

THE ASSET STOCK ACCUMULATION MODEL OF COMPETITIVE ADVANTAGE

Dierickx & Cool’s (1989) intangible asset stock accumulation model posits that nontradeable asset stocks, rather than tradeable ones, confer sustainable competitive advantage. This is because tradeable assets are “freely tradeable” and rivals can therefore replicate any asset configuration by buying and selling them at ongoing market prices. Successful implementation of a strategy depends not just on these undifferentiated tradeable assets, but on assets that are nonappropriable, highly firm specific, and nontradeable. Examples of nontradeable asset stocks include corporate reputation, academic institute reputation, reputation for quality, dealer loyalty, and R&D capability. As there are no factor markets for nontradeable asset stocks, firms have to “build” or internally “accumulate them by choosing appropriate time paths of flows over a period of time”. In essence, the model proposes that intangible assets are inherently inimitable because rivals have to replicate the entire accumulation path to achieve the same asset stock position.

Dierickx & Cool’s (1989) model consists of two parts. The first part describes the process of asset stock accumulation, and the second part identifies five features that confer sustainability of privileged asset stock positions. The authors illustrate the process of asset stock accumulation through the “bath-tub” metaphor. At any given point in time, the stock of water is indicated by the level of water in the bath-tub, which is the cumulative result of flows of water into the tub (through the tap) and out of it (through the leak). Applying this logic to the example of R&D capability, the amount of water in the bath-tub is the stock of know-how at a particular point in time, whereas current R&D spending is the water flowing in
through the tap, and the know-how that depreciates over time is the flow of water leaking through the hole in the tub. A crucial point illustrated by the model is that while flows can be adjusted instantaneously, stocks cannot. With regard to the sustainability of accumulated asset stock positions, the model argues that it depends on the extent to which asset accumulation processes exhibit the following properties: time compression diseconomies, asset mass efficiencies, interconnectedness, asset erosion, and causal ambiguity.

As far as we are aware, only two studies, Decarolis & Deeds (1999) and Knott, Bryce & Posen (2003), have empirically tested Dierickx & Cool’s (1989) model. The former tests just the process of asset stock accumulation, while the latter tests both accumulation and the validity of three of the five properties outlined in Dierickx & Cool’s (1989) model: time compression diseconomies, asset mass efficiencies, and asset erosion. In both cases the empirical context was the U.S. pharmaceutical industry, and the units of analysis were organizations, not events.

Decarolis & Deeds (1999) examine the relationship between organizational knowledge assets in the form of stocks and flows and firm performance. Knowledge flows are captured by variables such as geographical location, alliances, and research and development. Knowledge stocks are captured by variables such as scientific citations, products in development, and patents. Findings show that geographical location, scientific citations, and products in development are significant predictors of firm performance. Knott, Bryce & Posen (2003) investigate three questions: Is Dierickx & Cool’s (1989) model of asset accumulation correct? Are asset stocks more important than asset flows in the firm’s production function? Does the accumulation process deter rival mobility? The study concludes that Dierickx & Cool’s (1989) model is partially correct, as only two of the three properties tested (time compression diseconomies and asset erosion) are significant. Findings show that asset stocks do accumulate, but are in no way more important than asset flows in the firm’s production function. With regard to the third question, the study finds that the accumulation process is not inimitable, and therefore does not deter rival mobility. Notwithstanding the conflicting results, the authors urge further research using other intangible assets, especially reputational assets.

Building on Knott et al.’s (2003) empirical findings, we concentrate our research on the first part of Dierickx & Cool’s (1989) model, the asset stock accumulation process, and present an initial framework of the process of reputation accumulation. We propose that stocks of reputation are accumulated reputation assets within the firm, and that flows of reputation occur from both internal and external sources to be absorbed and further developed into stocks of reputation. Further, our research tests the relationship between stocks and flows of firm reputation and performance in the international film festival organizational field. Next, we conceptualize the underlying reputation of international film festivals in terms of Dierickx & Cool’s (1989) stocks and flows of reputation and propose hypotheses.
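To make the bath-tub metaphor above concrete, the accumulation logic can be written in discrete time as follows (the notation is our own illustrative shorthand, not a formula taken from Dierickx & Cool, 1989):

\[
S_t = S_{t-1} + f_t - d_t, \qquad \text{so that} \qquad S_T = S_0 + \sum_{t=1}^{T} \left( f_t - d_t \right),
\]

where \(S_t\) denotes the asset stock (the water level) at time \(t\), \(f_t\) the inflow (e.g., current R&D spending), and \(d_t\) the outflow or erosion (know-how that depreciates). The point stressed in the text follows immediately: a firm can change the flow \(f_t\) in the current period, but the stock \(S_t\) can only be changed by sustaining flows over time.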

STOCKS AND FLOWS OF INTERNATIONAL FILM FESTIVAL REPUTATION

In Hirsch’s (1972) terms, international film festivals constitute a system of events that mediate the flow of films between producers and consumers. Further, Elsaesser (2005) proposes that one of their key functions is to “categorize, classify, sort and sift, celebrate, and reward the world’s annual film production”. Drawing upon Holbrook’s (1999) work on expert judgments of films, we argue that international film festivals possess the esoteric expertise to offer judgments about a variety of films, such as features, shorts, and avant garde works. Because films derive their value from subjective experiences that rely heavily on using symbols to manipulate perception and emotion, film professionals and movie-goers have difficulty identifying and establishing clear standards of quality. Instead, they resort to using “social proofs” of distinction in the form of reputation and status (Rao, Greve, & Davis, 2001). Creating and accumulating reputation offers an international film festival the following benefits: the ability to attract the best films of the year; the chance to premiere a film; the ability to attract top-notch film makers to showcase their talent; attention from leading media outlets; the ability to broker deals between producers, distributors, and exhibitors; the ability to attract an increasing number of visitors or audiences; and the ability to garner substantial commercial sponsorships. Growth in reputation, and its
accompanying benefits, in fact constitutes a virtuous cycle. As an international film festival gains in reputation, it attracts the best, newest, and yet-to-premiere films and, as a consequence, attracts yet more prominent films and reputed film makers. This virtuous cycle, according to Podolny (1996), corresponds to Merton’s Matthew Effect, which states that high status actors are more likely to receive greater rewards for a given quality of effort. Our research views this virtuous cycle as an accumulation process and focuses on the relationship between an international film festival’s reputation, in the form of stocks and flows, and its performance. Towards that end, we suggest variables that capture stocks and flows of international film festival reputation.

The international film festival field consists of three groups of stakeholders: the general public, professionals, and public partners (Telefilm Canada, 2004). Of these, the professionals who are associated with an international film festival’s flagship ‘in-competition’ section are the most important. They include the programmers who nominate the films, the jury that adjudicates the winning films, and the film makers whose films have been nominated. Though the programmers play a key role in configuring the ‘in-competition’ section by selecting around 20 films from thousands of submissions, they remain largely anonymous or obscure to the public. The other two groups of professionals, the jury and the film makers whose films have been nominated, become the focus of attention by the media and festival-goers alike, and therefore function as the public face of a film festival. We propose that stocks of reputation can be captured by the film festival’s jury profile, and that flows of reputation are represented by the profile of the directors of films included in the competition section of the film festival.

Elsaesser (2005) argues that international film festivals function as competitive venues for artistic excellence in cinema, very much as the Olympic Games do in sport. Competitive international film festivals usually give out awards for films in categories such as best film, best actress, best actor, best director, best screenplay, and best short film. The award for the best film is the most important, and is usually christened the Golden Palm (Cannes), the Golden Lion (Venice), the Golden Bear (Berlin), and so on. The next most important awards are the silver and bronze awards, usually given out for directing, acting, and best screenplay. These awards are adjudicated by a specially appointed international jury comprising high-profile artists, directors, actors, writers, and intellectuals. With regard to the film professionals on the jury, most film festivals appoint film makers who have previously featured their films at the festival, in other words, alumni. For instance, Quentin Tarantino’s film ‘Pulp Fiction’ won the Golden Palm at Cannes in 1994, and in 2004 he was the head of the jury. However, it is also very common to see film makers on the juries of more than one festival in the same year, for instance, at Berlin in February and at Cannes in May. They are therefore very mobile, in the sense of not being tied to a particular festival. As there are not many people who are eligible to act as film jurists, film festivals compete to invite high-profile and prominent film makers onto their juries. The announcement of the list of jury members, together with the chairperson, immediately follows the unveiling of the competing films.
In so doing, an international film festival seeks to focus attention not only on the films that are vying for top honors, but also on the reputation of the jury members who will adjudicate the winners. In other words, a film festival’s jury reputation becomes a strategic resource that might have performance implications. This gives us the following hypothesis:

H1: An international film festival’s jury reputation will have a positive impact on event performance.

Elsaesser (2005) argues that international film festivals “compete for and are dependent on a regular annual supply of interesting, innovative or otherwise noteworthy films”. In particular, they are competing for two types of resources: first, a “regular roster of star directors”, and second, the opportunity to “discover” new auteurs and a “new wave” or ‘nouvelle vague’ of cinema. International film festivals can elevate directors to internationally recognized auteur status. For instance, the 1960s saw Cannes anointing Satyajit Ray, Ingmar Bergman, Luchino Visconti, Francois Truffaut, and Jean-Luc Godard; the 1970s, the American directors Robert Altman, Martin Scorsese, and Francis Coppola; and the 1980s, the Chinese directors Zhang Yimou and Chen Kaige. Likewise, the premier American festival Sundance discovered and elevated the status of directors such as Quentin Tarantino and Steven Soderbergh. Film festivals are also ideal venues for conferring recognition on new film making styles, or what is often referred to as “new waves”. Cannes, for instance, has played host to new cinema waves such as Italian neorealism, the French Nouvelle Vague, and the “new” Iranian cinema. Such discoveries are more the product of media reporting than part of the official mandate of the festival, but they nevertheless form part of the mystique of Cannes and are widely emulated by other festivals such as Sundance. Struggling to formulate a more precise definition of what constitutes a wave, Nichols (1994) proposes that one new auteur is a “discovery”, two new auteurs are a “new wave”, and three new auteurs from the same country constitute a “new national cinema”. By anointing auteurs and initiating new waves of cinema, festivals seek to appropriate the accompanying credit and reputation. In Elsaesser’s (2005) words, “a festival is an apparatus that breathes oxygen into an individual film and the reputation of its director as potential auteur, but at the same time it breathes oxygen into the system of festivals as a whole”. Further, he states that “with every prize it confers, a festival also confirms its own importance, which in turn increases the symbolic value”. A healthy flow of these two resource streams, we propose, not only confirms a festival’s importance and purpose, but also helps differentiate it, thereby offering it a competitive advantage over the rest. In other words, a film festival’s nominated directors’ reputations become a strategic resource that might have performance implications. This gives us the following hypothesis:

H2: The reputation of film directors whose films appear in an international film festival will have a positive impact on event performance.

DATA AND METHOD

The sample used in this study was generated from a list of 49 international film festivals accredited by the International Federation of Film Producers Associations (FIAPF). Though there exist somewhere between 600 and 3,000 film festivals worldwide (Turan, 2002), the most important among these are the ones accredited by FIAPF. The FIAPF has member organizations from 24 leading film producing countries, including China, Japan, the USA, and India. The FIAPF website describes its role “as a regulator of international film festivals”, and Elsaesser (2005) seems to concur when he argues that FIAPF accreditation is widely accepted as the gold standard for international film festivals. FIAPF accredits festivals in four categories: competitive, competitive specialized, non-competitive, and documentary and short film. The 12 festivals in the competitive category are considered the “A” list festivals and include all the best European ones, such as Cannes, Venice, and Berlin. The second category, competitive specialized or “B” list festivals, consists of 26 festivals. These showcase films that focus on a particular regional cinema, such as Mediterranean cinema, or on a particular topic, such as children’s films or films by debutant directors. Our sample includes only festivals that showcase full-length feature films, and excludes the non-competitive film festivals, as they source their films or resources from the competitive ones. Thus, the initial sample consisted of 38 film festivals. The data collected pertained to the year 2004, as it offered the best opportunity to fully capture the dependent variable: a film’s release dates after its festival debut. Missing data, however, forced us to drop 13 film festivals, and therefore our final sample consists of 25 of the world’s leading film festivals: Cannes; Berlin; Venice; Locarno; Karlovy Vary; San Sebastian; Montreal; Moscow; Tokyo; Cairo; Shanghai; Brussels; Istanbul; Goeast; Sarajevo; Namur; Warsaw; Sitges; Thessaloniki; Molodist; the American Film Institute Festival; Flanders; Sao Paulo; Gijon; and the International Film Festival of Kerala. The data were collected from both the film festivals’ websites and imdb.com.

Dependent Variable
Measuring the performance of international film festivals is very difficult, as they possess attributes that are not just economic in nature but also artistic, cultural, and political. Certain tractable dimensions do exist that can be used as performance indicators; these include the number of films presented, box-office earnings of the films presented, the number of media attendees, the number of sales companies and buyers, the number of admissions, and so on. However, we argue that the performance measure should truly reflect the stated objectives of international film festivals. Almost all the leading film festivals state that one of their primary objectives is to promote cinema as a global art form. Similarly, Elsaesser (2005) argues that international film festivals function as cartographers of the “world’s cinema production and the different film cultures”. Further, one of the primary motives of film makers presenting their films at various festivals is not financial gain, but to acquire international “prestige, honour, fame, or recognition” (Ramey, 2002). Therefore, we propose a new performance measure for international film festivals, which is also our dependent variable: the number of countries in which a film is exhibited after its festival debut. The dependent variable was measured by counting the number of country releases a film has, excluding repeat releases in the same country but including non-commercial releases such as special exhibition venues or screenings at other international film festivals. Each film festival’s number of country releases was then obtained by averaging the counts for its individual films. For instance, Cannes had 8 in-competition films with country release counts of 55, 23, 27, 31, 20, 44, 34, and 19, giving an average country release count of 253/8 = 31.625.
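The averaging step is simple enough to show directly. The short sketch below is illustrative only: the Cannes figures are the ones quoted above, while the data structure and names are hypothetical rather than the authors' actual dataset.

```python
# Average country-release count per festival (illustrative sketch).
# Each festival maps to a list of country-release counts, one per
# in-competition film; the Cannes figures come from the worked example in the text.

festival_releases = {
    "Cannes": [55, 23, 27, 31, 20, 44, 34, 19],
    # ... the remaining 24 festivals would be filled in from the dataset
}

def average_country_releases(counts):
    """Mean number of country releases across a festival's in-competition films."""
    return sum(counts) / len(counts)

print(average_country_releases(festival_releases["Cannes"]))  # 253 / 8 = 31.625
```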

Independent Variables
We operationalize the two independent variables in our study as follows: stocks of festival reputation through the reputation of jury members, and flows of festival reputation through film director reputation. This operationalization is consistent with recent research within the resource-based view that has used individual reputation as an indicator of a firm’s intellectual capital. For instance, Rindova, Williamson, Petkova & Sever (2005) propose the following variables as antecedents of business school reputation: student GMAT scores, faculty experience in years, faculty publications, and faculty PhD degrees. Rothaermel & Hess (2007) argue that innovation in biotechnology companies is a function of “star scientists”, whose reputation is measured in terms of their “star publications” and “citation stars”. Wade, Porac, Pollock & Graffin (2006) propose that a CEO’s celebrity status is a valuable intangible asset for a firm; they measure a CEO’s reputation through the awards won at Financial World’s annual CEO of the Year competition. Similarly, within film industry research, the worth of a film production is assessed through the reputation of the various individuals associated with it, such as the director, producer, actors, and screenwriter. Simonton (2004) uses 7 types of film awards in 16 different categories to assess individual and group artistic creativity in film productions. Peretti & Negro (2006) measure the status of film directors and actors by the number of Oscar awards or New York Film Critics Circle Awards they have won in the past.

Film professionals who have accumulated such reputation are invited by international film festivals to be part of their juries. Baumann (2001) suggests that competitive film festivals bestow artistic merit on films because their competitions are juried by individuals who have claim to expert status within the field. Therefore, we suggest that the reputation of the film professionals on a festival’s jury is an appropriate measure for the festival’s stock of reputation variable. Towards that end, we measure it in three ways: the number of feature film credits a jury member has; the number of years of experience since his or her debut; and the number of award nominations he or she has won. The variables were calculated as follows: number of film credits, a count of feature film credits; number of years of experience, a count of the number of years from his or her debut film until 2004; and number of award nominations, a count of nominations drawn from a specially constructed index of the world’s important awards. The index consists of the 78 most important awards from 40 leading film producing countries (see Appendix B). The list includes all 23 member countries of the International Federation of Film Producers Associations (FIAPF), to which we added another 17 countries that also had significant film output. Each film festival’s directors’ years variable was obtained by averaging the counts for its individual directors. For instance, Cannes had 8 in-competition directors with 15, 27, 21, 13, 29, 20, 12, and 4 years of experience, giving an average of 141/8 = 17.625 years. The number of director credits and the number of director award nominations for each film festival were calculated in the same way.

Though a film is the collaborative effort of many creative individuals, the director’s role is paramount. Auteur theory states that a film’s “authorship” lies with its director, as his or her personal artistic vision is responsible for crafting it (Caughie, 1981; Becker, 1982). Simonton (2004), supporting this theory, notes that “73% of all pictures that received the Best Picture Oscar have also claimed the Oscar for Best Director”. Further, Elsaesser (2005) proposes that international film festivals such as Cannes have fostered auteurism by retaining the director as the “king pin” not only of a film production but of the entire festival system itself. Evidence of this is that almost all film festivals list the film director’s name alongside the title of the film. Therefore, we suggest that the reputation of the director of a film included in the festival is an appropriate measure for the festival’s flow of reputation variable. Towards that end, we measure it in three ways: the number of feature film credits he or she has; the number of years of experience since his or her debut; and the number of award nominations he or she has won. The variables were calculated in the same way as those for the other independent variable, jury member reputation.
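For concreteness, the three person-level measures can be sketched as follows. The record layout and names are hypothetical, not the authors' code, and the award index stands in for the hand-built list of 78 awards from 40 countries described above.

```python
# Illustrative computation of the three reputation measures used for both
# jury members (stocks) and in-competition directors (flows).

AWARD_INDEX = {"Palme d'Or", "Golden Bear", "Golden Lion"}  # truncated stand-in for the 78-award index

def reputation_measures(person, reference_year=2004):
    """Return (feature credits, years since debut, award nominations in the index)."""
    credits = len(person["feature_films"])
    years_experience = reference_year - person["debut_year"]
    nominations = sum(1 for award in person["nominations"] if award in AWARD_INDEX)
    return credits, years_experience, nominations

# Festival-level stock (jury) and flow (director) variables are then the
# means of these measures across the relevant individuals, as in the text.
```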

Control Variable
Previous research on reputation suggests that age may be positively related to reputation (Deephouse & Carter, 2005). Older international film festivals have an established record of achievements and deep ties and relationships with all the stakeholders within the film festival field. They possess superior stocks of jury reputation and command stellar flows of film directors’ reputation, and therefore their films are exhibited in a greater number of countries. The age of an international film festival is calculated by subtracting its debut year from 2004.

ANALYSIS AND RESULTS

The data were analyzed using regression analysis. Descriptive statistics are presented in Table 1, and the correlation matrix is presented in Table 2.

TABLE 1
DESCRIPTIVE STATISTICS

Variable                          Mean        S.D.
Age                               32.44       18.518
Director Years                    10.78        6.639
Director Credits                   7.13        4.750
Director Award Nominations         3.64        4.231
Jury Member Years                 19.62851     7.136640
Jury Member Credits               17.05460    10.358716
Jury Member Award Nominations      5.24811     4.236891


TABLE 2 CORRELATION MATRIX

                                     1        2        3        4        5        6        7
1 Age                              1.00
2 Director Years                   0.11     1.00
3 Director Credits                 0.305    0.723*   1.00
4 Director Award Nominations       0.560*   0.176    0.254    1.00
5 Jury Member Years                0.066    0.317    0.125    0.113    1.00
6 Jury Member Credits              0.271    0.074    0.142   -0.053    0.529*   1.00
7 Jury Member Award Nominations    0.076   -0.227   -0.271    0.105    0.319    0.252    1.00
8 Number of Country Releases       0.682*  -0.20    -0.031    0.700*   0.045    0.276    0.316

We run three regression models to test the effects of jury reputation and director reputation on the number of countries in which a festival film is released. The results are presented in Table 3.

TABLE 3 REGRESSION RESULTS – BETA COEFFICIENTS

                                 Model 1      Model 2      Model 3
Age                              0.454 **     0.641 ***    0.317 *
Director Years                  -0.258                    -0.176
Director Credits                -0.115                    -0.146
Director Award Nominations       0.520 **                  0.614 ***
Jury Member Years                            -0.146       -0.181
Jury Member Credits                           0.107 †      0.321 *
Jury Member Award Nominations                 0.286        0.125

Adjusted R2                      0.675        0.461        0.726
F-statistic                     13.479        6.139       10.104
Significance of F                0.000        0.002        0.000

N = 25 for all models
†p < .1   *p < .05   **p < .01   ***p < .001

In Model 1, we introduce the control variable, the age of the film festival, and all the flows of reputation variables: director years, director credits, and director award nominations. The age of the festival and director award nominations are significant predictors of the number of country releases. In Model 2, we introduce the stocks of reputation variables together with the control variable. The age of the festival continues to be a significant predictor, and one stock variable, jury member credits, has a weak effect on the number of country releases. In the final model, we introduce all the variables: age, and the stocks and flows of reputation. The results show that one stocks of reputation variable, jury member credits, and one flows of reputation variable, director award nominations, along with age, are significant predictors of country releases. Therefore, we find support for both hypotheses, but only with respect to some measures of reputation, specifically the number of jury member film credits and the total number of a film director’s previous award nominations.
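For readers who want to see how such models are specified, the sketch below shows one way the three regressions could be estimated. This is our illustrative reconstruction with hypothetical column names, using the statsmodels library; it is not the authors' actual code or software.

```python
# Three nested OLS models predicting average country releases per festival
# (N = 25), mirroring the structure reported in Table 3.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("festival_data.csv")  # hypothetical file, one row per festival

flows = "director_years + director_credits + director_award_nominations"
stocks = "jury_years + jury_credits + jury_award_nominations"

m1 = smf.ols(f"country_releases ~ age + {flows}", data=df).fit()             # control + flows
m2 = smf.ols(f"country_releases ~ age + {stocks}", data=df).fit()            # control + stocks
m3 = smf.ols(f"country_releases ~ age + {flows} + {stocks}", data=df).fit()  # full model

for name, model in (("Model 1", m1), ("Model 2", m2), ("Model 3", m3)):
    print(name, "adj. R2 =", round(model.rsquared_adj, 3), "F =", round(model.fvalue, 3))
```

Standardized (beta) coefficients, as reported in Table 3, would additionally require z-scoring the variables before fitting.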

DISCUSSION

Our research examines whether the levels of stocks of reputation and flows of reputation of international film festivals affect their performance. It conceptualizes the flows of reputation of an international film festival in terms of the nominated film directors’ reputations, and the stocks of reputation in terms of its jury members’ reputations. The underlying rationale for classifying director reputation as flows and jury reputation as stocks, and not vice versa, is as follows. First, stocks of reputation are accumulated flows of reputation. We do, however, see instances where stocks are acquired without resorting to accumulated flows of reputation, such as the birth of a new scholarly journal. The reputation of a new scholarly journal is signaled more by the reputation of the scholars on its editorial board than by the reputations of the authors publishing in its initial issues. Similarly, the reputation of a nascent international film festival is signaled more by the reputations of the jury members adjudicating the competition. The distinction between what constitutes stocks as opposed to flows is therefore blurred in the case of nascent institutions; otherwise, in the long run, it is very clear that stocks are accumulated flows. Second, in the case of jury selection, international film festivals invite only those film makers who were their discoveries or who have previously been featured in their competition sections. In fact, Elsaesser (2005) suggests that by grooming newly discovered auteurs for potential jury positions, international film festivals seek fresh directions. Moreover, FIAPF prohibits a new film festival that is under consideration for accreditation from holding juried competitions. Though this rule might be in place to safeguard the interests of established festivals, it clearly points to two things: first, that jury resources are strategic in nature, and second, that they can only be exploited through a process of accumulation. Therefore, we conceptualize stocks of reputation as accumulated reputational assets at a point in time, which are continuously augmented and replenished by flows of reputational assets. In Dierickx & Cool’s (1989) terms, film director reputational assets can be adjusted, but jury member reputational assets cannot.

Drawing upon previous studies, we operationalize film director reputation through three variables: the number of feature film credits he or she has; the number of years of experience since his or her debut; and the number of awards he or she has won. Likewise, we operationalize jury member reputation through the same three variables. Results show partial support for both hypotheses: in each hypothesis, one important variable is found to be positively associated with film festival performance. In the first hypothesis, about stocks of reputation, the number of credits a jury member has significantly predicts film festival performance. There is no support for the other two variables: a jury member’s experience in number of years since his or her debut, and the awards won by the jury member. This shows that nominated films at international film festivals with jury members who are experienced in terms of film credits are more likely to be released in a greater number of countries.
This seems plausible, and can be explained by the way international film festivals introduce their jury members, usually through a short biography in their press materials or on their websites. For instance, Quentin Tarantino was Cannes’s president of the jury in 2004, and his 220-word biography reads as follows:

“Quentin Tarantino was born in 1963 in Knoxville, Tenessee. He spent his youth in a suburb of Los Angeles and becomes interested in film at an early age. His passion leads him, at the age of 22, to work in a video store where he spends his days with his friend Roger Avary, with whom he wrote Pulp Fiction several years later. It's during this time that he decides to edit his first scripts. Owing to the sale of his scripts True Romance and Natural Born Killers he directs his first film Reservoir Dogs in 1992. The film is widely distributed and becomes one of the best cop thrillers of the 90s. His second film, Pulp Fiction wins the Palme d'Or at the 1995 Festival de Cannes. In 1997 he shoots Jackie Brown, one of the best films of the decade, a tribute film to American cinema of the 70s. With Jackie Brown, Quentin Tarantino crosses over into the realm of great filmmakers.


Following an absence of five years, Quentin Tarantino is back on the studio lot in 2002 with Kill Bill. Originally produced as a single film, it is finally released in two parts: Kill Bill Volume1 and Kill Bill Volume 2. He is planning to start work on the third and final opus of his Kill Bill saga”. http://www.festival-cannes.fr/index.php/en/archives/artist/866

Though Tarantino had been nominated for 31 of the world’s leading awards, the biography cites only Cannes’s Palme d’Or, and it mentions only 8 of the 14 films he had directed up to 2004. Though it is not clear whether the festival or Tarantino himself authored the biography, it is clear that international film festivals prefer to project a jury member’s past without indicating their awards or experience in full. This is also true in the case of Steven Soderbergh, the acclaimed American director who was on the Cannes jury in 2003. His biography mentions 10 of his films, just two Oscars, and one Palme d’Or, despite his three nominations at the Berlin film festival and one nomination at the Sundance film festival.

On the other hand, for the second hypothesis, about flows of reputation, the number of award nominations won by directors significantly predicts film festival performance. There is no support for the other two variables: a director’s experience in number of years since his or her debut, and the number of film credits to his or her name. This shows that nominated films at international film festivals with highly acclaimed directors, in terms of awards, are more likely to be released in a greater number of countries. This finding is consistent with institutional analysis of cultural fields, which argues that awards, honors, and prizes are especially important in cultural production as they represent forms of legitimacy (Bourdieu, 1984). Moreover, Mezias and Mezias (2000) point to “some measures of innovativeness that might be appropriate in the context of modern feature film industry, such as garnering awards, critical acclaim, or a massive box-office opening”. Elsaesser (2005) argues that leading international film festivals such as Cannes profess a strong commitment to artistic excellence, usually displayed through awards and prizes. He further states that “with every prize it confers, a festival also confirms its own importance, which in turn increases the symbolic value of the prize”. Therefore, our findings suggest that international film festivals see award-nominated directors as superior flows of resources.

CONCLUSION

This paper examines the relationship between stocks of reputation and flows of reputation and event performance within the international film festival field. As we previously stated, at present only two studies appear to have empirically tested Dierickx & Cool’s (1989) model: Decarolis & Deeds (1999) and Knott, Bryce & Posen (2003). Though our study tests only the asset stock accumulation process, the first part of their model, the findings have wider implications for the model. First, both previous studies focused on scientific assets in biotechnology industries; our study is the first to consider reputational assets. Second, our results show that both flows of reputation and stocks of reputation are important, but they do not indicate their sustainability over the long run. Future studies should explore the effect of reputation erosion or leakage with a view to understanding the sustainability of accumulated reputational asset stocks more generally.

We believe that our study also makes a contribution to the emerging dialogue between strategy research and institutional analysis of organizations. This paper localizes the dialogue in the area of cultural industries and cultural fields, an area where many of the traditional attributes of strategy must be extended to reflect the unique properties of cultural products. Here institutional analysis of organizations is particularly useful. Institutional analysis of cultural fields examines the production and distribution of institutionalized cultural forms such as art works, cuisine, religious practices, and juridical ties. These forms are enacted by a web of interactions between people with occupational identities, formal organizations, and markets. Studies within this perspective have examined the role of reputation in the production and distribution of institutionalized cultural forms. Anand and Peterson (2000), for instance, propose that Billboard charts function like reputation indices and over time have morphed into a summary measure of success or failure
in the records business. Rao, Monin & Durand’s (2003) research on French gastronomy shows that the socio-political legitimacy of the nouvelle cuisine chefs was mainly responsible for the growth of nouvelle cuisine as a high-status rival to classical cuisine. The study identifies nouvelle cuisine chefs’ reputation, in the form of the Michelin Guide’s star ratings, as one of the key sources of legitimacy. Watson and Anand (2006) argue that the Grammy awards shape the canon formation process in the U.S. popular music field by constructing and purveying prestige that embodies the “hallmark of peer recognition”. However, as the above review makes clear, the extant literature on institutional analysis of organizations has focused more on identifying the benefits of reputation acquisition, and less on explicating the process through which reputations are acquired and developed in the first place. Rao (1994) suggests that “there has been little contact between resource based researchers and neo-institutionalists” (Meyer & Rowan, 1977; DiMaggio, 1982). In this spirit, our present study suggests that Dierickx & Cool’s (1989) model is a good departure point for integrating institutional analysis with the resource-based view, thus addressing Rao’s (1994) concern that the resource-based perspective has overlooked the institutional process of legitimation, which can often play an important role in creating and sustaining competitive advantage.

REFERENCES

Anand, N., & Peterson, R. A. (2000). When market information constitutes fields: Sensemaking of markets in the commercial music field. Organization Science, 11(3): 270-284.

Anand, N., & Watson, M. R. (2004). Tournament rituals in the evolution of fields: The case of the Grammy Awards. Academy of Management Journal, 47: 59-80.

Barney, J. B. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1): 99–120.

Baumann, S. (2001). Intellectualization and art world development: Film in the United States. American Sociological Review, 66: 404-426.

Becker, H. S. (1982). Art worlds. Berkeley: University of California Press.

Bechky, B. (2006). Gaffers, gofers, and grips: Role-based coordination in temporary organizations. Organization Science, 17(1): 3-21.

Bourdieu, P. (1984). Distinction: A social critique of the judgment of taste. London: Routledge.

Caughie, J. (1981). Theories of authorship: A reader. London: Routledge & Kegan Paul.

De Carolis, D., & Deeds, D. (1999). The impact of stocks and flows of organizational knowledge on firm performance: An empirical evaluation of the biotechnology industry. Strategic Management Journal, 20: 953-968.

Deephouse, D. L., & Carter, S. M. (2005). An examination of differences between organizational legitimacy and organizational reputation. Journal of Management Studies, 42 (2): 329-360.

DiMaggio, P. J. (1982). Cultural entrepreneurship in nineteenth-century Boston. Media, Culture and Society, 4(4): 303–321.

Dierickx, I., & Cool, K. (1989). Asset stock accumulation and sustainability of competitive advantage. Management Science, 35(12): 1504-1511.


DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48: 147-160.

DiMaggio, P. J. (1991). Social structure, institutions, and cultural goods: The case of the United States. In P. Bourdieu & J. S. Coleman (Eds.), Social theory for a changing society: 133-155. Boulder: Westview Press.

Elsaesser, T. (2005). Film festival networks: The new topographies of cinema in Europe. In T. Elsaesser, European cinema: Face to face with Hollywood: 82-107. Amsterdam: University Press.

Hirsch, P. M. (1972). Processing fads and fashions: An organization-set analysis of cultural industry systems. American Journal of Sociology, 77: 639-659.

Holbrook, M. B. (1999). Popular appeal versus expert judgments of motion pictures. Journal of Consumer Research, 26:144-155.

Knott, A. M., Bryce, D. J., & Posen, H. E. (2003). On the strategic accumulation of intangible assets. Organization Science, 14 (2): 192-207.

Lampel, J., Lant, T., & Shamsie, J. (2000). Balancing act: Learning from organizing practices in cultural industries. Organization Science, 11(3): 263–69.

Lampel, J., Shamsie, J., & Lant, T. (2005). The business of culture: Strategic perspectives on entertainment and media. Mahwah, NJ: Lawrence Erlbaum Associates.

Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83: 340-363.

Mezias, J., & Mezias, S. (2000). Resource partitioning and the founding of specialist firms: The American feature film industry, 1912-1929. Organization Science, 11: 306-322.

Nichols, B. (1994). Discovering form, inferring meaning: New cinemas and the film festival circuit. Film Quarterly, 47(3): 16-30.

Orlikowski, W., & Yates, J. (2002). It's about time: Temporal structuring in organizations. Organization Science, 13(6): 684-700.

Penrose, E. T. (1959). The theory of the growth of the firm. New York: John Wiley.

Peretti, F., & Negro, G. (2006). Filling empty seats: How status and organizational hierarchies affect exploration versus exploitation in team design, Academy of Management Journal, 49 (4): 759-777.

Peterson, R. A., & Anand, N. (2004). The production of culture perspective. Annual Review of Sociology, 30: 311-334.

Podolny, J. M., & Phillips, D. J. (1996). The dynamics of organizational status. Industrial and Corporate Change, 5: 453-471.

Podolny, J. M. (2005). Status signals: A sociological study of market competition. Princeton: Princeton University Press.


Ramey, K. (2002). Between art, industry and academia: The fragile balancing act of the avant-garde film community. Visual Review, 18(1-2): 22-36.

Rao, H. (1994). The social construction of reputation: Certification contests, legitimation, and the survival of organizations in the American automobile industry: 1895-1912. Strategic Management Journal, 15: 29-44.

Rao, H., Greve, H. R., & Davis, G. F. (2001). Fool’s gold: Social proof in the initiation and abandonment of coverage by Wall Street analysts. Administrative Science Quarterly, 46: 502-526.

Rao, H., Monin, P., & Durand, R. (2003). Institutional change in Toque Ville: Nouvelle cuisine as an identity movement in French gastronomy. American Journal of Sociology, 108:795–843.

Rindova, V. P., Williamson, I. O., Petkova, A. P., & Sever, J. M. (2005). Being good or being known: An empirical examination of the dimensions, antecedents, and consequences of organizational reputation. Academy of Management Journal, 48:1033-1049.

Rothaermel, F. T., & Hess, A. M. (2007). Building dynamic capabilities: Innovation driven by individual, firm, and network level effects. Organization Science, 18: in press.

Simonton, D. K. (2004). Film awards as indicators of cinematic creativity and achievement: A quantitative comparison of the Oscars and six alternatives. Creativity Research Journal, 16: 163-172.

Telefilm Canada. (2004). Analysis of Canada's major film festivals. http://www.telefilm.gc.ca/data/communiques/rel_468.asp?lang=en&

Turan, K. (2002). Sundance to Sarajevo: Film festivals and the world they made. Berkeley: University of California Press.

Wade, J.B., Porac, J., Pollock, T., & Graffin, S. (2006). The burden of celebrity: The impact of CEO certification contests on CEO pay and performance. Academy of Management Journal, 49(4): 643-660.

Watson, M.R., & Anand, N. (2006). Award ceremony as an arbiter of commerce and canon in the popular music industry. Popular Music, 25: 41-46.


Computer Simulation: What’s the Story?

Brent H. Kinghorn Missouri State University

Computer simulation appears often in management theory development. However, computer simulation poses unique difficulties of understanding for many researchers and practitioners. The author suggests that qualitative foundations can illuminate this type of quantitative research. Storytelling facilitates the use of a specific qualitative program, ethnostatistics. Storytelling elements explicate the patterns developed, found, and reported within the computer simulation methodology. An example of the framework developed is given.

“Far better an approximate answer to the correct question than an exact answer to the wrong question.” John Tukey, 1962

INTRODUCTION

The increase in computer processing speed and the proliferation of computers have contributed to an increase in problem solving. The classic traveling salesman problem (Robinson, 1949), first solved for 49 cities (Dantzig, Fulkerson, & Johnson, 1954), has now been calculated for 25,000,000 cities (Applegate, Cook, & Rohe, 2003). Such a solution, possibly unfathomable when the problem was first posed, would not be possible without the benefit of computer processing speed and the general ease of use of computers. Of course, the practicality of a traveling salesman tour of 25,000,000 cities contributes little to social science research, but it does illustrate the role of computing in problem solving.
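As a concrete, if toy, illustration of the computational point above, the following is a minimal sketch of a greedy nearest-neighbour heuristic for a small traveling salesman instance. It is illustrative only; it is not the chained Lin-Kernighan method used by Applegate, Cook & Rohe (2003), and the random city coordinates and function names are invented for this sketch.

```python
import math
import random

def nearest_neighbour_tour(cities):
    """Greedy heuristic: start at city 0 and repeatedly visit the closest unvisited city."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    """Total length of the closed tour, returning to the starting city."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

if __name__ == "__main__":
    random.seed(1)
    # 49 random cities, echoing the size of the Dantzig et al. (1954) instance.
    cities = [(random.random(), random.random()) for _ in range(49)]
    tour = nearest_neighbour_tour(cities)
    print(f"Heuristic tour length over {len(cities)} cities: {tour_length(cities, tour):.3f}")
```

The point of the sketch is simply that a problem once requiring heroic hand calculation now runs in a fraction of a second, which is precisely the ease of computation whose consequences are discussed next.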

The increase in computing power generally means that researchers simply input data on one end and receive answers on the other. There is a special kind of ignorance of what may be happening in the middle, what may be happening in the computer. Researchers previously toiled long hours to calculate, recalculate, theorize, retheorize, and conceptualize their hypotheses and research questions; now less time is spent theorizing as data are entered at the front and results 'magically' appear at the end. This has led to a new term, 'HARKing', or hypothesizing after the results are known (Kerr, 1998), where researchers produce output and then create their hypotheses to match the data. The question therefore becomes: with this ease of calculation and manipulation, has human reasoning and theorizing in the social sciences been factored out?

Social science researchers recognize that most human behavior is the result of interdependent, yet simultaneous, processes that are increasingly complex (Harrison, Lin, Carroll, & Carley, 2007). The understanding of these multiple interdependent processes is contingent upon the development of theory and research on the consequences of that theory. The main limitation of traditional approaches to theory development and research is the simultaneous evaluation of these interdependent processes. If the individual processes are developed fully, evaluating them simultaneously and interdependently presents difficulties, especially if the processes interact in unforeseen ways. The increase in computing power, particularly in the form of simulation, can aid in this respect.

Simulation is an increasingly essential methodological approach in the social sciences. Research efforts in fields such as human resources (e.g., Blundell & Costa Dias, 2009; Schultz, Schoenherr, & Nembhard, 2010), leadership (e.g., Ballinger, Schoorman, & Lehman, 2009; Hunter, Bedell-Avers, & Mumford, 2009), strategy (e.g., Adner, 2002; Zott, 2003), and entrepreneurship (e.g., Noel & Latham, 2006; Lomi, Larsen, & Wezel, 2010) employ simulation to develop theory around their research questions. Despite these and similar studies, simulation-based methodology efforts remain largely debatable (Davis, Eisenhardt, & Bingham, 2007).

On one side of the debate, researchers advocate simulation as a means of theory development. When empirical data limitations exist, simulation supplies insight into complex relationships between constructs (Zott, 2003). Simulation also provides an analytically precise method for specifying the assumptions and theoretical logic of verbal theories (Kreps, 1990; Carroll & Harrison, 1998) and for tracing the outcomes of interactions between multiple organizational processes as they develop over time (Repenning, 2002). Advocates suggest that simulation is a useful means of developing new theory and extending existing theory.

The other side of the debate suggests that simulation provides little advance toward new theory development (Davis et al., 2007). In fact, some researchers argue that the outcomes of simulation research are so complex as to render any theorizing ambiguous and inconclusive (Fichman, 1999). Others imply that simulations are just "toy models" of phenomena (Robertson & Caldart, 2008) that simply restate the obvious or remove so much realism that no theoretical value is obtained (Chattoe, 1998). After all, most simulation models are based at least in part on some unrealistic assumptions (Rivkin, 2000), and the measured outcomes of simulation models are bit strings that represent choices, strategies, organizations, and so on (Bruderer & Singh, 1996; Lennox, Rockart, & Lewin, 2010; Miller & Lin, 2010; Rivkin, 2001). Critics of simulation point to the lack of control that comes from letting the machines do the work.

In our opinion, the contentious points in this argument lie not in the methodology but in the perception of the methodology. Skeptics believe that if human reasoning and theorizing have been factored out in favor of computing and technology, then how can one be sure that computer-generated data follow the correct means of scientific reasoning? If researchers using simulation could better relate what is happening and why at several key steps in the process, then those less familiar with simulation could be assured that the scientific method is intact. The elements of storytelling provide the relation point of what is happening and why, and ethnostatistics provides the key steps in the process.

Almost all human memory is story based (Schank, 1999). Information, while indexed and stored by different methods, is retrieved by means of stories (Woodside, 2010). By using the components of storytelling, difficult principles are made clearer.
Storytelling traces its origins back to Aristotle (350 BC), and later writers modified its components (Burke, 1945; Boje, 2002), but its benefit for understanding difficult situations is well acknowledged. Therefore, a story with many components provides a means to increased learning (Schank, 1999).

Ethnostatistics is the empirical study of the creation, use, and interpretation of statistics and numbers by academics, researchers, and other professionals (Gephart, 1988, 2006). It is during these three distinct steps in any quantitative process that ethnostatistics confirms whether the statistics or numbers used effectively represent the phenomena. Ethnostatistics is an application of ethnomethodology (Garfinkel, 1967) to the field of statistics. By using this practical application, we can extend ethnomethodology to computer simulation. Often, simulation is investigated only for the use step of the process, but we believe that by carrying the story through all levels of ethnostatistics a better judgment of validity can be made.

Therefore, the purpose of this paper is to explore how a particularly sophisticated technological quantitative methodology can avoid nagging questioning of its results by maintaining sound qualitative principles throughout the process. Computer simulation is a particularly sophisticated technological tool, utilized by some in the social sciences, whose results have drawn increased scrutiny. By marrying two particularly important pieces of qualitative methodology, namely storytelling and ethnostatistics, we will enhance the


benefit of computer simulation. The paper proceeds in the following fashion. First, a short review of computer simulation is conducted. Next, reviews of both storytelling and ethnostatistics, with linkages to each other, follow. Last, a review of a published computer simulation article illustrates the benefits of this approach.

SIMULATION

Simulation is defined as the use of computer software or programming to model real-world processes, events, or systems (Davis et al., 2007; Carley, 2001). This definition is consistent with other definitions that describe simulation as virtual (Macy & Willer, 2002; Carley, 2001), or more simply as a simplified picture of a part of the world that contains some of the real world's attributes and is much simpler than reality (March & Lave, 1975). Other functions for simulation include its use as a heuristic tool to develop hypotheses, models, and theories (Davis et al., 2007; Hartman, 1996), as an experimentation tool for numerical support, or as a pedagogical tool for understanding a process. As such, simulation is different from the two prior schools of science, deductive and inductive (Harrison et al., 2007).

Previous scientific efforts relied on two methodologies: theoretical analysis, or deduction, and empirical analysis, or induction. In the deductive methodology, assumptions are formulated and then the consequences of the assumptions provide conclusions. Typically, these assumptions are formulated as mathematical relationships and the consequences suggest conclusions through mathematical proofs or derivations. This methodology led to many successes where mathematical techniques are tractable enough to determine the consequences adequately. However, in most cases in the social sciences, the stochastic nature of social processes, or possibly the complexity of these processes, led researchers to choose assumptions based on the ability to derive consequences rather than on correspondence to reality. Even when well designed with the correct variables, the mathematical equations can only be solved in special and limited cases, which renders the results suspect.

Inductive science requires observation of variables, or data, and then analysis of those data to illuminate the relationships between the variables. In the social sciences, the inductive form of science tests the predictions of theoretical analysis. The major problem for inductive science is the availability of measurable and observable data. Variables that are difficult to measure, such as organizational problems (Simon, 1996) or the power level of suppliers, require proxy variables or other markers that represent the actual variables but are typically not the variables themselves. Variables that are difficult to observe, such as organizational trust or secret agreements, make testing theoretical predictions almost impossible. In addition, obtaining comparable measures across samples, or possibly across time, compounds the problem of viable data availability, rendering some inductive science suspect.

Simulation is a third way of doing science (Axelrod, 1997; Waldrop, 1992), because simulation is different from both deductive and inductive science in its goals (Axelrod & Tesfatsion, 2006). Simulation can handle the computational aspect of many more mathematical relationships, thereby overcoming the difficulties of the deductive sciences. At the same time, data availability is less difficult since simulation can produce 'virtual data' for use in its calculations. Rather than the historical view of what happened, and how, simulation looks forward to the future and to what-if scenarios (Dooley, 2002). Because of these features, simulation allows researchers to make more realistic assumptions rather than ones that are merely analytically convenient or constrained by data availability.
Finally, simulation allows researchers to generate hypotheses that are integrated and consistent across the whole system (Carley, 1999), while also allowing the experimental conditions to take place in as controlled an environment as is possible (Axelrod & Tesfatsion, 2006).

Simulation researchers use three different schools of practice; each delivers a different means of scientific results (Dooley, 2002). The first of these schools is systems dynamics. Systems dynamics entails identifying the key state variables that define the behavior of a system and defining the relationships among those variables through coupled differential equations (Sastry, 1997; Repenning, 2002). Systems dynamics is an extension of differential equations, and allows multiple differential equations to be

coupled to simulate the behavior of the system. System dynamics is limited to a single level of analysis and a single entity. Although limited in scope, system dynamics bridges some of the inductive and deductive methods to simulation.

The next school of simulation is discrete event simulation. This school involves modeling the entire system as a collection of individual entities evolving over time due to triggering events, such as the availability of resources. This school contains such families of simulation as cellular automata, developed from game theory (Lomi & Larsen, 1996), and learning models, developed from psychology and simulated evolution (Levinthal, 1997; Rivkin, 2000; Rivkin & Siggelkow, 2003). Both of these families model more than one level, such as the individual or society, and the interactions of agents between those levels. Multiple levels are necessary for investigating emergent phenomena. Emergent data are properties of a system that exist at a higher level of aggregation than the original description of the particular system.

The third school of simulation is agent-based simulation. Agent-based simulation requires agents that are autonomous, sensing, and acting, and that represent individuals, groups, or entire organizations (Carley, 1995). These agents attempt to maximize, or sometimes minimize, their utility functions by interacting with other agents and available resources. Programmed schemas that are interpretive and action-oriented in nature determine agent behavior. Families in this school of simulation are multi-agent models, agent-based computational economics, agent-based models, and multi-agent systems. All of these families largely grew from the artificial intelligence community, including such grand projects as the original atomic bomb testing (Harrison et al., 2007). This school has multiple levels of modeling and highly complex agents with a myriad of complex interactions.
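To make the agent-based school concrete, the following is a minimal Python sketch of a toy agent-based model. The agents, their 'greed' parameter, the shared resource pool, and the emergent inequality measure are all invented for illustration; the sketch is not drawn from any of the cited studies. It does, however, show the basic loop the literature describes: autonomous agents sense a shared environment, act to raise their own utility, and a system-level pattern emerges from their interactions.

```python
import random

class Agent:
    """A toy autonomous agent that senses a shared resource pool and acts to raise its utility."""
    def __init__(self, greed):
        self.greed = greed      # fraction of the visible pool the agent tries to take
        self.utility = 0.0

    def act(self, pool):
        take = min(pool, self.greed * pool)
        self.utility += take    # utility here is simply accumulated resources
        return pool - take

def run_model(n_agents=20, periods=100, seed=0):
    """Iterate agents over a replenishing resource pool and report emergent inequality."""
    random.seed(seed)
    agents = [Agent(greed=random.uniform(0.01, 0.2)) for _ in range(n_agents)]
    pool = 100.0
    for _ in range(periods):
        random.shuffle(agents)          # interaction order is itself a modeling assumption
        for agent in agents:
            pool = agent.act(pool)
        pool += 50.0                    # the pool replenishes each period
    utilities = sorted(a.utility for a in agents)
    return utilities[-1] / max(utilities[0], 1e-9)   # ratio of best- to worst-off agent

if __name__ == "__main__":
    print(f"Emergent utility ratio across agents: {run_model():.1f}")
```

Nothing in the code specifies the final inequality ratio directly; it emerges from the agents' heterogeneous rules and their order of interaction, which is the sense of "emergent data" used above.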

STORYTELLING

Storytelling, particularly in organizations, is the preferred method of evaluating the relationships involving internal and external stakeholders (Boje, 1991). Berry (2001) notes, "Stories are a fundamental way through which we understand the world…By understanding the stories of organizations, we can claim partial understanding of the reasons behind visible behavior" (p. 59). Individuals in an organization engage in an incremental refinement of the old stories within new events, especially during turbulent times. The old stories hold a type of precedent for the interpretation of the current situation. Even during stable times, portions of the stories of organizational experience are told and retold in a social manner that serves as precedent for individual interpretation. Story is therefore an important but oftentimes misunderstood development within the organization.

Academic research differs on story versus narrative versus even antenarrative (Boje, 2001). Stories perhaps exist as flowing soup (e.g., Weick, 1995), after plot is added (e.g., Czarniawska, 1997), or as fragmented, non-linear, unplotted speculation (e.g., Boje, 2006). For the purposes of this paper, Ricoeur's endorsement of Gallie's (1968: 22) approach is adopted:

A story describes a sequence of actions and experiences done or undergone by a certain number of people, whether real or imaginary. These people are presented either in situations that change or as reacting to such change. In turn, these changes reveal hidden aspects of the situation and the people involved, and engender a new predicament, which calls for thought, action, or both. (1984:150).

Storytelling is a method of understanding relationships rather than simply performance output numbers. The stories told by individuals reflect a deeper comprehension of the situation than mere numbers might be able to provide. The exploration of story requires an understanding of what is involved in the story. Aristotle (350 BC) proposed a set of poetics to describe story. Aristotle's poetics included:
• Plot – The incidents of the story; how the events or tasks of the story move along
• Character – The actors of the story; who is performing the events or tasks of the story


• Theme – The purpose of the story; why the events or tasks of the story are being done
• Dialog – The voice of the story; how the story is told
• Rhythm – The pattern of the story; how the story is told
• Spectacle – The stage of the story; what the actors look like and what the surrounding environment looks like

Burke (1945) modified Aristotle's six poetics into his Pentad of five elements of dramatism. The Dramatism Pentad combined aspects of Aristotle to model the 'who, what, where, how, and why' of theater. Burke argued that the dramatism pentad exposed different relationships within humans' symbolic use of action and communication. Burke later suggested that he should have added frame to his Pentad to create a Hexad (Burke, 1972). Frame, as Burke argued, is a competition between the dialectic of frames of acceptance and frames of rejection. Burke suggests that frames are the boundaries within which the communication or theatre takes place.

Boje (2002) attempted to expand and align Aristotle's six poetics with Burke's Pentad (Hexad) and created a Septet Grammar of the Leadership Situation. In his Septet, Boje included all six of Aristotle's poetics, restoring the rhythm that Burke had collapsed with dialog to create scene, and included the frame element. Additionally, Boje provided for a multitude of stories at the same time (polyphony). This addition meant that each element could comprise multiple instances working as a whole or in part with other elements in the Septet. The pluralized element names therefore denote this polyphony of stories. Table 1 contains a summary of this development.

TABLE 1 STORYTELLING THROUGH THE YEARS

Aristotle (350 BC), Poetics of Grammar | Burke (1945), Dramatism Pentad (Hexad) | Boje (2002), Theatrics of Leadership (Septet) | Definition
Plot | Act | Plots | What is being done; the events, construction, or processes of the story
Character | Agent | Characters | Who is acting; the actors who are involved in the events or processes
Theme | Purpose | Themes | Why it is being done; the rationale employed in resolving the events or processes
Rhythm | Agency | Rhythms | How the actors are doing the plot; the repetitive cycles, chaotic disruptions
Dialog | (none) | Dialogs | How the actors are doing the plot; the thoughts of the actors in words
Spectacle | Scene | Spectacles | Where it is being done; the stage, costumes
(none) | (Frame) | Frames | Boundary conditions; the context in which the plot can be accomplished

Using elements of Boje's Septet grammar, this paper now moves to linking agent-based modeling (ABM) to the three moments of ethnostatistics. Only four elements will be used for this paper: plot, character, theme, and frame. Plot will illuminate 'what' is being done, character the 'who', theme the 'why', and frame the boundaries. These elements tie directly to the stated purpose of ABM. The exclusion of the remaining three

elements does not imply their non-existence or unimportance; they are simply beyond the scope of what is compared in this paper. For this reason, only the four elements are utilized.

ETHNOSTATISTICS

During the latter half of the twentieth century, social scientists turned to statistics as the method of transitioning from subjective speculation to a true science (Gould, 1981). In doing so, social scientists endorsed, albeit tacitly, the instruction of Lord Kelvin: "When you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind" (in McCloskey, 1985: 54). This tacit endorsement gave statistical and quantitative research methods an iron-fisted hold on social science research (Van Maanen, 1979). Gephart (1988), in a review of statistical practices, particularly in the social sciences, coined the term ethnostatistics:

(P)ropose the term ethnostatistics to refer to the study of how statistics are actually constructed and used, particularly in scientific research. The prefix ethno suggests a concern for the actual behavior, and the informal subcultural, folk, or ethnic knowledge and activities of statistics producers and users. This informal knowledge complements and extends the formal, codified technical knowledge involved in statistics. Ethnostatistics is concerned with the mundane, everyday life practices, and the lay and professional knowledge necessary to implement and use statistics…Ethnostatistics as a domain of empirical inquiry complements statistics as a technical field of science. [Gephart, 1988: 10]

Gephart (1988) suggests that ethnostatistics has three levels, or moments, of analysis: constructing numbers, analyzing numbers, and interpreting numbers.

The first moment of ethnostatistics utilizes qualitative methods to study the naturally occurring activities and meanings involved in producing a statistic. Producing a statistic involves assembling data in a logical format. This assembly involves selecting phenomena to measure, observing them, and then coding the results. The particular variables must be selected, and the resulting activities associated with the variables must be observed in their proper setting, where practical constraints and other concerns can influence the observations and the measurement of those variables. Effort to minimize confounding behaviors that influence the outcome is of particular concern. Quantitative methods may not accurately describe the variables in such a 'natural' setting or within their particular context.

The second moment of ethnostatistics examines the adequacy of the technical and practical assumptions of the statistical analyses. Of particular interest in the second moment are the use of one technical or statistical assumption when others are available, and the implicit assumptions about cognitive or social features of the research process. In this moment, researchers seek to explain and to critique potential problems with particular assumptions and practices, and to propose alternative assumptions and practices. Second-moment ethnostatistics also seeks to understand the perspective of the statistician in order to discover and assess the limits of the chosen technical and practical assumptions. The contextual difficulties found in the first moment are generally glossed over in this moment.

The third moment concerns the rhetorical or persuasive presentation of technical or statistical output. Rhetoric is the study of the use of language; in particular, it is the art of persuasion or the production of a particular argument persuading a particular audience. A package of antimethodology rhetoric indicates what is done, what seems to persuade, and why. Rhetoric concerns itself with explicitness, precision, and parsimony of argument. Rhetoric does not imply evasiveness or deception, as in the phrase 'mere rhetoric'; all sciences are rhetorical (McCloskey, 1985).

The observations of the first moment of ethnostatistics expose the researcher to the sometimes crude, always unbiased meanings and the tacit assumptions of relationships and activities. The first moment provides a basis for the assumptions and features implemented during the second moment of


ethnostatistics. The third moment of ethnostatistics, in turn, provides a means of investigating the information and assumptions utilized in the first two moments. In general, the program of ethnostatistics allows the researcher to monitor across research methodologies as well as to explore science at three moments that are mutually relevant but analytically separable.

THEORETICAL LINKAGES

Recall that agent-based modeling involves pattern seeking. It is a bottom-up process of evaluation rather than a top-down, control-driven simulation. Using the ethnostatistic program as a medium of analysis, the utility of ABM will be evaluated. The thread of connection will be the qualitative research method of storytelling. Table 2 summarizes important data-gathering points from moment 1, assumptions to be evaluated in moment 2, and explications of results in moment 3.

TABLE 2 ETHNOSTATISTICAL MOMENTS COMPARED TO STORYTELLING ELEMENTS

Element | Ethnostatistic Moment 1 | Ethnostatistic Moment 2 | Ethnostatistic Moment 3
Plots | What specific events, relationships, or processes are to be studied | Model the explicit events, relationships, or processes; are there implicit events, relationships, or processes that impinge on the explicit ones? | Importance of the patterns found in relation to the events, relationships, or processes
Characters | Characteristics of the actors; what makes them unique? | Model characters; are they 'modelable'? | Generalizability of characters and the patterns they followed
Themes | Deep understanding of why this is necessary or being done | Decision points and the criticality of those points measured out and modeled | Patterns found in decision points made by model
Frames | Context of events, relationships, or processes | To what degree can the boundary conditions be modeled | Generalizability of patterns across contexts

The first moment of ethnostatistics studies the assemblage of data and the meaning involved in the production of a statistic. Quantitative methods are generally unable to describe such variables accurately, and therefore the employment of qualitative methods allows for the capture of the data. In the first moment of ethnostatistics, four elements of Boje's Septet Grammar provide an excellent tool for developing insights into the studied events. Careful attention to an accurate portrayal of these four elements from the stories gathered will enable a successful agent-based model.

The frame is the most important element. It puts the process or relationship in the correct context. Without a proper context, a researcher may skew the rest of the data. The frame is also the most difficult to assess; the researcher must be conscious of correctly ascertaining this context. On the other end of the spectrum are the characters, or agents. The agents may be the easiest element to discover, but understanding the meaning wrapped up in the agents might also prove elusive. For this reason, theme is another important concern for the researcher. Theme provides a deeper understanding of the why question of the process or relationship. The researcher must delve deep to understand this issue in order to be successful. The final element is plot. The plot element in this instance ties the understanding together; the plot is the thread running through the story itself. If all four of the elements are sufficiently satisfied, then the first moment will also be sufficiently satisfied.
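As a minimal sketch of how the four story elements gathered in the first moment might be recorded before any model building begins, the following Python container is offered. The field names and the example values are hypothetical and are not drawn from any particular study; the point is only that the frame, theme, characters, and plots can be captured explicitly and checked for completeness before moving to the second moment.

```python
from dataclasses import dataclass, field

@dataclass
class StoryElements:
    """Container for the four Septet elements carried from the first moment into model building."""
    frame: str                                        # boundary conditions / context of the study
    theme: str                                        # why the process or relationship matters
    characters: list = field(default_factory=list)    # the actors to be modeled as agents
    plots: list = field(default_factory=list)         # explicit relationships among the actors

    def ready_for_modeling(self) -> bool:
        """All four elements must be present before moving to the second moment."""
        return bool(self.frame and self.theme and self.characters and self.plots)

# Hypothetical example values, purely for illustration.
story = StoryElements(
    frame="a single regional delivery network observed over one fiscal year",
    theme="why on-time delivery varies across otherwise similar depots",
    characters=["depot managers", "drivers", "a shared vehicle pool"],
    plots=["scheduling decisions -> delivery timeliness"],
)
print(story.ready_for_modeling())
```

Recording the elements in a structure like this makes the transition to the second moment auditable: any assumption introduced during modeling can be compared against what was actually gathered in the first moment.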

The second moment of ethnostatistics is a study of the practical and statistical assumptions. In this moment, the researcher has selected the method by which to study the process or relationship; that method here is ABM. The suggestion is to pay attention to the four elements of story in order to build the correct model. As mentioned before, ABM can only provide descriptions based on its inputs. Creation of an incorrect model only leads to incorrect descriptions. Staying true to the elements of story eliminates this potential problem. As mentioned in the first moment, after careful consideration of the four elements of story the researcher can build an accurate model of the studied process or relationship. This is not a trivial matter. A thorough understanding of the four elements is essential before moving to the model-building phase. If the correct data are not discovered in the first moment, then time spent in the second moment is wasted. This is the point at which most researchers simply accept the portion in the middle: insert data in slot A and results come out of slot C. The model must hold true to the data discovered in the first moment, and the researcher must depict each element accurately within the model. If all four elements are sufficiently modeled according to the data gathered in the first moment, the second moment will be sufficiently satisfied.

The third moment of ethnostatistics is a study of the rhetoric, or persuasive presentation, of the research. In this moment, the researcher seeks to persuade the audience that the research design and data sufficiently illustrate the studied process or relationship. In this moment, the analysis of the previous moments' work takes place. Specific assumptions that differ from the first moment must be justified, and this justification must transpire before the results are accepted. Any difference must be scrutinized before results are given. Failure to communicate or examine the differences between the first and second moments forces questions about any results. Finally, the presentation of results takes place. The results must represent the depth and breadth found in the first moment specifically, but should include the second moment as well. Recall that this presentation should not include evasive, divisive, or deceptive rhetoric, as doing so would immediately call the results into question.

Specific attention to each moment of ethnostatistics and to the highlighted elements of storytelling provides a useful program for the inclusion of humans in ABM. As each moment is developed, the researcher can be sure that the allure of quantification has not overwhelmed them. By following the storytelling threads through each moment, the researcher knows that control does not, in fact, rest with the computers.

AN EXAMPLE

Perhaps the best method of explaining the theory developed here is to give an example from a published article that uses computer simulation. This example is neither an endorsement nor a denouncement of the article, but merely an illustration of the usage of this theory. The article is "Dynamic Capabilities and the Emergence of Intraindustry Differential Firm Performance: Insights from a Simulation Study" (Zott, 2003), published in the Strategic Management Journal. The article looks at a difficult question for strategic management researchers. The question of "Why are firms different?" eludes researchers despite much empirical and theoretical effort to answer it (Rumelt, Schendel, and Teece, 1994). Economic theory predicts that differences between firms, particularly competing firms, will dissipate if not disappear completely over time, but empirical evidence shows this is not true (Zott, 2003). Citing several key studies with empirical evidence, Zott introduces the main theme of his study, namely the question, "Why do firms in the same industry perform differently?" (Zott, 2003).

First Ethnostatistical Moment
According to the theory established earlier in this paper, the theme is the why of storytelling. The theme in the first ethnostatistical moment establishes the understanding of why the study is necessary. Although Zott does not go to great lengths to provide a deep understanding of the why, his treatment is sufficient for two reasons. First, the theme will be ever present behind everything that this study seeks to provide. Second, Zott's study appears in an organizational strategy journal, where the


background and deep understanding of the why is understood by the readership with only small reminders.

The theme introduces the plot of this moment of the study. Since the theme is the why, the plot provides us with the what, based on the theme. In this first ethnostatistical moment of the Zott study, the what involves the firms within a particular industry and their respective performances; more specifically, the relationships between firms in the same industry that differ based on dynamic capabilities, and between those dynamic capabilities and firm performance. Although the characteristics of dynamic capabilities have been theorized and studied, the effect of those capabilities on firm performance within an industry supports the theme of the study. Zott devotes a section of the paper to further exploring the rationale for those relationships.

Just as the theme introduces the plot, the plot introduces the actors. The actors in the first ethnostatistical moment are, at first glance, firms and firm performance. Recall that actors are the ones involved in the plot. It is important to note that one actor is not, according to this ethnostatistical moment, simply the individual firms in the same industry but rather the bundle of capabilities that each individual firm may integrate, build, and reconfigure to address a sustainable competitive advantage. According to Zott, since all firms have access to similar resources, it is the dynamic capabilities which allow firms to compete within the same industry. Therefore, one of the actors is not the individual firm but the individual attributes of the dynamic capability that exists within the individual firm, namely costs, learning, and timing (Zott, 2003). Firm performance is the other actor that influences the plot. Unlike individual firms as actors, firm performance is the actor whose characteristics encompass product innovation, process innovation, and costs.

The frames are the boundary conditions that establish the context in which the plot can be accomplished. While frames are often explicitly stated, in Zott's development they are not. However, the implicit frames can be developed to allow the plot to develop. First, while individual firms are not the actors in the story, they do provide a boundary condition: each firm must have access to the entire resource configuration, and it is the actors and their relationships that are under study. If any firm had access to resources unattainable by the others, then the relationship, and therefore the plot, could not be realized. Second, the actors themselves must be held distinct enough to be measured. Zott does acknowledge that the actors interrelate with each other, so this will be a difficult boundary to maintain.

The first ethnostatistical moment thus draws out these story characteristics: a theme of firms in the same industry performing differently; a plot of relationships of the dynamic capability construct between firms and performance; actors of costs, learning, timing, and firm performance; and, finally, frames of access to the same attributes (being in the same industry) and of those attributes being distinct. Identification of these aspects provides a smooth transition to further development beyond the first ethnostatistical moment.

Second Ethnostatistical Moment
For the purpose of this theory development, the second ethnostatistical moment is the most important. The second ethnostatistical moment involves the construction of the data. In the Zott paper, creation of the data takes place within a computer simulation model. Therefore, an investigation of the construction of data through computer simulation will determine whether computer simulation can move from theory development to experiment.

One of the more important aspects of the second ethnostatistical moment is the actors. The first actor developed in the Zott article is firm performance. Recall that firm performance evolves from product innovation, process innovation, and costs. The creation of the product and process innovation terms comes from the quadratic relationships in the theory development; the reader is left to trust the formulas established for these two pieces of firm performance. The third piece, total production costs, comes from the inverse relationship between accumulated cost-reduction effort and costs. This relationship also yields a formula based on the theory development; the quadratic formulas and the production cost formula are then combined into an objective function built on a demand function drawn from a theory of competition. The development of this formula is spelled out in great detail. While the formula looks very involved, the

step-by-step derivation of this formula makes it seem much simpler. The reader is left to accept the creation of data for the actors of this moment. The development of these actors is important for the support of this study.

The next aspect that becomes apparent in the second moment is the boundaries. The boundaries in the second moment assure that the actors can perform the plots and portray the themes. The boundaries for the computer simulation involve the assumptions built into the programming of the model. These assumptions provide the model with the boundary conditions in which the actors perform. In the Zott article, these assumptions can be found in Appendix 2 (Zott, 2003). These assumptions should be evaluated for soundness in providing a realistic model. Several of these assumptions are made for convenience's sake (e.g., 200 simulation periods, an experimental change of 5%, instantaneous selection of variables). Other assumptions spell out the formulas for each actor and aid in the realism of those formulas (e.g., fitness values for profits from selection, rationalization rules for pricing based on competition). These assumptions set the boundaries for the creation of data in this second moment.

The plots, in the second ethnostatistical moment, model the explicit relationships and check for any implicit relationships. In Zott (2003), the plots for the second moment can be found in one particular section of the article. In that section, Zott suggests a priori propositions for each of the relationships. Zott "maps" these relationships for different stages of the model. Using theoretical underpinnings, Zott attempts to show how each of these relationships links a set of the characters (each stage of the original model) to firm performance. For instance, the timing of resource deployment, or the retention actor, fosters differential firm performance for stochastic reasons (selecting to change some percentage of the time and not to change the remaining percentage) or for suspected performance improvement reasons. This one relationship can support the plot explicitly, as stated in the proposition, or implicitly be influenced by another actor, as noted in the footnote. As shown by this example, the plots establish the relationships between the actors. However, the key to validity for the reader, in the second moment, is in showing each of these relationships and understanding how the relationships are being created. Zott has done so with his development of propositions, or plots.

The final piece of the storytelling puzzle involves themes. Themes in the second ethnostatistical moment measure out the decision points and show the modeling of those points. Once again, Zott sets aside a section of the paper to establish this portion. In fact, he states, "at this point, it is necessary to verify the logic and soundness of the conjectures developed in the previous section and test their robustness by simulating the model introduced earlier" (Zott, 2003, p. 109). In making this statement, Zott goes directly to the point of storytelling themes for this particular moment. The model holds each plot, or relationship, as either on or off, thereby controlling whether it affects the model. In doing so, Zott suggests that everything, except for the decision point in question, is held identical for the modeled firms. This allows the decision points encompassed in the plots to be studied.
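The following is a toy sketch, not Zott's (2003) actual model, of how convenience assumptions such as the 200 simulation periods and the 5% experimental change noted above become explicit boundary conditions in code. The functional forms, parameter values, and variable names are invented for illustration only; the point is that every such constant is a visible, inspectable assumption of the second moment.

```python
import random

PERIODS = 200          # simulation periods, echoing the convenience assumption noted above
CHANGE_PROB = 0.05     # 5% chance per period that a firm experiments with its capability

def performance(capability, cost_effort):
    """Invented quadratic payoff from capability, minus a cost that falls with accumulated effort."""
    innovation = capability - 0.5 * capability ** 2
    production_cost = 1.0 / (1.0 + cost_effort)
    return innovation - production_cost

def simulate(n_firms=10, seed=42):
    """Run the toy model and return end-of-run performance for each simulated firm."""
    random.seed(seed)
    capability = [random.uniform(0.1, 0.9) for _ in range(n_firms)]
    effort = [0.0] * n_firms
    for _ in range(PERIODS):
        for i in range(n_firms):
            effort[i] += 0.1                      # cost-reduction effort accumulates each period
            if random.random() < CHANGE_PROB:     # stochastic timing of capability change
                capability[i] = min(1.0, max(0.0, capability[i] + random.uniform(-0.1, 0.1)))
    return [performance(c, e) for c, e in zip(capability, effort)]

if __name__ == "__main__":
    results = simulate()
    print(f"Spread in end-of-run performance across firms: {max(results) - min(results):.3f}")
```

Because the boundary conditions sit at the top of the script as named constants, a reader can trace each reported pattern back to the assumptions that produced it, which is exactly the scrutiny the second moment asks for.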

Third Ethnostatistical Moment
The third ethnostatistical moment deals with the relevance of the output, or the results, of the second moment. This moment is typically the most important portion of the study, as the benefits of the study are illuminated here. In the Zott paper, this is of particular importance because the author must also interpret what the output depicts, since it is not as straightforward for researchers as other statistical methods. Zott handles each of the plots, or relationships, individually to suggest the importance of each pattern depicted by that relationship.

In doing so, Zott first looks at the themes of each relationship. The themes are the patterns found from the model. Zott labels these as impacts on each individual plot, or relationship. Most of the impact sections refer to the corresponding figure showing firm performance over time as each firm is affected by the relationships of the other actors. This leads to a discussion of the importance of each relationship in terms of its implications for theory. Taking the theme and the plot together underscores the importance of each individual actor for the actor of firm performance. Zott also gives an update to the relationships (propositions) established in the second moment; several of these relationships needed to be modified based on the results of the computer simulation.


Interestingly, Zott chose also to explain several of the characters alongside the individual relationships. The characters in the third ethnostatistical moment involve the generalizability of the characters. After establishing the themes and the frames, the theoretical consistency shows the influence of the computer simulation on the patterns of the characters, and thus the effect of the characters is also explored at this time.

The final piece of the storytelling puzzle in the third moment is the frames of this study. The frames in the third ethnostatistical moment deal with the generalizability of patterns across different contexts. Zott does mention in his conclusion that the actors and their patterns show some interaction, and that the benefit is not sustainable as the firms tend to move toward equilibrium. He also mentions individual relationships that effect some change. To his credit, Zott also acknowledges limitations of his study, and therefore a limitation of computer simulation in this particular instance. These all speak to the generalizability of the study.

CONCLUSION

Computer simulation as a means of scientific research, particularly in business research, is becoming increasingly popular (Mezias & Eisner, 1997). However, understanding how to wrest experimental control from the computers is important and is a central theme of this theoretical development. This paper develops a means by which a computer simulation study can be reported and understood through the use of storytelling and ethnostatistics. An example of how this theory could be used was also given.

After the example, several key conclusions become apparent. First, a computer simulation must be very explicit if it is to reach the status of other empirical study methods. The theoretical development must move seamlessly through to the creation of the data through simulation methods, and then finally to the understanding of the results. The elements of storytelling provide a focus for such a transition. Second, evaluation of all empirical studies can be done through the lens of this theoretical development. Through the evaluation of these storytelling elements and a separation of the three moments of ethnostatistics, researchers have a framework through which to evaluate a study.

REFERENCES

Adner, R. (2002). When are technologies disruptive? A demand-based view of the emergence of competition. Strategic Management Journal, 23: 667 - 688.

Applegate, D., Cook, W., & Rohe, A. (2003). Chained lin-kernighan for large traveling salesman problems. INFORMS Journal on Computing, 15(1): 82-92.

Aristotle (written 350 BCE). Cited in the (1954) translation Aristotle: Rhetoric and Poetics, introduction by F. Solmsen; Rhetoric (W. Rhys Roberts, Trans.); Poetics (I. Bywater, Trans.). New York, NY: The Modern Library (Random House). Custom is to cite part and verse (e.g., Aristotle, 1450: 5, p. 23 refers to part 1450, verse 5, on p. 23 of the Solmsen, 1954, edition). An online version translated by S. H. Butcher is available at http://classics.mit.edu/Aristotle/poetics.html or http://eserver.org/philosophy/aristotle/poetics.txt

Axelrod, R. (1997). Advancing the art of simulation in the social sciences. Complexity, 3(2): 16-22.

Axelrod, R. (2003). Advancing the art of simulation in the social sciences. Japanese Journal for Management Information System, 12(3):1-19.

Axelrod, R. & Tesfatsion, L. (2006). On-line Guide for Newcomers to Agent-Based Modeling in the Social Sciences, in L. Tesfatsion and K. Judd (eds), Handbook of Computational Economics, Vol. 2: Agent-Based Computational Economics. Amsterdam, The Netherlands: Elsevier.


Ballinger, G., Schoorman, F., & Lehman, D. (2009). Will you trust your new boss? The role of affective reactions to leadership succession, Leadership Quarterly, 20(2): 219 - 232.

Berry, G. (2001). Telling stories: Making sense of the environmental behavior of chemical firms. Journal of Management Inquiry, 10, 58-73.

Boje, D. (1991). The storytelling organization: A study of story performance in an office-supply firm, Administrative Science Quarterly, 36(1): 106-126.

Boje, D. (2001). Narrative Methods for Organizational & Communication Research. Thousand Oaks, CA.: Sage Publications, Inc.

Boje, D. (2002). What is Situation? Last accessed December 10, 2006, at http://business.nmsu.edu/~dboje/388/what_is_situation.htm

Boje, D. (2006). Storytelling Organization, London: Sage Publications, Inc. (In Press).

Burke, Kenneth (1945). A Grammar of Motives. Berkeley and Los Angeles, CA: University of California Press.

Burke, Kenneth (1972). Dramatism and Development. Barre, MA: Clark University Press with Barre Publishers.

Carley, K. (1995). Computational and mathematical organization theory: Perspectives and directions, Computational and Mathematical Organization Theory 1 (1), 39–56.

Carley, K. (2001). Computational approaches to sociological theorizing. In J. Turner (Ed.), Handbook of Sociological Theory: 69 - 84. New York, NY: Kluwer Academic/Plenum Publishers.

Chattoe, E. (1998). Just how (un)realistic are evolutionary algorithms as representations of social processes? Journal of Artificial Societies and Social Simulation, 1(3): 2.1 - 2.36.

Cyert, R. and March, J. G. (1963). A behavioral theory of the firm. Prentice-Hall: Englewood Cliffs, N. J.

Czarniawska, B. (1997). Narrating the Organization: Dramas of Institutional Identity. Chicago, IL: The University of Chicago Press.

Davis, J., Eisenhardt, K., & Bingham, C. (2007). Developing theory through simulation methods. Academy of Management Review, 32(2): 480 - 499.

Dantzig, G., Fulkerson, R., & Johnson, S. (1954). Solution of a large scale traveling salesman problem, Operations Research, 4: 393 – 410.

Dooley, K. (2002). Simulation research methods. In J. Baum (ed.) Companion to Organizations, London, United Kingdom: Blackwell.

Fichman, M. (1999). explained: Why size doesn’t (always) matter. Research in Organizational Behavior, 21: 295–331.

Gallie, W. (1968) Philosophy and the Historical Understanding. New York: Schocken Books.


Gephart, R. (1988). Ethnostatistics: Qualitative Foundations for Quantitative Research. Thousand Oaks, CA: Sage.

Gephart, R. (2006). Ethnostatistics and organizational research methodologies: An introduction. Organizational Research Methods, 9(4): 417 - 431.

Grimm, V., Revilla, E., Berger, U., Jeltsch, F., Mooij, W., Railsback, S., Thulke, H., Weiner, J., Wiegand, T., & DeAngelis, D. (2005). Pattern-oriented modeling of agent-based complex systems: Lessons from ecology. Science, 310 (November 2005): 987-991.

Harrison, J., Lin, Z., Carroll, G., & Carley, K. (2007). Simulation modeling in organizational and management research. Academy of Management Review. 32(4): 1229 – 1245.

Hartman, S., (1996). The world as a process: Simulations in the natural and social sciences. In R. Hegselmann, U. Mueller, & K. Troitzsch, eds., Modelling and simulation in the social sciences: From the philosophy of science point of view., vol. 23 of Series A: Philosophy and Methodology of the Social Sciences, 77 – 100. Norwell, MA: Kluwer Academic Publishers.

Holland J. (1975). Adaptation in Natural and Artificial Systems. The MIT Press: Cambridge. MA.

Holland, J., & Miller, J. (1991). Artificial adaptive agents in economic theory. The American Economic Review, 81(2): 365-270.

Hunter, S., Bedell-Avers, K., & Mumford, M. (2009). Impact of situational framing and complexity on charismatic, ideological and pragmatic leaders: Investigation using a computer simulation. Leadership Quarterly, 20(3): 383 - 404.

Kerr, N. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3): 196 – 217.

Lennox, M., Rockart, S., & Lewin, A. (2010). Does interdependency affect firm and industry profitability? An empirical test. Strategic Management Journal, 31(1): 121 - 139.

Levinthal, D. (1997). Adaptation on rugged landscapes. Management Science, 43: 934 – 950.

Lomi, A., Larsen, E., & Wezel, F. (2010). Getting there: Exploring the role of expectation and preproduction delays in processes of organizational founding. Organization Science, 21(1): 132 - 149.

Lomi, A., & Larsen, E. (1996). Interacting locally and evolving globally: A computational approach to the dynamics of organizational populations. Academy of Management Journal, 39: 1287-1321.

Macy, M., & Willer, R. (2002). From factors to actors: Computational sociology and agent-based modeling. Annual Review of Sociology, 28(1): 143 – 166.

March, J., & Lave, C. (1975). Introduction to Models in the Social Sciences, New York: HarperCollins.

Miller, K., & Lin, S. (2010). Different Truths in Different Worlds. Organization Science, 21(1): 97 - 114.

Noel, T., & Latham, G. (2006). The importance of learning goals versus outcome goals for entrepreneurs. International Journal of Entrepreneurship & Innovation, 7(4): 213 - 220.


Repenning, N. (2002). A simulation-based approach to understanding the dynamics of innovation implementation. Organization Science, 13: 109 – 127.

Ricoeur, P. (1984). Time and Narrative, Volume 1, Translated by K. McLaughlin and D. Pellauer, Chicago, IL: University of Chicago Press.

Rivkin, J. (2000). Imitation of complex strategies. Management Science, 46: 824 – 844.

Rivkin, J. (2001). Reproducing knowledge: Replication without imitation at moderate complexity. Organization Science, 12(3): 274-293.

Rivkin, J. & Siggelkow, N. (2003). Balancing search and stability: Interdependencies among elements of organizational design. Management Science, 49: 290 – 311.

Robertson, D. & Caldart, A (2008). Natural science models in management: Opportunities and challenges. Emergence: Complexity & Organization, 10(2): 61 -75.

Robinson, J. (1949). "On the Hamiltonian Game (a Traveling Salesman Research Problem)", RAND Memorandum, RM-303.

Sastry, M. (1997). Problems and paradoxes in a model of punctuated organizational change. Administrative Science Quarterly, 42: 237 – 275.

Schank, R. (1999). Dynamic memory revisited. Cambridge, UK: Cambridge University Press.

Schultz, K., Schoenherr, T., & Nembhard, D. (2010). An example and a proposal concerning the correlation of worker processing time in parallel tasks. Management Science, 56(1): 176 - 191.

Simon, H. (1996). The Sciences of the Artificial – 3rd Edition. Cambridge, MA: The MIT Press.

Tukey, J. (1962). The future of data analysis. Annals of Mathematical Statistics, 33(1): 1-67.

Weick, K. (1995). Sensemaking in Organizations. Thousand Oaks, CA: Sage Publications, Inc.

Woodside, A. (2010). Brand-consumer storytelling theory and research: Introduction to a Psychology & Marketing special issue. Psychology & Marketing, 27(6): 531-540.

Zott, C. (2003). Dynamic capabilities and the emergence of intraindustry differential firm performance: Insights from a simulation study. Strategic Management Journal, 24: 97 - 125.


Fourteen More Points: Successful Applications of Deming’s System Theory

Thomas F. Kelly Dowling College

This paper presents 14 additional application strategies for implementing the systems concepts of W. Edwards Deming. These strategies can all be generalized to virtually all organizations to enable them to do more and better with less. In these economically stressed times, many organizations are struggling to survive. A key challenge is to adapt their operations in such a way as to make them both financially sustainable and capable of succeeding in an ever more competitive environment through ongoing self-improvement. This paper presents change strategies that are both significantly more effective and less costly.

We must recognize the power of systems thinking. Using systems thinking we can extend and apply Deming's ideas to transform all of our organizations. We can do more and better with less. As our economy continues its downward trend, we are facing mushrooming debt at all levels of government and in most of the private sector as well. We can use systems thinking to change organizations (systems) to improve and increase productivity while reducing required resources. We can apply systems thinking to all forms of organizations and save American education as well as our entire economy. Certainly, significant improvement in our educational system by itself will cause significant improvement in our economy, among many other things. In building on Deming's work there are fourteen more points, or key strategies, we can identify and use to drive systemic change and save our education system, companies, governmental agencies and the numerous other organizations that make up our entire economy. Some of these are already being applied in various sectors of the business community (see Kelly, 2012 for these and other examples referred to below).

1. Reallocate existing resources. Before we look to increase resources we must determine where there are existing resources that can be used more effectively. Much of the trillions of dollars we are currently spending annually can be used much more effectively in virtually all cases. Resources that are currently being used unproductively or underproductively offer a vast pool that can be reallocated to productive ends.

2. Restructure existing resources to improve the system. Deming's fourteen points are a good framework for doing this. We can also do systemic assessment for this purpose. In our efforts to pursue ongoing improvement we must commit to ongoing organizational self-assessment. There are a number of ways to do this described in my book, We Can Do More and Better With Less.

3. Use technology for innovation. We must make maximum use of new technology to change and improve the system. Not all changes require new technology but many do. Friedrich Engels (although almost always wrong) made an astute observation: all great advances in human knowledge have followed some technological breakthrough. For example, when the first cave person observed that a sharp stone could be used to cut things, a revolution occurred in tool making, hunting, weapons, shelter, etc. The new knowledge created changed the whole world. The easiest example of this phenomenon of technological innovation as the means to expand knowledge is the invention of the printing press. With this single innovation the whole world not only gained access to knowledge, but it could contribute to, preserve, share and expand it. Technology offers the ability to increase productivity tremendously in countless organizations. Today we are in the midst of a technological revolution that shows no signs of abating. Computers, the Internet, and numerous other innovations are improving and increasing productive capacity while reducing required resources. Change efforts must constantly seek ways to use technology to solve problems and increase productivity.

4. Use clear, simple data to measure and drive improvement. Avoid overuse of sophisticated statistics in presenting systems alternatives to most people. Specifically, we need data that assess the system, not just the people. Assessments of people, such as teacher evaluations and the state tests of students used in all 50 states for decades, have failed to improve student achievement nationally. This requires different forms of very simple uses of data. The more complex uses found in many attempts to implement Deming's ideas, while of great value in some cases, are usually unnecessary and frequently counterproductive in terms of convincing large numbers of people of the value of widespread use of data. More complex forms of statistics have valuable uses in many areas, but the popular culture finds only mystery in them. While systems thinking presents numerous wonderful concepts for structural change, we must keep the popular aversion to and fear of statistics in mind. At least at first, seek to explain systems thinking in the simplest terms possible in order to implement it in the general culture.

5. Educate our people to use W. Edwards Deming's systems methods. Deming has revealed the incredibly important insight that ongoing problems in organizations are not coming from the individuals in the system but from the structure of the system itself. The system is the problem. In schools, for example, we have been trying to "improve" teachers, administrators and students. This is the wrong focus, as billions in expenditures for these efforts and decades of education reform have clearly demonstrated. There is no evidence nationally of any improvement in student achievement, while student discipline problems and dropouts have increased dramatically. Chronic problems in organizations have chronic systemic causes. Unless and until we recognize these chronic causes and deal with them effectively we will continue to waste and misallocate billions of dollars and countless hours of work while failing to change their chronic effects.

6. Abandon the standard "we need more money" false solution. This is a classic example of a logical fallacy confusing quality and quantity. We must abandon this conventional all-purpose solution for solving all problems. It is not only false but is bankrupting America while preserving the failure of our economic system. The changes we need are not more but better. Specifically, in most instances we do not need more money and we do not need more work. In Deming's terms, we need quality over quantity. When more money is not available, this false solution serves as justification for the status quo or even failure.

7. Generalize implementation of effective models to other applications. Welfare reform is an excellent example of how a successful state innovation was replicated in other states and then nationally. When new structures are successful we must make them known to all who can use them to do more and better with less. The American automobile industry failed to adopt Deming's systems ideas, leading not only to inferior American cars but to the near collapse of our automotive industry in the face of foreign competition. We should set up a clearinghouse or network, available online and free, for these successful innovations. This could be done not only in general but also by specific organizations with their particular implementations.

8. Recognize that all organizations, public and private, must continuously find ways to increase productivity. We must always seek to develop systems that can produce more while consuming fewer resources. Only those organizations (systems) that do this will survive and thrive in an increasingly competitive global market.

9. Recognize that ongoing change causes ongoing unemployment, and develop standing or ongoing systemic responses. Change is systemic, ongoing and inevitable. This ongoing change is called "creative destruction" by economists. Old jobs are destroyed and new ones are created. This is an inherent and permanent characteristic of all economic systems. Resistance to change is futile and self-defeating. The incredibly rapid rate of change we now experience in our culture is only going to accelerate. Millions are unemployed, most through no fault of their own. Ongoing change will create millions more. Since this ongoing change is causing ongoing unemployment as a permanent condition in our economic system, we need systemic structures to deal effectively with it on an ongoing basis. The system needs to recognize these realities and create structures and processes to deal with them. Systemic problems must have systemic solutions. We need a kind of GI Bill to retrain the unemployed for new jobs. Improvement of our largest and severely ineffective antipoverty program, the public schools, is also an unrivaled imperative here.

10. Recognize the destructive impact of subsidies on individuals, organizations and the common good. Subsidies maintain unproductive jobs at the expense of those that are productive. The net effect of subsidies is to make the entire system less productive. The long term impact of subsidies on individuals who are unproductive is loss of self-worth and self-esteem. Workers who find themselves in unproductive jobs can be retrained if necessary and redeployed in productive work. Such retraining is an investment in our most important resource, our people. Subsidizing unproductive work penalizes all concerned, reduces competitiveness and harms the economy. Carried far enough, such subsidies will destroy the economy. Paradoxically, subsidizing unproductive jobs (which is usually meant to preserve jobs) causes general economic decline and increases long term unemployment.

11. Recognize the difference between an investment and a subsidy. An investment pays for itself in the long run, builds capacity and adds to the resources of the system. A subsidy diminishes the system by maintaining that which is unproductive at the expense of resources that could be used for things that are productive. Give a man a fish and he eats for a day. Teach him to fish and he eats for a lifetime.

12. Systems are interactive. Improving or weakening any part of the system affects the entire system in the same direction. The importance of recognizing this interactive nature of systems cannot be overstated. When we affect any part or subsystem, we affect the entire system. In terms of change this is good news. Systems are complex. They have many interacting subsystems. Efforts to change systems often fail because they are unfocused and attempt to change the whole system or too many dimensions or subsystems at the same time. This causes energy and effort to be dispersed and spread over too many different parts of the system. We can instead focus on one part of the system and, by successfully changing that subsystem, get positive changes in all of the subsystems with which it interacts.

13. Systems have levers, particularly important subsystems that can be identified and used to maximize the impact of change efforts. Since systems are made up of a number of subsystems that affect each other, knowledge of the levers in some subsystems that can be used for much greater impact on the entire system is extremely important. For example, a school has subsystems for curriculum, instruction, attendance, student discipline, scheduling, budgeting, etc. While all are important subsystems, some are far more important because they act as levers on the rest of the system. A constructive change in such a lever subsystem will have a big impact on the entire system. In a school, for example, a positive change in curriculum will have a positive impact not only on student achievement but also on student attendance, discipline, teacher morale, parent support, etc. Changes in less important subsystems, attendance policies for example, would also have a positive impact on the entire system but to a lesser degree. In fact, the public school system as a whole is one of the most important subsystems, or most powerful levers, in the entire economy and culture. Significant improvement there will have exponential positive impact on the larger system.

14. Systemic change should always be focused on capacity building, changing the system so as to increase its ability to produce.

Money is not the problem. It never was the problem and it is not the problem now. We are currently maintaining and subsidizing inefficiency, mediocrity, failure and decline. We are fiddling while Rome burns. The "we need more money" all-purpose solution has become the primary cause of the problem. For example, after thirty years and hundreds of billions of dollars spent on public education reform, student achievement data nationally are about where they were when we started. Totally lacking in imagination and ingenuity, "we need more money" is the automatic false solution we hear all the time. It is a common assumption not only in education but also in virtually all public, governmental and private organizations. As long as we continue to rely on this false solution, confusing quality and quantity, we will continue business as usual.

We must accept some necessary assumptions for leadership to accomplish successful systemic change. Jim Collins (2001) points out that great leaders confront the brutal facts. Too many of our leaders refuse to recognize the brutal long term facts for their own selfish short term gain. This inevitably hurts us all over time. We must confront the brutal facts that public K-12 and higher education are pricing themselves out of existence (as we have seen with General Motors, Chrysler, Bethlehem Steel and many others) while at the same time our product is not improving or changing to meet the new and ever changing needs of our students and economy. Our present educational and economic models must be modified if they are to be sustainable. This includes structural change in both government and private sector organizations. We need educational and economic models that build in a systemic change process based on ongoing organizational self-assessment and self-improvement. William Glasser is an additional great resource for systemic improvement. He tells us that the healthy personality deals with reality, accepts responsibility and cares about right and wrong. Leaders must not only take responsibility to recognize and accept the brutal realities but also develop ethical and effective means to deal with them. Glasser supplies the psychology Deming would approve of. I would add to the insights of these two geniuses: if you don't deal with reality, reality deals with you. I refer again to Bethlehem Steel, General Motors, Chrysler, etc. Were it not for government bailouts the latter two would be gone. They may still fail. The unsustainable reality of current government "entitlement" programs is also obvious.

Systemic changes may or may not require pain. Some systemic changes create little or no pain, e.g., reducing staff through attrition. Effective and timely planning can serve as one lever to minimize or eliminate the pain of unemployment. We can also do this by making retraining available for those who become unemployed. We must constantly try to anticipate the new knowledge and skills that may become necessary and work to provide appropriate training. Improving our educational system to teach certain areas we know will always be necessary will help tremendously. This includes teaching critical thinking, literacy and numeracy to high levels, which the present system fails to do. These will always be required. Making the changes described above in schools and other organizations will go a long way toward responding to and meeting this critical need.

REFERENCES

Collins, J. (2001). Good to Great: Why Some Companies Make the Leap…and Others Don't. New York, NY: HarperBusiness/HarperCollins Publishers.

Deming, W. E. (1986). Out of the Crisis. Cambridge, MA: Massachusetts Institute of Technology.

Deming, W. E. (1994). The New Economics for Industry, Government, Education. Cambridge, MA: Massachusetts Institute of Technology.

Glasser, W. (1965). Reality Therapy: A New Approach to Psychiatry. New York, NY: Harper & Row.

Glasser, W. (1985). Control Theory: A New Explanation of How We Control Our Lives. New York, NY: Harper & Row.

Kelly, T. (1996). Practical Strategies for School Improvement (2nd ed.). Wheeling, IL: National School Services.

Kelly, T. (2012). We Can Do More and Better With Less. West Conshohocken, PA: Infinity Publishing. This paper is a selection from this book.


Project Scope, Market Size Prospects, and Launch Outcomes in Cooperative New Product Development

Kimberly M. Green
University of West Georgia

This study investigates the relationship between the number of partners in cooperative new product development and the scope of the development project, the projected market size for the product, and the likelihood the product will be launched. With drug development in the pharmaceutical industry as the setting, the hypotheses are tested using hierarchical modeling and a dataset of 7,167 drugs across 86 firms during the period 1995 – 2006. Results suggest that the number of development partners is positively related to the scope of knowledge categories underlying the development effort, while the scope of product applications is associated with market size.

INTRODUCTION

A new product development (NPD) process is a driver of firms' future growth prospects and competitive positioning. Yet the process is complex and subject to considerable risk. Consequently, many firms opt to work with partners, even though sharing the downside also necessitates sharing the upside: cooperating firms share the risks and costs of development, but they also share resources, knowledge, and the payoff from NPD. While cooperative development allows firms to share risk, it adds the risk that a firm's knowledge could be misappropriated by partners. Additionally, cooperative arrangements require monitoring and management if the benefits are to be realized.

Existing research on cooperative development has examined both the performance of specific alliances and the performance of a firm's set of alliances in general. Studies suggest that factors such as the relative size of the firms, the type of information they are attempting to share, and the structure of the alliance provide insight into the performance of individual cooperative arrangements (Bierly & Coombs, 2004; Powell, 1998; Stuart, 2000). At the firm level, research has noted that some firms excel in managing the complexity of choosing partners and overseeing multiple relationships, exhibiting an alliance management or partnering capability (Rothaermel & Deeds, 2006). Further, a study of pharmaceutical firms found that the number of partners a firm has is related to the firm's total number of drugs on the market (Rothaermel & Deeds, 2004).

Research examining the number of alliances has considered the overall performance of the product development portfolio, but we have a more limited understanding of the relationship between the number of partners working on a specific project and the characteristics and outcomes of that project. What relationship, if any, exists between the number of partners and the scope and outcome of each individual product development initiative? And, given that partners share the payoff, is the number of partners associated with the market potential for a product? To consider whether the complexity of managing the cooperative arrangement is associated with the likelihood of launching a product, the study examines the relationship between the number of partners participating in the development of one product and whether or not the product launches. The present study takes a product-level perspective and uses biopharmaceutical product development – i.e., drugs – as the context.

THEORY AND HYPOTHESES

This study is grounded in the cooperative development research within the new product development literature. Themes from existing research that form the foundation for the present study include the motivations for entering product development alliances and the management of those alliances. Firms form alliances to gain access to a variety of resources, either tangible, such as funding, or intangible, such as skills that cannot be developed internally, network connections, an endorsement or reputation by association, and knowledge or expertise (Gerwin & Ferris, 2004; Powell, 1998; Stuart, 2000). Regarding the management of alliances, research considers both the management of all of a firm's alliance activity and the structure and coordination of activities with individual alliance partners. Researchers focusing on the firm's set of alliances, or "portfolio of coalitions," have reported that the relationship between the number of alliances and the level of new product development exhibits diminishing returns (Rothaermel, 2001). An increase in the number of a firm's cooperative arrangements is initially associated with an increased level of output, but coordination of an increasing number of alliances eventually becomes more difficult and returns diminish. Studies addressing the level of the individual alliance report that factors such as the relative size of the firms (Powell, 1998; Stuart, 2000) and the type of information they are attempting to share help to explain variation across cooperative arrangements.

Pharmaceutical data are useful for investigating each of three broad stages of innovation and new product development (Henderson & Cockburn, 1994; Roberts & McEvily, 2005): discovery, development, and commercialization. The first stage is the research or discovery process in which potentially effective therapies and compounds are identified as viable candidates to proceed to the development stage for testing. Next, the development process involves the testing of products and the selection of those that will be launched for sale in the market and use by consumers. The final stage is post-launch performance, with success in this stage defined as commercial success. Although the examples used to describe these stages are specific to pharmaceuticals, the stages have parallels in other industries. The present study concentrates on the middle stage: the development of the new product from the time it is identified as a potentially viable candidate up until the point of launch.

Project Scope
The scope of a product development effort can be manifest in more than one way. This study considers two: scope may indicate the number of different knowledge categories that developers draw on, or it may indicate the number of different uses for a product that the developers test and attempt to incorporate. The larger the scope of the product development effort, the greater the resource commitment that may be required to see the project through. Partnerships can provide access to those needed resources (Gerwin & Ferris, 2004). Research suggests that there are returns to scope but not to scale in drug development efforts (Cockburn & Henderson, 2001) and that focusing on only a few fields can make high-quality patents increasingly difficult to obtain (Lin & Chen, 2005). Consequently, firms may make use of alliances and cooperative development in pursuit of these benefits of scope, accessing partners' knowledge rather than relying solely on knowledge bases of their own. Research has also shown that firms tend to use narrower pipelines than they should in their product development efforts (Ding & Eliashberg, 2002). Using development partners may allow the firm to expand either the number of products or the scope of individual products in development. For example, partners could test alternative product uses in parallel development efforts, while a firm acting alone might have to experiment with alternative uses sequentially due to capacity or time bottlenecks with its employees, facilities, or budget allocations. Based on this logic, the number of development partners would be expected to be positively related to project scope. This study tests two operationalizations of project scope:

H1a: The number of development partners for a product is positively related to project scope when scope is conceptualized as the number of different knowledge bases underlying the project.

H1b: The number of development partners for a product is positively related to project scope when scope is conceptualized as the number of alternative uses for the product (i.e., the number of conditions the drug is intended to treat).

Projected Market Size
By providing access to knowledge and experience, alliances may allow for the development of products that have more extensive market appeal than a firm could realize if working alone. Partners may vary in their knowledge of the science, the market, and the development process. Partners with varied experience may recognize a different target market. In the case of products subject to governmental regulation, such as pharmaceuticals, partners may have better access to and understanding of the approval process in different countries. Alliances have been linked to speed of development when there is similarity and overlap in the knowledge bases of the firms (Rindfleisch & Moorman, 2001). Having partners may thus help to speed development toward the estimated launch date, so that the product is on the market generating sales revenues for a longer time while still protected by patent. If market size is measured in revenues, the number of partners could, then, be positively associated with market size. Additionally, partners may be chosen for their reputation (Stuart, 2000). A positive reputation can help to expand sales prospects for products such as pharmaceuticals, for instance, when sales depend on prescriptions or recommendations from physicians who may rely on the reputation of one or more partners or on positive prior experience with other products from those partners. Researchers have also noted that the development of innovative products benefits from the generation of a high number of creative ideas and that a greater number of ideas are generated collaboratively (Alves, Marques, Saur & Marques, 2007).

H2a: The number of development partners for a product is positively related to projected market size measured in sales revenue.

With its choice of knowledge bases that drive development or the alternative uses to be tested, a firm designs a product intended to meet the needs of a target market. A drug that is developed to treat multiple medical conditions is a product serving multiple customer segments and is an example of technology leveraging. As technology is exploited in an increasing number of markets, the value extracted from the technology increases (Allen, 2003). Even if some knowledge categories that are explored or alternative uses that are tested do not succeed and are not incorporated into the final version of the launched product, the lessons learned from those failed explorations may contribute to making the launched product better and more useful for customers. Research suggests that planning and controls, budgets and milestones facilitate the success of product development (Davila, Foster, & Li, 2009). Demand and revenue estimates are critical because significant development costs must be incurred before any revenue is realized (Allen, 2003). Product design may be revised and scaled back if the original design proves too expensive relative to the estimated market size. Market projections can be adjusted as new information either resolves or reveals uncertainty in the environment, leading to revised resource allocation to product development projects (Anderson & Joglekar, 2005).

H2b: The project scope is positively related to projected market size measured in sales revenue.

Product Launch
Both physical and knowledge resources may be shared in cooperative arrangements, and both can contribute to improving the chances of launching a product. New product development is a costly process. Partners can bring funding, facilities, or employees to contribute to the effort. The intangible knowledge resources may include technological expertise, product-market knowledge, or skills with the process of development. While some firms simply want access to a partner's knowledge, others may seek to acquire and internalize knowledge learned from the partner (Mowery et al., 1996). In either case, the knowledge shared when collaborating on a new product could improve the chances for successfully developing and launching that product. The partners may also create knowledge and, together, craft a new approach that is different from, and perhaps superior to, the approach that either partner might have pursued alone (Berends, van der Bij, Debackere & Weggeman, 2006). Research has shown that products developed in an alliance have a higher probability of success (Danzon et al., 2005). These points suggest that the chances for successful development of the product should be higher as the number of development partners increases:

H3: The number of development partners for a product is positively related to the likelihood that the product will be launched.

METHODS

The data for this study are drawn from the Adis R&D Insight database of drug development. This database is a product of Wolters Kluwer Health and is designed to provide insight into the drug development pipelines of companies in the biopharma industry, both for competitive intelligence purposes and for identifying possible partners for co-development or candidates for licensing. Because the database covers the development pipeline, it includes not only drugs on the market (i.e., launched) but also drugs under development and drugs for which development has been discontinued or canceled. This insight into not only successful NPD efforts (i.e., launched products) but also failed efforts is useful for furthering our understanding of the NPD process specifically and corporate innovation more generally.

For a subset of drugs in the database, an assessment of market potential is provided by market analysts prior to the drug's launch. The data for the present study are drawn from the 1995 – 2006 time period, during which the market analysis was provided by Lehman Brothers. This time period precedes both the uncertainties introduced by United States healthcare reform and the financial crisis and resulting recession that affected banks such as Lehman Brothers. For this study, data on the variables of interest are available for 7,167 drugs in the portfolios of 86 companies. The subset having market analysts' estimates includes 920 drugs in the portfolios of 71 companies. The data are analyzed using hierarchical linear modeling (HLM) to account for the nesting of products (drugs) within companies.

Operationalization of Variables
Descriptions of the variables, explanations of the calculations, and the rationale for each operationalization are included below.

Number of Different Product Uses
The number of different product uses represents the scope of the development project. In this study, the number of different indications for a drug is used as the measure of the number of different product uses. The number of indications is the number of unique conditions that a drug is intended to treat. As an example, the drug Entecavir is being tested for two different indications – Hepatitis B and Herpesvirus infections. This drug has two indications, regardless of whether it is actually launched to treat both or not.
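
As an illustration of this counting rule, the brief Python sketch below derives the number of product uses from a drug-indication listing. The column names and example rows are hypothetical stand-ins for the Adis R&D Insight export, not the actual file layout.

    import pandas as pd

    # Hypothetical drug-indication records standing in for the Adis R&D Insight export.
    indications = pd.DataFrame({
        "drug": ["Entecavir", "Entecavir", "DrugB"],
        "indication": ["Hepatitis B", "Herpesvirus infections", "Asthma"],
    })

    # Scope as the number of unique conditions each drug is intended to treat.
    n_product_uses = (indications.groupby("drug")["indication"]
                      .nunique()
                      .rename("n_product_uses"))
    print(n_product_uses)   # Entecavir -> 2, DrugB -> 1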

Number of Knowledge Categories
The number of knowledge categories is represented by the number of different therapeutic categories that underlie the drug. Standardized categories are used by the pharmaceutical industry to classify drugs based on the conditions they are intended to treat and their chemical composition (Nerkar & Roberts, 2004). The Adis R&D Insight database reports the World Health Organization's Anatomical Therapeutic Chemical (WHO-ATC) class for each drug. This classification system divides the drugs into groups according to the organ or system on which they act and their chemical, pharmacological and therapeutic properties. The WHO-ATC classification consists of five levels of increasing specificity. There are fourteen Level 1 classes or main groups (World Health Organization, 2013). Examples include category A = alimentary tract and metabolism, B = blood and blood forming organs, and N = nervous system. For the purpose of this study, the Level 1 classification is used to define the therapeutic categories because the categories are sufficiently different from each other to capture specialized and non-overlapping knowledge. The number of therapeutic classes for a drug is, therefore, the number of different Level 1 classes for all of the indications. A larger number of knowledge categories indicates greater scope.
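
A similar counting rule yields the knowledge-category measure. The sketch below assumes a hypothetical table pairing each drug-indication record with a WHO-ATC code and takes the first character of the code as the Level 1 class; the codes and column names shown are illustrative only.

    import pandas as pd

    # Hypothetical drug-indication rows with an illustrative WHO-ATC code for each;
    # the real codes come from the Adis R&D Insight database.
    atc = pd.DataFrame({
        "drug": ["Entecavir", "Entecavir", "DrugB"],
        "atc_code": ["J05AF10", "J05AB01", "R03AC02"],
    })

    # The Level 1 (main anatomical) group is the first character of the ATC code.
    atc["atc_level1"] = atc["atc_code"].str[0]

    # Scope as the number of distinct Level 1 classes across a drug's indications.
    n_knowledge_categories = (atc.groupby("drug")["atc_level1"]
                              .nunique()
                              .rename("n_knowledge_categories"))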

Projected Market Size
Investment banks employ analysts who typically specialize in an industry and focus on one or a few companies, providing forecasts of future earnings and the drivers of those earnings. The Adis R&D Insight database reported analysts' estimates of market size potential for certain drugs in the portfolios of companies tracked by analysts at Lehman Brothers for the years of this study (1995 – 2006). These sales revenue projections consider each drug's expected launch date, the time remaining until the patent expires, the various geographic regions in which the drug will be distributed, and the various partners licensing or distributing the drug. Market size is measured as projected sales revenue in the peak year (in $US).

Likelihood that a Product is Launched
The likelihood that a product is launched is a dichotomous variable coded as 1 if the product has launched and 0 if the product has been discontinued without launch. If the drug has not yet been launched for any indication and has not been discontinued for all indications, the drug is considered to be still under development (i.e., the development outcome has not yet been decided) and the value for this variable is missing. A discontinued drug is any drug having a status in the Adis database of Discontinued, No Development Reported, Suspended, or Withdrawn.
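
The coding rule can be summarized in a short sketch. The "Launched" status label and the column names below are assumptions for illustration; only the four discontinued statuses are taken from the description above.

    import numpy as np
    import pandas as pd

    DISCONTINUED = {"Discontinued", "No Development Reported", "Suspended", "Withdrawn"}

    def launch_outcome(statuses):
        """Code one drug's outcome from its per-indication development statuses."""
        values = set(statuses)
        if "Launched" in values:               # launched for at least one indication (assumed label)
            return 1
        if values and values <= DISCONTINUED:  # discontinued for every indication
            return 0
        return np.nan                          # still under development: outcome undecided

    # Hypothetical per-indication status records.
    status = pd.DataFrame({
        "drug": ["A", "A", "B", "C"],
        "status": ["Launched", "Discontinued", "Suspended", "Phase II Clinical"],
    })
    launched = status.groupby("drug")["status"].apply(launch_outcome)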

Number of Partners
For each drug that involved collaboration, the database lists the partner organizations and identifies whether they are originating companies or licensing companies. The partners may be either pharmaceutical firms or private organizations such as research hospitals or universities. The count of organizations listed as originating companies for a drug is the number of development (or originating) partners for that drug. The count of organizations listed as licensing companies for a drug is the number of licensing partners for that drug.
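
A hedged sketch of the partner counts follows; the column names and role labels are hypothetical stand-ins for the database's originator and licensee designations.

    import pandas as pd

    # Hypothetical drug-partner listing; the database identifies each partner's
    # role for every collaboratively developed drug (labels here are illustrative).
    partners = pd.DataFrame({
        "drug": ["A", "A", "A", "B"],
        "partner": ["Firm1", "University1", "Firm2", "Firm3"],
        "role": ["Originator", "Originator", "Licensee", "Licensee"],
    })

    counts = (partners.pivot_table(index="drug", columns="role", values="partner",
                                   aggfunc=pd.Series.nunique, fill_value=0)
              .rename(columns={"Originator": "n_development_partners",
                               "Licensee": "n_licensing_partners"}))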

Control Variables
Other variables are included in the analysis to control for possible alternative explanations for the hypothesized relationships.

Firm Size
The relationship of firm size to concepts important to the likelihood of launching a drug and to expecting sizeable revenue has been noted in numerous studies. For example, large firms might have a higher likelihood of success because they may be better able to afford the specialized equipment that is often required by different therapeutic categories (Graves & Langowitz, 1993). Larger firms may have larger chemical libraries that serve as a source of advantage in generating more viable drug candidates for the development process (Thomke & Kuemmerle, 2002). Economies of scale may favor large firms, but their size may also make them more subject to the effects of inertia (Hauser et al., 2006). Small firms are associated with more innovative products and large firms are associated with less innovative products (Kotabe & Swan, 1995). Firm size is measured as the number of employees.


Firm Age
Because experience accumulates over time, older firms will have had more time to build knowledge than younger firms. Firm age has been linked to a firm's ability to innovate (Hauser et al., 2006). Age in alliances has been found to influence performance in cooperative development (Deeds & Rothaermel, 2003). Firm age is measured as the years since the firm's founding date, or date of incorporation when the founding date is not available.

R&D Intensity
Firms with a high level of drug development activity might have a higher likelihood of launch or stronger candidates for high-revenue, blockbuster drugs not because they are accumulating knowledge and building competences in particular therapeutic categories but because their higher expenditures for R&D include higher salaries that enable them to attract the best scientists (Henderson & Cockburn, 1994). R&D intensity is measured on an annual basis as the firm's R&D expenditures for the year divided by the annual sales revenue. Calculated in this manner, this variable indicates those firms that allocate a relatively greater proportion of their revenues to R&D efforts.

Number of Drugs in the Pipeline
Research has found that R&D productivity is subject to economies of both scale and scope (Henderson & Cockburn, 1996). The number of drugs in the firm's pipeline is a count including all drugs under development.
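
The firm-level controls reduce to a few simple calculations. The sketch below illustrates them with made-up firm records and hypothetical column names; as in the tables that follow, firm size and firm age enter the models as natural logs.

    import numpy as np
    import pandas as pd

    # Hypothetical firm-level records; real values would come from the firm data
    # merged onto each drug (all column names here are illustrative).
    firms = pd.DataFrame({
        "firm": ["FirmX", "FirmY"],
        "employees": [45000, 800],
        "founded": [1901, 1996],
        "year": [2005, 2005],
        "rd_expense": [5200.0, 310.0],   # annual R&D expenditures
        "sales": [48000.0, 120.0],       # annual sales revenue
        "n_pipeline": [180, 12],         # count of drugs under development
    })

    firms["ln_firm_size"] = np.log(firms["employees"])              # natural log of employees
    firms["ln_firm_age"] = np.log(firms["year"] - firms["founded"])  # natural log of age in years
    firms["rd_intensity"] = firms["rd_expense"] / firms["sales"]     # R&D spending scaled by revenue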

ANALYSIS AND RESULTS

The data have a hierarchical or multi-level structure since each product is associated with a firm. Individual product observations within the same firm are subject to common firm effects and, therefore, may not be independent. If not taken into account, dependence among individual observations can lead to misestimated standard errors in the statistical analysis. Hierarchical linear modeling helps resolve this problem by incorporating a unique random effect for each organizational unit and taking the variability in these random effects into account in estimating the standard errors (Raudenbush & Bryk, 2002). The hierarchical linear modeling estimates for this study were computed using HLM 6.03.
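
The estimates reported below were produced with HLM 6.03. As a rough illustration of the same random-intercept idea, the following sketch fits a two-level linear mixed model for the continuous market-size outcome using statsmodels; the data file and column names are hypothetical, and this is not the authors' code or software.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical drug-level file: one row per drug with firm-level controls and
    # product-level predictors (column names are assumptions, not the Adis layout).
    df = pd.read_csv("drug_level_data.csv")

    # A random intercept for each firm captures the nesting of drugs within companies,
    # analogous to a two-level model with drugs at level 1 and firms at level 2.
    model = smf.mixedlm(
        "market_size ~ ln_firm_size + ln_firm_age + rd_intensity + n_pipeline"
        " + n_development_partners",
        data=df,
        groups=df["firm"],
    )
    print(model.fit().summary())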

TABLE 1
HIERARCHICAL LINEAR MODELING ESTIMATES FOR HYPOTHESES 1a AND 1b

Dependent variable (Poisson distribution):   Number of Knowledge Categories (H1a)   Number of Product Uses (H1b)
                                             Model 1            Model 2             Model 1            Model 2
Firm-Level (Level 2) Controls:
  Intercept (β0)                             0.308** (0.967)    0.237* (0.095)      0.308** (0.967)    0.612** (0.174)
  Firm Size (a) (employees)                  0.009 (0.027)      0.009 (0.027)       0.009 (0.027)      -0.013 (0.045)
  Firm Age (a) (years)                       -0.045 (0.051)     -0.042 (0.050)      -0.045 (0.051)     -0.044 (0.088)
  R&D Intensity                              -0.008 (0.006)     -0.008 (0.006)      -0.008 (0.006)     -0.016 (0.010)
  Pipeline (products under development)      -0.000 (0.000)     -0.000 (0.000)      -0.000 (0.000)     0.000 (0.000)
Product-Level (Level 1) Independent Variable:
  Number of Development Partners                                0.059** (0.019)                        0.061 (0.040)

(a) The natural log of firm size and firm age are the variables used in the analysis. HLM2 final estimates with robust standard errors. Unstandardized coefficients are reported; standard errors are in parentheses. The number of level 1 units (drugs) = 7,167 and the number of level 2 units (companies) = 86.
† p < 0.10; * p < 0.05; ** p < 0.01; *** p < 0.001

Because the measures of product scope that serve as dependent variables in hypotheses 1a and 1b are count measures, the analysis of these hypotheses uses a Poisson distribution. Table 1 reports the results of the analysis. The firm-level control variables are entered in Model 1, and the product-level independent variable is entered in Model 2. For hypothesis 1a, product scope is operationalized as the number of different anatomical-therapeutic categories that the drug development efforts tap into. The positive and significant coefficient for the relationship between the number of originating partners and the number of different categories (p < 0.01) offers support for hypothesis 1a. In hypothesis 1b, product scope is operationalized as the number of different conditions for which the drug is investigated as a possible treatment. This analysis does not provide support for hypothesis 1b, since the relationship between the number of originating partners and the number of different conditions investigated is not significant.
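
Because the exact random-effects Poisson specification from HLM 6.03 is not reproduced here, the sketch below approximates the count-outcome analysis with a Poisson GEE that allows for within-firm correlation. It is an illustrative substitute rather than the estimator used in Table 1, and the column names are hypothetical.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("drug_level_data.csv")   # hypothetical drug-level dataset

    # Poisson regression for a count outcome (number of knowledge categories),
    # with within-firm clustering handled by generalized estimating equations.
    poisson_gee = smf.gee(
        "n_knowledge_categories ~ ln_firm_size + ln_firm_age + rd_intensity"
        " + n_pipeline + n_development_partners",
        groups="firm",
        data=df,
        family=sm.families.Poisson(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    print(poisson_gee.fit().summary())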

TABLE 2
HIERARCHICAL LINEAR MODELING ESTIMATES FOR HYPOTHESES 2a AND 2b

Dependent variable: Projected Market Size (Revenues)

                                              Model 1                Model 2                Model 3               Model 4               Model 5
Firm-Level (Level 2) Controls:
  Intercept (β0)                              1971.250*** (524.445)  2229.692*** (555.306)  454.545 (544.960)     510.396 (395.049)     484.150 (462.478)
  Firm Size (a) (employees)                   251.716 (208.202)      240.992 (209.687)      146.022 (184.314)     145.123 (167.140)     116.148 (159.419)
  Firm Age (a) (years)                        -662.392 (275.160)     -685.946* (276.214)    -116.470 (270.803)    -147.374 (208.476)    -83.468 (228.899)
  R&D Intensity                               -71.229† (39.858)      -74.024† (39.922)      -9.750 (35.046)       -9.546 (24.154)       -1.245 (27.912)
  Pipeline (products under development)       1.129 (2.126)          1.408 (2.133)          0.896 (1.893)         0.323 (1.933)         0.889 (1.526)
Product-Level (Level 1) Independent Variables:
  H2a: Number of Development Partners                                -195.308** (68.686)                                                -154.676† (79.805)
  H2b: Number of Knowledge Categories                                                       385.627** (121.524)                         71.705 (145.231)
  H2b: Number of Alternative Product Uses                                                                         190.678*** (37.631)   178.057*** (42.172)

(a) The natural log of firm size and firm age are the variables used in the analysis. HLM2 final estimates with robust standard errors. Unstandardized coefficients are reported; standard errors are in parentheses. The number of level 1 units (drugs) = 920 and the number of level 2 units (companies) = 71.
† p < 0.10; * p < 0.05; ** p < 0.01; *** p < 0.001

The dependent variable for hypotheses 2a and 2b is projected market size, a continuous variable. Table 2 reports the results of this analysis, with the firm-level control variables presented in Model 1 and the product-level independent variables entered in Models 2 – 5. As noted in the table, the sample size for this part of the analysis is smaller than for the tests of the other dependent variables because the investment analysts' assessment of market size was not provided for all drugs in the dataset. Hypothesis 2a predicted a positive relationship between the number of originating partners and projected market size. However, the negative coefficient in Model 2 (p < 0.01) and the moderately significant negative coefficient in the full Model 5 (p < 0.10) suggest a negative relationship between these variables. Both operationalizations of product scope were used to test hypothesis 2b. While the number of different anatomical-therapeutic categories is significant in Model 3 (p < 0.01), it is not significant in the full Model 5. The number of different indications that the drug is investigated to treat is significant both in its individual Model 4 (p < 0.001) and in the full Model 5 (p < 0.001). These results offer partial support for hypothesis 2b.

The dependent variable for hypothesis 3 is a dichotomous variable that has a value of 1 if the product has been launched for any of its indications and a value of 0 if the product has been discontinued for all indications. The results of the analysis, shown in Table 3, show no significant relationship between the number of originating partners and the likelihood of the product launching.
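
For the dichotomous launch outcome, the same illustrative approach applies with a Binomial family. Again, this is a hedged approximation to the hierarchical model reported in Table 3, not the authors' estimator, and the column names are hypothetical.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("drug_level_data.csv")   # hypothetical drug-level dataset

    # Drugs still under development have a missing launch outcome and are dropped.
    decided = df.dropna(subset=["launched"])

    launch_gee = smf.gee(
        "launched ~ ln_firm_size + ln_firm_age + rd_intensity + n_pipeline"
        " + n_development_partners",
        groups="firm",
        data=decided,
        family=sm.families.Binomial(),
    )
    print(launch_gee.fit().summary())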

TABLE 3
HIERARCHICAL LINEAR MODELING ESTIMATES FOR HYPOTHESIS 3

Dependent variable (dichotomous): Likelihood of Product Launch

                                              Model 1               Model 2
Firm-Level (Level 2) Controls:
  Intercept (β0)                              -1.717*** (0.479)     -1.825*** (0.471)
  Firm Size (a) (employees)                   0.241 (0.185)         0.232 (0.168)
  Firm Age (a) (years)                        0.220 (0.300)         0.181 (0.282)
  R&D Intensity                               0.039 (0.028)         0.034 (0.026)
  Pipeline (products under development)       -0.008*** (0.002)     -0.007*** (0.002)
Product-Level (Level 1) Independent Variables:
  Number of Development Partners                                    0.154 (0.116)

(a) The natural log of firm size and firm age are the variables used in the analysis. HLM2 final estimates with robust standard errors. Unstandardized coefficients are reported; standard errors are in parentheses. The number of level 1 units (drugs) = 5,493 and the number of level 2 units (companies) = 85.
† p < 0.10; * p < 0.05; ** p < 0.01; *** p < 0.001

Ad Hoc Analysis
Researchers have suggested that certain alliances are primarily exploratory while others are primarily exploitative (Rothaermel & Deeds, 2004). Using this distinction, the number of originating partners measured in this study could be considered principally exploratory. A separate group of partners – those who participate in licensing arrangements with the originators – might tend to be more exploitative in purpose. Therefore, ad hoc analyses were performed to consider the number of licensing partners as an independent variable related to (1) projected market size and (2) likelihood of launch. Licensees may expand the reach of the product by, for instance, serving specific geographic regions with which the originating partners have limited familiarity or by putting more resources behind the launch and more quickly scaling up distribution. The results, shown in Table 4, indicate that the number of licensing partners is positively related to projected market size, with a positive and significant coefficient in Model 1 (p < 0.01) when the number of licensing partners is tested separately and a moderately significant coefficient in Model 2 (p < 0.10) when it is tested in the presence of the other independent variables analyzed earlier as predictors of market size. When the likelihood of launch is the dependent variable, the coefficient for the number of licensing partners is positive and significant (p < 0.001).

TABLE 4
RESULTS OF AD HOC REGRESSION ANALYSES

                                              Projected Market Size (Revenues)           Likelihood of Product Launch
                                              Model 1               Model 2               Model 1             Model 2
Firm-Level (Level 2) Controls:
  Intercept (β0)                              1295.367*** (373.766) 533.658 (344.448)     -2.535*** (0.528)   -2.539*** (0.482)
  Firm Size (a) (employees)                   80.483 (134.634)      36.848 (128.242)      0.489** (0.182)     0.495** (0.165)
  Firm Age (a) (years)                        -439.934* (187.052)   -133.141 (167.094)    0.112 (0.325)       0.076 (0.289)
  R&D Intensity                               -73.013* (35.484)     -19.372 (26.765)      0.057† (0.031)      0.058* (0.028)
  Pipeline (products under development)       3.597* (1.417)        2.152* (1.081)        -0.006* (0.003)     -0.007** (0.002)
Product-Level (Level 1) Independent Variables:
  Number of Licensing Partners                232.963*** (55.189)   117.911† (62.513)     1.417*** (0.092)    1.101*** (0.076)
  Number of Development Partners                                    -121.899† (63.755)                        0.230† (0.116)
  Project Scope: Number of Different
    Knowledge Categories                                            60.815 (116.222)
  Project Scope: Number of Alternative
    Product Uses                                                    135.809** (46.899)

(a) The natural log of firm size and firm age are the variables used in the analysis. HLM2 final estimates with robust standard errors. Unstandardized coefficients are reported; standard errors are in parentheses. For models with Market Size as the DV, the number of level 1 units (drugs) = 920 and the number of level 2 units (companies) = 71. For models with Launch as the DV, the number of level 1 units (drugs) = 5,493 and the number of level 2 units (companies) = 85.
† p < 0.10; * p < 0.05; ** p < 0.01; *** p < 0.001

DISCUSSION AND CONCLUSION

This study investigates the relationship between the number of partners in cooperative new product development and the scope of the development project, the projected market size for the product, and the likelihood the product will be launched. While cooperation can increase the physical and knowledge resources available for the development effort, it may also increase complexity due to the coordination required and the potential exposure of proprietary knowledge. Therefore, deepening our understanding of how the number of development partners might be associated with various dimensions and outcomes of individual product development initiatives can contribute to the effective management of product development.

The results of the tests of H1a and H1b suggest that the number of development partners is associated with project scope when scope is measured as the number of knowledge categories (H1a) that underlie the development initiative. This result is consistent with the idea that different partners bring different bases of knowledge and experience to the collaboration. The result indicating that the number of different product uses being tested is not significantly related to the number of development partners may indicate that firms do not necessarily need to have partners in order to identify and test multiple uses for a product. The different knowledge bases that can be offered by multiple partners, for instance, might be needed for developing complex or innovative products, but a firm may be able to test multiple alternative uses alone. These results are also consistent with the idea that collaborative efforts may be circumscribed and specific, carefully identifying the contributions expected from the partners and the uses to which those contributions are to be applied. Partners bring diverse knowledge bases to apply to specific product uses. For example, research has found that, at various points in the relationship between two partners, they will write more restrictive, detailed documents governing the relationship (Li, Eden, Hitt & Ireland, 2008).

The tests of H2a and H2b considered whether the number of development partners and project scope are related to projected market size, when market size is measured in revenues. The moderately significant and negative relationship between the number of partners and projected market size indicates that market size increases as the number of partners decreases. This finding is consistent with the logic that a greater number of partners increases the complexity of coordination efforts, which may slow down the pace of development for some projects, limiting the revenue potential especially for products using a patented technology. The two measures of project scope taken together offer additional insight. While the number of underlying knowledge categories is not significant, the number of different product uses is significantly related to projected market size. The market sees and responds to the distinct applications for the product, with the product having wider appeal to different customers with different needs. The underlying knowledge categories required to develop those different uses would not necessarily be known or understood by the customers. The knowledge from different categories could prove to be either completely redundant and not incrementally useful or so distant and disconnected that synergies cannot be captured.

The result of the test of H3 indicates that the likelihood of a product being launched is not related to the number of partners involved in developing the product. This result is consistent with the idea that managing alliances is complex and, therefore, some can be managed productively while others may not be able to coordinate efforts effectively to yield a product. This result is also consistent with the hit-rate argument about innovation, which holds that firms with more products on the market do not have higher success rates than other firms; they merely make more attempts, or take more turns at bat (Morris & Kuratko, 2002). Firms with many partners may not have any greater likelihood of success than firms with fewer partners or those acting alone. This result may also be indicative of a strategy of development partners pursuing multiple projects, then canceling those that show less promise and continuing with those that have the greatest potential. Thus, a higher number of development partners could be associated with more attempts but not with an improved likelihood of launch for any particular product. In this same vein, Rothaermel and Deeds (2004) suggested that products on the market (i.e., products launched) are predicted by exploitation alliances, which would be licensing partners rather than development partners since development is interpreted as more exploratory. Based on this observation from prior research, the number of licensing partners was included in an ad hoc analysis.
The ad hoc analysis that added the number of licensing partners as a variable explaining projected market size reports a moderately significant relationship between these variables. This finding is consistent with research indicating that, when performance is measured in terms of new product development, firms focusing their alliance strategy on exploitation outperform those focusing on exploration (Rothaermel, 2001). An interesting comparison here, though, is that exploitation is often associated with incremental rather than radical innovation, suggesting products that are not dramatic improvements beyond what is already on the market (Atuahene-Gima, 2005). If customers do not see sufficient reason to switch, incremental improvements may not attract a sizeable market. Consequently, are the licensing partners in-licensing radical or incremental innovations that are associated with these higher projected market sizes? These points suggest that exploration/exploitation may be measured at the development stage and again at the sales and distribution stage. Do development partners undertake exploration to develop products that will attract many new customers, followed then by licensing partners exploiting competencies in in-licensing technology or distribution skills?

Limitations, Implications, and Future Research
A discussion of the contributions and implications of this research must acknowledge its limitations. First, the study's focus on a single industry may limit the generalizability of the results. However, concentrating on a single industry serves to control for industry-specific effects such as patenting strategies, regulatory environment, phases of development, and knowledge categories such as the anatomical-therapeutic categories that, in this case, can be consistently applied across all pharmaceutical firms. Second, although the dataset includes development projects existing during the span of years 1995 - 2006, the variables are measures specific to individual drugs rather than measures taken at multiple points in time. Therefore, the analyses can test only for correlations and not for causal relationships. We can, for example, hypothesize that the number of licensing partners would contribute to a larger projected market size. However, it could also be the case that a larger projected market size attracts a larger number of licensing partners, and those partnerships form because the revenue expectations are sufficient to support that larger number of partners. Third, the investment analysts' estimates of market size are not available for all drugs. Further, these estimates include only revenues and not profits, as it is typical for companies not to reveal the costs or expected returns from individual projects. However, profit projections could differ greatly for two products that are expected to generate similar levels of total revenue, and such differences could affect launch decisions and collaboration strategies.

The results of this study have implications for both the theory and the practice of management. The number of originating partners is positively related to the number of knowledge categories, suggesting that firms do use alliances as a source of knowledge. However, the number of alternative product uses, rather than the number of knowledge categories, is positively associated with projected market size. This finding suggests that while the strategy of cooperative development may generate products with sizeable revenue projections, many firms may also choose strategies such as acquiring a firm with the necessary knowledge or developing the requisite skills internally by hiring employees. Firms often acquire companies they have partnered with in the past, having used the partnership to test the potential for success of an acquisition. Future research could address how companies strike an optimal balance between projects they pursue independently and those they pursue cooperatively. What characteristics of the projects or the firms determine this optimal balance?

The number of development partners is negatively associated with projected market size and demonstrates no association with likelihood of launch in this dataset. Taken together with the finding that the number of alternative product uses is positively associated with projected market size, these results are consistent with research indicating that firms' product development efforts benefit not only from breadth of knowledge, which might be obtained by increasing the number of partners, but also from depth of knowledge, which could be developed independently as firms exploit synergy among products in the same category (Sorescu, Chandy & Prabhu, 2003).
The knowledge complementarity or redundancy that has been linked to product creativity (Rindfleisch & Moorman, 2001) can be obtained by working with a smaller number of firms, perhaps those with competency in the same categories. Further, this study has considered the number of development partners as a variable explaining the likelihood of launch of products in general. It could be the case that products with particular characteristics will benefit more from a larger number of partners. The finding that the number of licensing partners was positively related to projected market size suggests that there is substantial money at stake in forming these alliances. Finding appropriate licensing partners and setting appropriate fees and stipulations will be important to the successful realization of the revenues. Existing research has found that firms’ general alliance experience has a positive relationship with project outcomes while partner-specific experience has a negative relationship (Hoang & Rothaermel, 2005). This idea, in conjunction with the results of the present study, suggests that future research could consider how firms optimally use a large number of partners while relying little on building partner-specific experience. As the number of partners increases, do firms use a mix of prior partners and new partners in efforts to gain from their generalized alliance experience rather than relying repeatedly on known partners?


NPD can shape new industries and drive the profitability of individual firms. Because new products are developed to satisfy unmet needs in the market, they have the potential to make a difference both for customers and for the firm that successfully navigates the complexities of the NPD process. The NPD process can be long, particularly so in the biopharmaceutical industry, and can require heavy investment today for an uncertain payoff well into the future. Understanding which factors are related to success in product development can therefore be a source of competitive advantage for firms that repeatedly undertake new product development. The present study contributes to this understanding by examining how cooperative development shapes the outcomes of NPD efforts.

REFERENCES

Allen, K.R. (2003). Bringing New Technology to Market. Upper Saddle River, NJ: Prentice Hall.

Alves, J., Marques, M.J., Saur, I., & Marques, P. (2007). Creativity and Innovation through Multidisciplinary and Multisectoral Cooperation. Creativity and Innovation Management, 16, (1), 27-34.

Anderson, E.G., & Joglekar, N.R. (2005). A Hierarchical Product Development Planning Framework. Production & Operations Management, 14, (3), 344-361.

Atuahene-Gima, K. (2005). Resolving the Capability-Rigidity Paradox in New Product Innovation. Journal of Marketing, 69, October, 61-83.

Berends, H., van der Bij, H., Debackere, K., & Weggeman, M. (2006). Knowledge Sharing Mechanisms in Industrial Research. R&D Management, 36, (1), 85-95.

Bierly, P.E., III, & Coombs, J.E. (2004). Equity Alliances, Stages of Product Development, and Alliance Instability. Journal of Engineering and Technology Management, 21, 191-214.

Cockburn, I.M., & Henderson, R.M. (2001). Scale and Scope in Drug Development: Unpacking the Advantages of Size in Pharmaceutical Research. Journal of Health Economics, 20, 1033-1057.

Danzon, P.M., Nicholson, S., & Pereira, N.S. (2005). Productivity in Pharmaceutical-Biotechnology R&D: The role of Experience and Alliances. Journal of Health Economics, 24, 317-339.

Davila, A., Foster, G., & Li, M. (2009). Reasons for Management Control Systems Adoption: Insights from Product Development Systems Choice by Early-Stage Entrepreneurial Companies. Accounting, Organizations & Society, 34, (3/4), 322-347.

Ding, M., & Eliashberg, J. (2002). Structuring the New Product Development Pipeline. Management Science, 48, (3), 343-363.

Graves, S.B., & Langowitz, N.S. (1993). Innovative Productivity and Returns to Scale in the Pharmaceutical Industry. Strategic Management Journal, 14, (8), 593-605.

Hauser, J., Tellis, G.J., & Griffin, A. (2006). Research on Innovation: A Review and Agenda for Marketing Science. Marketing Science, 25, (6), 687-717.

Henderson, R., & Cockburn, I. (1994). Measuring Competence? Exploring Firm Effects in Pharmaceutical Research. Strategic Management Journal, 15, (Winter Special Issue), 63-84.

Henderson, R., & Cockburn, I. (1996). Scale, Scope, and Spillovers: The Determinants of Research Productivity in Drug Discovery. RAND Journal of Economics, 27, (1), 32-59.

Hoang, H., & Rothaermel, F.T. (2005). The Effect of General and Partner-Specific Alliance Experience on Joint R&D Project Performance. Academy of Management Journal, 48, (2), 332-345.

Kotabe, M., & Swan, K.S. (1995). The Role of Strategic Alliances in High-Technology New Product Development. Strategic Management Journal, 16, (8), 621-636.

Li, D., Eden, L., Hitt, M.A., & Ireland, R.D. (2008). Friends, Acquaintances, or Strangers? Partner Selection in R&D Alliances. Academy of Management Journal, 51, (2), 315-334.

Lin, B.W., & Chen, J.S. (2005). Corporate Technology Portfolios and R&D Performance Measures: A Study of Technology Intensive Firms. R&D Management, 35, (2), 157-170.

Morris, M.H., & Kuratko, D.F. (2002). Corporate Entrepreneurship. Fort Worth, TX: Harcourt.

Mowery, D.C., Oxley, J.E., & Silverman, B.S. (1996). Strategic Alliances and Interfirm Knowledge Transfer. Strategic Management Journal, 17, Winter Special Issue, 77-91.

Nerkar, A., & Roberts, P.W. (2004). Technological and Product-Market Experience and the Success of New Product Introductions in the Pharmaceutical Industry. Strategic Management Journal, 25, 779-799.

Powell, W.W. (1998). Learning from Collaboration: Knowledge and Networks in the Biotechnology and Pharmaceutical Industries. California Management Review, 40, (3), 228-240.

Raudenbush, S.W., & Bryk, A. (2002). Hierarchical Linear Models: Applications and Data Analysis Methods (2nd ed.). Thousand Oaks, CA: Sage Publications.

Rindfleisch, A., & Moorman, C. (2001). The Acquisition and Utilization of Information in New Product Alliances: A Strength-of-Ties Perspective. Journal of Marketing, 65, (2), 1-18.

Roberts, P.W., & McEvily, S. (2005). Product-Line Expansion and Resource Cannibalization. Journal of Economic Behavior & Organization, 57, 49-70.

Rothaermel, F.T. (2001). Incumbent’s Advantage through Exploiting Complementary Assets via Interfirm Cooperation. Strategic Management Journal, 22, 687-699.

Rothaermel, F.T., & Deeds, D.L. (2004). Exploration and Exploitation Alliances in Biotechnology: A System of New Product Development. Strategic Management Journal, 25, (3), 201-221.

Rothaermel, F.T., & Deeds, D.L. (2006). Alliance Type, Alliance Experience and Alliance Management Capability in High-Technology Ventures. Journal of Business Venturing, 21, 429-460.

Sorescu, A.B., Chandy, R.K., & Prabhu, J.C. (2003). Sources and Financial Consequences of Radical Innovation: Insights from Pharmaceuticals. Journal of Marketing, 67, October, 82-102.

Stuart, T.E. (2000). Interorganizational Alliances and the Performance of Firms: A Study of Growth and Innovation Rates in a High-Technology Industry. Strategic Management Journal, 21, (8), 791-811.


Thomke, S., & Kuemmerle, W. (2002). Asset Accumulation, Interdependence and Technological Change: Evidence from Pharmaceutical Drug Discovery. Strategic Management Journal, 23, 619-635.

World Health Organization Collaborating Centre for Drug Statistics Methodology. (2013). The Anatomical Therapeutic Chemical (ATC)/ Defined Daily Dose (DDD) Classification System Index. Retrieved from the Web January 30, 2013. http://www.whocc.no/atc_ddd_index/


Networking: A Critical Success Factor for Entrepreneurship

Moraima De Hoyos-Ruperto University of Puerto Rico, Mayagüez Campus

José M. Romaguera University of Puerto Rico, Mayagüez Campus

Bo Carlsson Case Western Reserve University

Kalle Lyytinen Case Western Reserve University

This study explores how individual and inter-organizational networking, as mediators, may foster desired entrepreneurial success. A quantitative study using Partial Least Squares (PLS) was conducted to determine: How and to what extent do systemic and individual factors—mediated by inter-organizational and individual social networking activities—impact the likelihood of entrepreneurial success? To illustrate this, we investigate Puerto Rico’s (P.R.) persistently stagnant entrepreneurial environment. Our findings reveal that Puerto Rican entrepreneurs are not using their networks efficiently to overcome the inadequate institutional structure. Therefore, a better interconnected entrepreneurial ecosystem must be designed, and entrepreneurs must use their networks more effectively.

INTRODUCTION

Entrepreneurship scholars hold very different beliefs about the nature of entrepreneurship activities (Gartner, 1990) and about explanations of its role in desired economic progress (Welter & Smallbone, 2011). Moreover, since entrepreneurship is a complex and dynamic phenomenon (Gartner, Shaver, Carter, & Reynolds, 2004), different views exist regarding the factors that really spur it (Acs & Szerb, 2010). Hence, researchers must clearly establish the limitations and arguments upon which they are basing their study (Shane & Venkataraman, 2000). Advanced studies on entrepreneurship need to explore the interaction between external factors, such as entrepreneurial opportunities, education, and the national mindset toward entrepreneurship, and personal factors, such as the entrepreneur’s social competence and self-efficacy, and the influence of these factors on entrepreneurial performance (Welter & Smallbone, 2011).

This research focuses on entrepreneurs doing business in Puerto Rico (P.R.) because, among high-income countries, P.R., at 3.1 percent, has one of the world’s lowest rates of early-stage entrepreneurial activity despite the government’s two-decade effort to spur it, according to Bosma, Jones, Autio, and Levie (2008). Long reliant on the presence of multinational corporations to sustain the economy and historically lax in encouraging local business development, P.R. was hard hit by the elimination in 2006 of the tax exemptions that had incentivized U.S. subsidiaries to establish operations on the island. Despite several attempts to jumpstart the economy in the wake of their departure, reports from worldwide organizations such as the Global Entrepreneurship Monitor (GEM) (Bosma et al., 2008), the World Economic Forum (WEF) (Schwab, 2012), and the World Bank (2013) certify the challenging environment for entrepreneurship in P.R. Experts blame structural problems rather than a lack of entrepreneurial spirit for entrepreneurship’s failure to flourish in P.R. (Aponte, 2002b).

Based on qualitative research by De Hoyos-Ruperto, Romaguera, Carlsson, and Perelli (2012), this paper theorizes that individual-level factors, including entrepreneur self-efficacy (SE) and social competence (SC), and systemic factors such as entrepreneurial education (EDU), opportunities (OPP), and national mindset (MIND) act as sourcing mechanisms that can predict entrepreneurial success (ES), with the relationship mediated by inter-organizational networks (ONETW) and individual social networking activities (INETW). Considering the world economic crisis, environmental hostility (HOST) is used as a control variable to provide a possible alternative impacting factor and explanation for success. Our data suggest that systemic factors as a whole are not working as suitable sources of the complementary relationships needed to create an environment conducive to successful entrepreneurship. Entrepreneurial advocacy organizations are not well interconnected with one another and so do not complement entrepreneurs in addressing their challenges. Meanwhile, entrepreneurs are not efficiently using their networks to overcome the inadequate institutional structure. Therefore, a better interconnected entrepreneurial ecosystem and more effective individual social networking may be necessary for both practitioners and policy makers to design a successful entrepreneurial environment.

Theoretical Background, Conceptual Model and Hypotheses

Systemic factors such as entrepreneurial education (Levie & Autio, 2007), opportunities (Shane & Venkataraman, 2000), and national mindset toward entrepreneurship (Casson, 2003), and individual factors such as social competence (Baron & Markman, 2000) and perceived self-efficacy (Bandura, 1997) can positively or negatively influence the overall entrepreneurship success of a nation. However, while these factors perform as sourcing mechanisms, they are mediated by other factors such as inter-organizational network activities (Butler & Hansen, 1991) and/or the entrepreneur’s social networking activities (Hoang & Antoncic, 2003; Johannisson, 1998), as our conceptual research model in Figure 1 below shows.

FIGURE 1 CONCEPTUAL QUANTITATIVE RESEARCH MODEL

To address the abovementioned concepts, an empirical study with entrepreneurs was designed to examine the following question: How and to what extent do systemic and individual factors—mediated by inter-organizational and individual social networking activities—impact the likelihood of entrepreneurial success?

Entrepreneurial Success as the End Product
Several authors have remarked on the importance of using multiple performance dimensions (Venkatraman & Ramanujam, 1986). Therefore, this study uses both growth measurements, such as sales growth rates and increases in the number of employees, and profit measurements, such as net profit margin and financial condition compared with three years prior, drawn from a primary data source, to assess entrepreneurial success based on firm performance (the questionnaire and construct definition table are available upon request).
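To make the formative performance measure concrete, the sketch below shows one way such growth and profit indicators could be standardized and combined into a single firm-performance index. It is a minimal illustration in Python with made-up respondent values and equal indicator weights; the study itself derives indicator weights within PLS, and the column names are hypothetical rather than the survey's actual item codes.

```python
import pandas as pd

# Hypothetical respondent-level indicators (names are illustrative, not the
# survey's actual item codes): sales growth rate, change in employees,
# net profit margin, and self-rated financial condition vs. three years ago.
df = pd.DataFrame({
    "sales_growth":    [0.10, 0.25, -0.05, 0.40],
    "employee_change": [2, 5, 0, 8],
    "net_margin":      [0.08, 0.12, 0.03, 0.15],
    "fin_condition":   [3, 4, 2, 5],   # 1-5 Likert-type rating
})

# Standardize each indicator so scales are comparable, then combine with
# equal weights into a formative composite; PLS would instead estimate
# indicator weights empirically.
z = (df - df.mean()) / df.std(ddof=0)
df["performance_index"] = z.mean(axis=1)
print(df["performance_index"])
```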

Systemic Factors as Sources of Entrepreneurial Success

The Role of Entrepreneurial Opportunities in Entrepreneurial Success
The literature underscores the importance of recognizing and exploiting opportunities, as well as a willingness to accept them, in achieving entrepreneurial success (Shane, 2003). Opportunities, however, are not always perceived in the same way; therefore, how they are presented, the people they are presented to, and how those people take advantage of them are crucial (Shane & Venkataraman, 2000). Thus, a positive perception of entrepreneurial opportunities is a necessary condition for entrepreneurial success. Therefore, we propose:

Hypothesis 1. Perceived entrepreneurial opportunities will positively impact entrepreneurial success (1a); inter-organizational networking (1b); and, individual social networking (1c), when controlling for environmental hostility.

The Role of National Mindset in Entrepreneurship and Entrepreneurial Success
In 2004 the European Commission defined entrepreneurship as the mindset and process needed to create and develop economic activity within new or existing organizations. A national mindset may determine the industrial structure, the expertise developed, and the likelihood of a successful venture (Guiso, Sapienza, & Zingales, 2006). Hence, it is expected that a country with an adequate entrepreneurial mindset embraces an individual sense of responsibility about what happens around them and also cultivates a collaborative and cohesive environment as part of its entrepreneurial strategy (Aldrich & Zimmer, 1986). Therefore, we propose:

Hypothesis 2. National mindset toward entrepreneurship will directly impact entrepreneurial success, when controlling for environmental hostility.

The Role of Entrepreneurial Education in the Likelihood of Entrepreneurial Success
Entrepreneurial education is a cornerstone of entrepreneurial success (Ronstadt, 1987), as the educational system’s structure may influence national development (Todaro, 1981). The World Economic Forum (2009, p. 7) highlighted the importance of entrepreneurship education for entrepreneurial development: “…education is one of the most important foundations for economic development, entrepreneurship is a major driver of innovation and economic growth…”. Unfortunately, the 2010 GEM report points out that the content of entrepreneurship education is inadequate in most countries (Corduras-Martinez, Levie, Kelley, Saemundsson, & Schott, 2010). Kirby (2003) affirmed that educational systems need to focus not simply on what is taught but on how it is taught. On the other hand, Wilson, Kickul, & Marlino (2007) contend that the entrepreneurial education that leads to entrepreneurial success is one that promotes self-efficacy and self-confidence. Moreover, self-efficacy enhanced by education may impact entrepreneurial intention (Zhao, Seibert, & Hills, 2005), perceived feasibility (Peterman & Kennedy, 2003), and successful venture performance (Bandura, 1997). Consequently, we propose:


Hypothesis 3. Appropriate content of entrepreneurial education will positively impact entrepreneurial success (3a); and, individual self-efficacy (3b), when controlling for environmental hostility.

Hypothesis 3c. Self-efficacy will partially mediate the relationship between entrepreneurial education and entrepreneurial success, when controlling for environmental hostility.

Individual Factors as Sources of Entrepreneurial Success

The Role of Entrepreneurs’ Social Competence in Entrepreneurial Success
Entrepreneurs’ social competence refers to their ability to interact effectively with others and adapt to new social situations with the purpose of developing strategic relationships that leverage business opportunities and competitiveness (Baron, 2000). Baron and Markman (2003) claim that the higher an entrepreneur’s social competence, the greater his or her financial success. To operationalize the entrepreneur social competence construct, this study adopted the four dimensions used by Baron and Markman (2003): Social Perception, Social Adaptability, Expressiveness, and Self-Promotion. Consequently, we propose:

Hypothesis 4. An entrepreneur’s social competence will positively impact entrepreneurial success (4a); and, individual social networks (4b); when controlling for environmental hostility.

The Role of Entrepreneurs’ Self-Efficacy in Enhancing Entrepreneurial Success
According to Krueger and Brazeal (1994), individuals’ self-efficacy can affect venture decisions and firm performance, and Boyd and Vozikis (1994) claim that self-efficacy is fundamental to moving from entrepreneurial intention to action. However, perceived self-efficacy could be more relevant because, as Markham, Balkin, & Baron (2002) point out, individuals are motivated by their perception rather than by their objective ability. Perceived self-efficacy refers to an individual’s assessment of his or her skills and ability to carry out a task, an assessment that may differ from actual ability (Bandura, 1997). In contrast to the positive view of the aforementioned researchers, Simon, Houghton, & Aquino (1999) contend that perceived self-efficacy will negatively affect entrepreneurial outcomes because of individual overconfidence or overestimation of skills. As a result, entrepreneurs may overlook contradictory signs and information and harbor higher expectations of success. Following Simon et al.’s (1999) line of thought, we hypothesize:

Hypothesis 5. Perceived self-efficacy will negatively impact entrepreneurial success (5a); and individual social networks (5b); when controlling for environmental hostility.

The Mediator Role of Individual Social Networking and Inter-Organizational Networking
As Audretsch and Thurik (2004) mention, Thorton and Flynne (2003) and Saxenian (1994) argue that “(successful) entrepreneurial environments are characterized by thriving supportive networks that provide the institutional fabric; linking individual entrepreneurs to organized sources of learning and resources” (p. 5). Hence, individual social networking and inter-organizational strategic network activities are important to a successful startup and to an ongoing competitive advantage, as they may constrain or facilitate resource acquisition and the identification of opportunities (Beckert, 2010). For this study, the individual social networking construct represents entrepreneurs engaging in networking activities to enhance their entrepreneurial ventures (Aldrich & Zimmer, 1986). These entrepreneurial networking activities may occur with other entrepreneurs; with contacts such as relatives, friends, and acquaintances; and with entrepreneurial advocates (Birley, 1985). The aim of those networking activities is to provide assistance to entrepreneurs in the form of expert opinions and counseling, shared experiences and role models, information and resources, and support and motivation (Manning, Birley, & Norburn, 1989). Consequently, we propose:

Hypothesis 6. Individual social network activities will positively impact entrepreneurial success, when controlling for environmental hostility.

Additionally, for this research inter-organizational networking consists of formal and/or informal collaborative networking activities among entrepreneurial advocates at the public, private, and civic levels that may facilitate the entrepreneurial process from an idea generating stage, to a development stage, and later to a strategic positioning one (Butler & Hansen, 1991; Dubini & Aldrich, 1991; Uzzi, 1996). Those collaborative network activities may include alliances to improve entrepreneurial mechanisms (Audretsch & Thurik, 2004). Therefore, we propose:

Hypothesis 7. Inter-organizational network activities will positively impact entrepreneurial success, when controlling for environmental hostility.

As entrepreneurship is embedded in networks, entrepreneurs’ openness to social networks may advance or constrain access to better resources and information, as well as offer faster responses to opportunities and challenges (Klyver & Hindle, 2006). Furthermore, inter-organizational networks may facilitate or constrain the information and resources that could turn opportunities into successful ventures (Aldrich & Zimmer, 1986). Additionally, Brüderl and Preisendörfer (2000) contend that venture success is attained only if entrepreneurs make effective use of their networks. Consequently, entrepreneurs with high social competence (Manning et al., 1989) and self-efficacy (Boyd & Vozikis, 1994) are more likely to establish strategic networks that will help them overcome their limited resources and barriers, particularly barriers to information. This was confirmed by Baron and Markman (2003), who found that entrepreneurs’ social networks assist them in gaining access to strategic business contacts, but only through the effective use of their social competence. Therefore, we propose:

Hypothesis 8. Inter-organizational network activities (8a) and Individual social networking activities (8b); will partially mediate the relationship between opportunities and entrepreneurial success, when controlling for environmental hostility.

Hypothesis 8c. Individual social networking activities will indirectly mediate the relationship between social competence and entrepreneurial success, when controlling for environmental hostility.

Hypothesis 8d. Individual social networking activities will partially mediate the relationship between self-efficacy and entrepreneurial success, when controlling for environmental hostility.

Environmental Hostility as a Control Variable
In this study, environmental hostility is used as a control variable since this contextual factor may affect successful venture activities (Covin, Slevin, & Covin, 1990). Environmental hostility denotes an external force unfavorable to business resulting from radical changes, intensive regulatory burdens, and fierce rivalry among competitors, among other conditions (Covin & Slevin, 1989). As entrepreneurship is a complex task that is extremely sensitive to “habitat” (Miller, 2000), environmental hostility is expected to impact firm performance. Hence, environmental hostility was isolated from the determinants integral to this study.

RESEARCH DESIGN AND METHODOLOGY

This is an empirical study that models the relations among the variables in the proposed model using Partial Least Squares (PLS). PLS is well suited to small sample sizes, formative indicators, and data that do not conform to traditional statistical assumptions (Chin, Marcolin, & Newsted, 2003). To obtain t-statistics for the paths, in line with Baron and Kenny’s (1986) test, we conducted a bootstrap test using 2000 resamples. Data screening was performed to ensure that the requirements for the data analysis were met. Once the data were free from outliers and adequate for multivariate analysis, an Exploratory Factor Analysis (EFA) was conducted to define the underlying structure of the variables. Following this step, a Confirmatory Factor Analysis (CFA) was conducted to assess the degree to which the data met the expected structure. For both analyses—EFA and CFA—the respective reliability and validity tests were applied. During the CFA, the proposed model was modified to obtain the best-fitting model for the proposed relationships. Once all the tests and the recommended modifications from the previously mentioned analyses were complete, we proceeded to test the structural hypotheses with the modified structural model to obtain the final model. Details of these procedures are explained in the following sections.
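The bootstrapping of path coefficients is handled inside the PLS software, but the underlying idea can be sketched generically: resample respondents with replacement, re-estimate the path each time, and form a t-statistic from the bootstrap standard error. The Python sketch below uses simulated data and a simple OLS slope as a stand-in for a PLS path; it is illustrative only and not the study's actual estimation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data: a single predictor (e.g., a composite network score)
# and an outcome (a composite success score). In the study these would be the
# latent variable scores produced by the PLS estimation.
n = 135
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(size=n)

def path_coefficient(x, y):
    # Simple OLS slope as a stand-in for a PLS path coefficient.
    return np.polyfit(x, y, 1)[0]

estimate = path_coefficient(x, y)

# Bootstrap: resample respondents with replacement and re-estimate the path.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(path_coefficient(x[idx], y[idx]))

se = np.std(boot, ddof=1)
t_stat = estimate / se
print(f"path = {estimate:.3f}, bootstrap SE = {se:.3f}, t = {t_stat:.2f}")
```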

FIGURE 2 PLS RESULTS OF PROPOSED STRUCTURAL MODEL WITHOUT MODERATORS

Construct Operationalization
The survey for this research, administered online through Qualtrics, was developed and used to test the proposed model. The study was specifically designed to test the validity of the theoretical measurement model and the hypothesized relationships among the constructs. The survey items were derived from existing measures, with some adaptations to fit the uniqueness of this research. We relied on existing measures since our intention in this study was not to develop new measures when available items had been validated in prior research. Measures of the systemic factors (EDU, OPP, and MIND) were adapted from the National Expert Survey (NES) (Reynolds et al., 2005). The SE construct was adapted from Chen, Gully, & Eden (2001); the SC constructs were adapted from Baron and Markman (2003); and the INETW and ONETW measures were adapted from Chen, Zou, & Wang’s (2009) measurements. Finally, the ES construct was modified to reflect firm performance, based upon Chua (2009). Variables were operationalized as reflective, formative (following the guidelines of Petter, Straub, & Rai, 2007), or categorical, as follows: the variables EDU, OPP, MIND, SC, SE, HOST, INETW, and ONETW were operationalized as reflective constructs on a five-point Likert scale (with 1 = strongly disagree and 5 = strongly agree).

The entrepreneurial success-related constructs of firm performance were operationalized as formative, using different scales such as sales growth rate, net profit margin, change in the number of employees, and change in financial condition over the last three years (Jarvis, MacKenzie, & Podsakoff, 2003). We chose to measure firm performance through sales and employee growth, net profit margin, and financial condition, using formative indicators, guided by the literature on that type of measurement (Chua, 2009). Likewise, because we had more than two variables predicting our dependent variable, we conducted a multicollinearity test. The results of the variance inflation factor analysis indicate that the predictor variables are separate and distinct (VIF: 1.01 to 1.51). The initial survey was pre-tested on a group of known entrepreneurs using Bolton’s technique, which operationalizes item response theory (Bolton, 1993). During pilot tests, five questions were flagged because of comprehension problems; the subsequent changes were approved by the testers. Since entrepreneurs’ time is limited, the questionnaire was designed to be completed within 15 minutes to improve the response rate.
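For readers who wish to reproduce a variance inflation factor check of this kind, the following Python sketch shows one common way to compute VIFs with statsmodels. The predictor columns and values are hypothetical placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical predictor scores (column names illustrative only).
X = pd.DataFrame({
    "OPP":  [3.2, 4.1, 2.8, 3.9, 3.5, 4.4],
    "MIND": [2.5, 3.0, 2.2, 3.6, 2.9, 3.1],
    "SC":   [4.0, 4.3, 3.8, 4.5, 4.1, 4.6],
})

# VIF is computed against a design matrix that includes a constant term.
design = sm.add_constant(X)
vifs = {col: variance_inflation_factor(design.values, i)
        for i, col in enumerate(design.columns) if col != "const"}
print(vifs)  # values near 1 indicate little multicollinearity
```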

Data Collection and Sample
This study was conducted with entrepreneurs doing business in different industries and regions in P.R. (details are available upon request). The sample consisted of 135 participants. The data were collected through a survey that assured participants that the study was purely for research purposes and that participation was voluntary. All surveys could be answered in Spanish or English, the thought being that while Spanish is the primary language in P.R., most Puerto Rican entrepreneurs consider English the language of business. Study participants were identified and selected from the Puerto Rico Trade and Export Office Official Register of Business. This list is public but must be requested. A total of 1,500 surveys were emailed and 221 were returned, a response rate of approximately 15 percent; of these, 135 were complete and usable for data analysis. Lower response rates for entrepreneur surveys appear typical compared with the general population (Dennis, 2003). We tested for response bias based on the time of response (early vs. late) following Armstrong and Overton’s (1977) test. To do this, we conducted a one-way ANOVA on the dependent variables (three observed variables), using response date as the distinguishing factor. The results of the ANOVA show no significant difference in the dependent-variable values (5.66 to 6.44) between early and late responders. Missing values accounted for five percent of the total values in the data set; since substitution is an acceptable remedy, we imputed a value for each missing entry (Tabachnick & Fidell, 2007). The minimum, maximum, and mean values of all variables appear reasonable.
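An early-versus-late comparison in the spirit of Armstrong and Overton (1977) can be run as a simple one-way ANOVA. The sketch below uses scipy with invented scores for the two response waves; it illustrates the test, not the study's actual values.

```python
import numpy as np
from scipy import stats

# Hypothetical dependent-variable scores split by response wave.
early = np.array([5.9, 6.4, 5.7, 6.1, 6.3, 5.8])
late  = np.array([6.0, 6.2, 5.6, 6.4, 5.9, 6.1])

f_stat, p_value = stats.f_oneway(early, late)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A non-significant p-value (p > .05) is consistent with the absence of
# early-versus-late non-response bias (Armstrong & Overton, 1977).
```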

Statistical Analysis
Based on a bivariate outlier analysis at a confidence level of 95 percent, we found close to 115 outlying cases. While we expect some observations to fall outside the ellipse, we deleted only the five respondents that fell outside it more than twice (Hair, Black, Babin, & Anderson, 2010). Descriptive statistics, correlations, and Cronbach’s alphas for all the variables are presented in Table 1.
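A bivariate outlier screen of this kind is commonly implemented by flagging cases whose squared Mahalanobis distance exceeds the chi-square cutoff that defines the 95 percent confidence ellipse. The sketch below illustrates the mechanics on simulated data; it is an assumed implementation, not a reproduction of the authors' procedure.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
# Hypothetical scores on two variables for a bivariate outlier check.
data = rng.normal(size=(135, 2))

# Squared Mahalanobis distance of each case from the centroid.
diff = data - data.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(data, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# Cases beyond the 95 percent chi-square cutoff (df = 2) fall outside the
# confidence ellipse and are flagged as bivariate outliers.
cutoff = chi2.ppf(0.95, df=2)
print("flagged cases:", np.where(d2 > cutoff)[0])
```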

Measurement Model: Exploratory Factor Analysis and Confirmatory Factor Analysis
An EFA was used to reveal the underlying structure of the relationships among a set of observed variables. Principal Axis Factoring with Direct Oblimin rotation was performed, with valid, reliable, and adequate results indicating that, based on the collected data, eleven factors underlie the survey items. We chose oblique rotation because its assumption of correlated variables is consistent with our understanding of the issues in this study (Field, 2005). Direct Oblimin, a particular type of oblique rotation, was selected because it allows factors to be correlated while keeping the solution interpretable (Costello & Osborne, 2005).


TABLE 1 DESCRIPTIVE STATISTICS, CORRELATIONS, AND CRONBACH’S ALPHAS

Factor   Promo   Mind    ONetw   Edu     Host    Adapt   SE      Opp     Percep  Expres  INetw   Mean    SD

Promo    (0.9)                                                                                   3.7305  .7794
Mind     -.171   (.925)                                                                          2.8263  1.042
ONetw    .111    -.200   (.926)                                                                  3.0295  .9172
Edu      -.069   -.175   .146    (.849)                                                          3.0483  1.033
Host     -.138   .327    -.080   -.162   (.867)                                                  2.9641  1.064
Adapt    -.138   .110    -.026   .076    .011    (.857)                                          4.4063  .5960
SE       -.123   .075    -.041   -.058   .065    .281    (.874)                                  4.3559  .5793
Opp      .026    .042    -.060   .039    .014    -.032   -.175   (.868)                          3.4081  1.116
Percep   .169    .073    .014    -.006   .028    -.341   -.323   .166    (.842)                  4.0966  .4912
Expres   .207    -.212   -.048   .144    -.020   -.142   -.002   .044    .026    (.90)           3.6974  .8410
INetw    -.139   -.013   -.171   -.017   .214    .273    .117    -.079   -.154   .016    (.817)  3.2500  .8535

Note. Figures in parentheses are Cronbach’s Alphas.

The KMO measure of sampling adequacy was .687, and Bartlett’s Test of Sphericity was significant (χ2 = 3819.86, df = 703, p < .001), indicating sufficient inter-correlations. Moreover, almost all MSA values along the diagonal of the anti-image matrix were above .50 and the reproduced correlations were over .30, suggesting that the data are appropriate for factoring. An additional check on the appropriateness of the number of extracted factors was that only 4 percent of nonredundant residuals had absolute values greater than 0.05. The selected EFA structure was the one with eigenvalues greater than one, which also fit with the eleven expected factors. The solution was considered good and acceptable after the evaluation of three possible models and their respective statistical values. During the evaluation process, twelve items were eliminated because their communality values fell below .50 (Igbaria, Livari, & Maragahh, 1995). The total variance explained was 68.7 percent, which exceeds the acceptable guideline of 60 percent (Hair et al., 2010).

To test the reliability of the measures, we used coefficient alpha (Gerbing & Anderson, 1988); Cronbach’s alpha values greater than .70 indicate good reliability (Nunnally, 1978). As the statistics presented in Table 1 show, all factors have acceptable reliability. Convergent validity can be assessed from the EFA loadings: since all of the variables loaded at levels greater than .50 on the expected factors, convergent validity is indicated (Igbaria et al., 1995). Discriminant validity measures the extent to which measures diverge from factors they are not expected to quantify. In EFA, this is demonstrated by the lack of significant cross-loadings across the factors (differences over .20). The items belonging to the same scale had factor loadings exceeding .50 on a common factor and no cross-loadings. The eleven extracted factors appear to be reflective constructs, as the items within each factor ask similar things.

We performed the CFA using PLS and began by reviewing the factors and their items to establish face validity. We specified the measurement model in PLS with the eleven factors derived from the EFA; no further modifications were needed to improve this model. Our EFA-modified model shows all reliability coefficients above .70 and the Average Variance Extracted (AVE) above .50 for each construct. The measurements are thus reliable, and the constructs account for at least 50 percent of the variance in their items.
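The EFA steps reported here (KMO, Bartlett's test, Principal Axis Factoring with Direct Oblimin rotation, and coefficient alpha) can be approximated in Python with the factor_analyzer package, as in the hedged sketch below. The data are simulated Likert-type responses and the four-item scale used for the alpha calculation is arbitrary; the sketch shows the mechanics rather than the study's actual analysis, which was not run in Python.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(2)
# Hypothetical Likert-type survey responses (38 items, 135 respondents).
items = pd.DataFrame(rng.integers(1, 6, size=(135, 38)).astype(float))

# Sampling adequacy and sphericity checks reported for the EFA.
chi_sq, p_value = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2 = {chi_sq:.1f} (p = {p_value:.3f}), KMO = {kmo_total:.3f}")

# Principal Axis Factoring with Direct Oblimin rotation, eleven factors.
efa = FactorAnalyzer(n_factors=11, method="principal", rotation="oblimin")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_)

def cronbach_alpha(scale: pd.DataFrame) -> float:
    # Classic formula: k/(k-1) * (1 - sum of item variances / variance of total).
    k = scale.shape[1]
    item_var = scale.var(axis=0, ddof=1).sum()
    total_var = scale.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(cronbach_alpha(items.iloc[:, :4]))  # alpha for a hypothetical 4-item scale
```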

In the correlations table (see Table 2), the square root of each construct’s AVE is greater than the correlations between constructs, thus establishing sufficient discriminant validity (Chin, 1998). Each item loads higher on its respective construct than on any other construct, further establishing convergent and discriminant validity (Gefen, Straub, & Boudreau, 2000).

TABLE 2 MEASUREMENT MODEL RESULTS: CONFIRMATORY FACTOR ANALYSIS: LOADING AND MEASUREMENT PROPERTIES OF CONSTRUCTS

Construct (AVE, Composite Reliability) / Items: Loading or Weight, t-Value, Communality

1 Promotion (AVE = 0.77, CR = 0.93)
Q16_3_1  0.7564  20.3088  0.5722
Q16_4_1  0.9087  28.9024  0.8258
Q16_5_1  0.9306  30.0956  0.866
Q16_6_1  0.9025  28.4506  0.8145

2 National Mindset (AVE = 0.817, CR = 0.947)
Q11_1_1  0.8868  36.7491  0.7864
Q11_2_1  0.9141  44.5225  0.8357
Q11_3_1  0.9162  52.5999  0.8395
Q11_4_1  0.8986  39.6549  0.8074

3 Inter-Organizational Networks (AVE = 0.775, CR = 0.945)
Q17_1_1  0.8559  26.9619  0.7326
Q17_2_1  0.8471  25.7777  0.7176
Q17_3_1  0.9333  28.1935  0.8711
Q17_4_1  0.9331  30.7195  0.8707
Q17_5_1  0.8263  27.1805  0.6828

4 Education (AVE = 0.772, CR = 0.91)
Q10_1_1  0.7871  25.1831  0.6196
Q10_2_1  0.9252  26.6255  0.8559
Q10_3_1  0.9169  23.6663  0.8408

5 Hostility (AVE = 0.79, CR = 0.919)
Q20_1_1  0.9037  37.2128  0.8167
Q20_2_1  0.8547  54.852   0.7305
Q20_3_1  0.9073  38.4972  0.8231

6 Adaptability (AVE = 0.702, CR = 0.904)
Q15_1_1  0.799   21.3127  0.6385
Q15_2_1  0.8365  24.7434  0.6998
Q15_4_1  0.8514  19.7499  0.725
Q15_5_1  0.8633  18.709   0.7453

7 Self-Efficacy (AVE = 0.733, CR = 0.916)
Q13_1_1  0.8217  15.1947  0.6753
Q13_2_1  0.8521  19.295   0.7262
Q13_3_1  0.9086  17.6227  0.8256
Q13_4_1  0.8394  18.6761  0.7047


8 Opportunities (AVE = 0.883, CR = 0.938)
Q12_1_1  0.9399  86.787   0.8834
Q12_2_1  0.9399  86.787   0.8834

9 Perception (AVE = 0.684, CR = 0.896)
Q14_1_1  0.7704  19.7143  0.5936
Q14_2_1  0.8916  27.3927  0.795
Q14_3_1  0.8576  26.7221  0.7355
Q14_4_1  0.7812  17.3595  0.6103

10 Expressiveness (AVE = 0.911, CR = 0.953)
Q16_1_1  0.9545  112.9245  0.911
Q16_2_1  0.9545  112.9245  0.911

11 Individual Networking (AVE = 0.733, CR = 0.892)
Q18_3_1  0.8623  28.7039  0.7435
Q18_4_1  0.8411  27.7983  0.7074
Q18_5_1  0.8648  27.0316  0.7479

DV (formative): Firm Performance (AVE = 0.381, CR = 0.746)
Q35_1     0.3318  11.8367  0.4143
Q36_1     0.3318  11.2715  0.4126
NPM_AVG   0.3318  13.953   0.4376
SGR_AVG   0.3318  21.2274  0.5257
Empl_AVG  0.3318  4.5872   0.1163
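The Fornell-Larcker style comparison described above, square root of AVE versus inter-construct correlations, can be computed directly from standardized loadings. The sketch below uses loadings similar to those reported in Table 2 and an illustrative inter-construct correlation; it demonstrates the criterion rather than recalculating the study's figures.

```python
import numpy as np

# Illustrative standardized loadings for two reflective constructs and an
# illustrative correlation between their latent scores (values are examples,
# not the study's exact quantities).
loadings = {
    "Mindset":   np.array([0.887, 0.914, 0.916, 0.899]),
    "Education": np.array([0.787, 0.925, 0.917]),
}
latent_corr = -0.175  # assumed correlation between the two constructs

# AVE = mean of squared standardized loadings for each construct.
ave = {name: (lam ** 2).mean() for name, lam in loadings.items()}

# Fornell-Larcker criterion: sqrt(AVE) of each construct should exceed its
# correlations with the other constructs.
for name, value in ave.items():
    print(f"{name}: AVE = {value:.3f}, sqrt(AVE) = {np.sqrt(value):.3f}, "
          f"discriminant OK = {np.sqrt(value) > abs(latent_corr)}")
```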

Because we administered a single survey to a single sample, we conducted a common method bias test to ensure that the results were not biased by this mono-method design. To do this, we examined our latent variable correlation matrix for values exceeding 0.900, which, according to Bagozzi, Yi, & Phillips (1991), would be a strong indication of common method bias. However, the highest correlation we observed is 0.396, with an average correlation of .118 and a lowest positive correlation of .013. These values provide sufficient evidence that our data collection was not biased by a single common method factor.
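The correlation-matrix screen for common method bias amounts to checking whether any off-diagonal latent variable correlation approaches the 0.900 threshold. A minimal sketch, assuming simulated latent variable scores with hypothetical column names:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Hypothetical latent variable scores for the eleven constructs.
scores = pd.DataFrame(rng.normal(size=(135, 11)),
                      columns=[f"LV{i+1}" for i in range(11)])

corr = scores.corr()
off_diag = corr.where(~np.eye(len(corr), dtype=bool))

# Correlations above .90 among latent variables would signal possible
# common method bias (Bagozzi, Yi, & Phillips, 1991).
print("max |r| =", off_diag.abs().max().max())
print("mean r  =", off_diag.stack().mean())
```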

Structural Model
We tested our structural model using PLS-Graph 3.0 because we had formative factors (Chin, 1998). The significance of the paths was estimated using t-statistics produced during bootstrapping with 2000 resamples (see the trimmed model in Figure 2). Next, we performed a mediation analysis using the causal and intervening variable methodology (Baron & Kenny, 1986). Mathieu and Taylor (2006) indicate that mediator variables are explanatory mechanisms that shed light on the nature of the relationship that exists between two variables. Mediated paths connecting the independent variables (OPP, SC, and SE) to the dependent variable (ES) through a mediating variable (ONETW or INETW) were analyzed to examine the direct, indirect, and total effects. For each of the mediation hypotheses tested (H3c and H8a to H8d), the model was first run without the mediation paths and then with the mediator included.
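The Baron and Kenny style mediation steps (total effect, a-path, and direct effect plus b-path with the mediator included) can be sketched with ordinary regressions, as below. The data are simulated and OLS stands in for the PLS estimation, so the sketch only illustrates how direct, indirect, and total effects are decomposed.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 135
# Simulated stand-ins: SC -> INETW -> ES (names follow the study's constructs,
# values are simulated for illustration only).
sc = rng.normal(size=n)
inetw = 0.3 * sc + rng.normal(size=n)
es = -0.2 * inetw + 0.1 * sc + rng.normal(size=n)

def fit(y, X):
    # OLS with an intercept, used here as a stand-in for a PLS path estimate.
    return sm.OLS(y, sm.add_constant(X)).fit()

# Baron & Kenny style steps.
c_total = fit(es, sc).params[1]                      # total effect of SC on ES
a_path  = fit(inetw, sc).params[1]                   # SC -> INETW
model_b = fit(es, np.column_stack([sc, inetw]))
c_prime = model_b.params[1]                          # direct effect with mediator
b_path  = model_b.params[2]                          # INETW -> ES controlling SC

indirect = a_path * b_path
print(f"total = {c_total:.3f}, direct = {c_prime:.3f}, indirect = {indirect:.3f}")
```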

FINDINGS

The estimated path loadings based on PLS, their significance, and the R2 values are presented in Figure 2. To avoid errors of statistical conclusion validity, an appropriate power level was established (power of 0.80 and significance level of .05) and used to verify that the analysis could yield statistically significant results while controlling the possibility of Type I and Type II errors (Hair et al., 2010). The R2 values show that the predictors used in this research for ES (.133; p < .05) and for INETW (.116; p < .01) are sufficient to explain them. We found acceptable power over .80 (Hair et al., 2010) at the 95 percent and 99 percent confidence levels, respectively. Hence, the independent factors proposed in the model were sufficient to explain both constructs. However, this was not the case for ONETW (.014) or SE (.012), which may be because in both cases our model considered only one predictive factor. Additionally, the f-squared for the effect of SC on INETW indicates a small effect (f2 = 0.84). HOST, which shows a strong and significant negative effect on ES (λ = -.33; p < .01) at 99 percent confidence, was included in our model as a control variable. The mediating roles of networking are examined in detail throughout this section (see Figure 2).
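Post-hoc power for a multiple-regression R-squared of this magnitude can be approximated with the noncentral F distribution, as in the sketch below. The number of predictors, R-squared, and sample size used here are illustrative assumptions rather than the study's reported quantities; the calculation is offered only to show how such a power check might be carried out.

```python
from scipy.stats import f, ncf

# Post-hoc power for a multiple-regression R2 test (Cohen's f2 framework).
# Values below are illustrative assumptions, not the study's reported figures.
n, predictors, r_squared, alpha = 135, 7, 0.13, 0.05

f2 = r_squared / (1 - r_squared)          # Cohen's effect size f2
df_num = predictors
df_den = n - predictors - 1
nc = f2 * (df_num + df_den + 1)           # noncentrality parameter

f_crit = f.ppf(1 - alpha, df_num, df_den)
power = 1 - ncf.cdf(f_crit, df_num, df_den, nc)
print(f"achieved power = {power:.2f}")
```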

Systemic Factors as the Roots of Entrepreneurial Success
This research suggests that systemic factors such as OPP, MIND, and EDU will directly impact ES. Remarkably, our first results show that none of these propositions (H1a to H1c) was supported. Second, this research postulates that a national mindset toward entrepreneurship directly impacts entrepreneurial success (H2). Unfortunately, we did not find a significant direct relationship between MIND and ES. Third, the entrepreneurial education received by the entrepreneurs surveyed does not appear to be appropriate to produce a significant effect on ES, either directly or indirectly through the enhancement of entrepreneurs’ SE, since hypotheses H3a, H3b, and H3c were not supported. All of the abovementioned results provide the foundation for our first finding: systemic factors in P.R.—entrepreneurial opportunities, national mindset toward entrepreneurship, and entrepreneurial education—are not suitable sources for boosting entrepreneurial success.

The Role of Individual Factors in the Likelihood of Entrepreneurial Success
This paper theorizes that individual factors such as social competence (H4a) and self-efficacy (H5a) may act as direct driving forces of entrepreneurial success. However, our results show a positive but insignificant direct effect of entrepreneurs’ SE on ES (H5a). Moreover, of all our hypotheses, H4b was the only one to show a significant direct relationship, between entrepreneurs’ SC and INETW (λ = .265; p < .01). Therefore, our second finding is: entrepreneurs’ social competence enhances their individual social networking activities.

The Mediating Role of Individual Social Networking and Inter-Organizational Networking
This research hypothesizes that individual social networks (H6) and inter-organizational networks (H7) have a positive direct effect on ES. Surprisingly, our data reveal a significant inverse relationship between INETW and ES (λ = .214; p < .05) and an insignificant but still negative effect between ONETW and ES. Hence, our third finding is: individual social networks have a negative effect on entrepreneurial success. This paper also suggests that inter-organizational networks mediate the relationship between entrepreneurial opportunities and entrepreneurial success (H8a). Additionally, we theorize that individual social networks mediate the relationships of entrepreneurial opportunities (H8b), social competence (H8c), and self-efficacy (H8d) with entrepreneurial success. Our data reveal an indirect relationship between SC and ES through INETW (H8c). Yet, as previously discussed, the relationship between INETW and ES is negative. Therefore, our fourth finding is: entrepreneurs’ social competence indirectly affects entrepreneurial success through the development of individual social networks. However, even when an individual’s social competence enhances his or her social network, that individual social network diminishes entrepreneurial success.

DISCUSSION

This study was conducted with entrepreneurs doing business in P.R. who have diverse entrepreneur and firm characteristics. This by itself may account for a wide range of differences between surveyed groups and in the role of each factor under consideration. However, even amid those potential differences, the lack of an adequate institutional structure conducive to entrepreneurship is evident across all the relevant factors.

A study published by the Office of Advocacy of the U.S. Small Business Administration (SBA) (Acs & Szerb, 2010) categorized P.R. as a country that should be in the economic development stage known as “innovation-driven.” However, its results showed that P.R.—at number 17 out of 40 countries surveyed—had not exploited its full potential. In the innovation-driven stage, entrepreneurship plays a more important role in increasing economic growth. The SBA report further specified that institutions need to be strengthened before entrepreneurial resources can be deployed to drive innovation. Consequently, our examination expands the abovementioned study by explaining why P.R. has not yet attained the innovation-driven stage. It reveals that Puerto Rican institutions are neither suitable nor structured to lead the local economy from an efficiency-driven to an innovation-driven one.

Stevenson and Jarillo (2007) assert that when an opportunity is detected and individuals are willing, the ability to exploit it is vital. In that line of thought, our investigation reveals that the inability of P.R.’s entrepreneurs to exploit opportunities stems from their individual networking barriers. Acs and Szerb (2010) demonstrate that a lack of adequate networking may prevent countries from reaching the next stage of development. In that sense, our findings expand the views of Aponte (2002a), Aponte and Rodríguez (2005), and the 2007 GEM report (Bosma et al., 2008), all of which state that the general population in P.R. recognizes that opportunities exist and wants to pursue them but perceives that it is not feasible to do so. Thus, we agree that the problem is not a lack of opportunities, and we add that the networks utilized by entrepreneurs at the individual level may represent a barrier to successfully exploiting those perceived opportunities.

Furthermore, the literature states that networking, as part of the entrepreneurial attitude, may affect the general disposition of a country’s population toward entrepreneurs, entrepreneurship, and business start-ups (Acs & Szerb, 2010), and perceptions about entrepreneurship may affect the supply of and demand for national entrepreneurial activities (Bosma et al., 2008). Therefore, our finding that individual social networks have a negative influence on entrepreneurial success may explain the contradiction between the positive perceptions toward entrepreneurship reported by Aponte (2002b) and the lower entrepreneurial activity recounted by the 2007 GEM study. However, the reasons why entrepreneurs’ networks at the individual level have a negative impact on their success are beyond the scope of this study.

As previously mentioned, beliefs, values, and preferences have direct impacts on economic outcomes (Guiso et al., 2006). Nonetheless, our study shows that P.R.’s national mindset toward entrepreneurship is not acting as a source of entrepreneurial success. A positive national mindset toward entrepreneurship is essential to developing adequate collaborations and institutional and industrial structures and to responding to perceived opportunities (Aldrich & Zimmer, 1986); this finding may help to explain why the institutional factors, as a whole, are not adequate for generating a successful entrepreneurial environment.
Changing the mindset of a nation, as Romaguera (2010) states, is an incredibly challenging task that requires changing individual mindsets through a well-designed master plan. However, the entrepreneurial miracle is not a mystical product. As Romaguera (2010) illustrates, it is part of a well-conceived plan of action that primarily requires knowing where we are as a country and where we want to go. It is in this sense that entrepreneurial education in P.R. and internationally seems to be scarce (Aponte & Rodríguez, 2005; Corduras-Martinez et al., 2010). Along this line, our study confirms Varela (2011) and Gibb (2011), who show that the entrepreneurial education that has a successful effect on firm performance is education that builds entrepreneurs’ competencies, such as self-efficacy, and that reaches specific target groups such as new startups and firms in the internationalization process. To achieve the desired impact of education, entrepreneurial advocates must determine which groups they want to impact and how. This also requires a blueprint with evaluation, measurement, and corrective action. In conclusion, those with a stake in entrepreneurial success—government administrators, entrepreneurial organizations, business associations, educators, and entrepreneurs—must be aware of all of the abovementioned findings in order to design a master plan that may lead P.R. to build a successful entrepreneurial environment.

CONTRIBUTION TO THEORY AND PRACTICE

This study examines the impact of systemic and individual factors on entrepreneurial success using the island of P.R. as a case study. Limited scholarly research has been conducted on the entrepreneurial environment in P.R., and examinations of institutional and individual factors together are even scarcer. Moreover, no scholarly work has analyzed the mediating roles of individual and inter-organizational networks on the island, making this a groundbreaking investigation. This research adds to the body of entrepreneurship theory by demonstrating the relationships and factors that may facilitate or hinder entrepreneurial success.

Our research suggests that a better interconnected entrepreneurial system and stronger individual competencies may be necessary for both practitioners and policy makers to develop a master plan that may contribute to improving the current and prospective entrepreneurial environment. Hence, the results of our research could be used to assist policy makers, entrepreneurial advocacy organizations, and entrepreneurs themselves in carrying out their entrepreneurial goals. Policy makers should be aware of the necessity of designing, through their policies and programs, an integrative system that helps interlock entrepreneurial opportunities, education, and a national mindset favorable to entrepreneurship for both current and future generations. Entrepreneurial advocacy organizations, for their part, should continue strengthening the inter-organizational networks that now seem to be very helpful for entrepreneurs, while at the same time reviewing the overall content of their programs. Entrepreneurs themselves should reevaluate the use and composition of their individual networks as well as their entrepreneurial competencies. In that line, academic institutions and entrepreneurial organizations with programs geared toward entrepreneurs should be informed about the systemic and individual deficiencies so they may enhance their curriculum design for current and future entrepreneurs.

LIMITATIONS

The size and composition of our sample may limit the generalization of our findings as our sample is specific to entrepreneurs doing business in P.R. A wide variety of entrepreneurs were included in our sample to take into account their diverse individual and firm characteristics. However, the purpose of this study was to examine the systemic and individual factors that may facilitate or hinder entrepreneurial success in P.R., and our results may provide a basis for other countries. Yet, they should be examined bearing in mind each country’s individual context. In addition, constructs were measured by respondents who self-reported information about their firms and perceptions and may be inherently biased. That being said, potential bias is considered a minimal risk in this case for the development of practice-relevant theory as respondents were not asked to identify themselves or their organizations (Venkatraman & Ramanujam, 1986).

FUTURE RESEARCH

Our work suggests the need for further research on other possible interactions between institutional and individual factors that may help to develop a successful entrepreneurial environment, such as the mediating role of perceived opportunities or the moderating impact of entrepreneurial policies and programs. Further research on individual and systemic factors not included in this research is also recommended, such as the role of financial capital, more specifically the effect of individuals’ savings and the availability of investors. Lastly, based on our results, additional research into the composition, use, and development process of individual social networks, specifically, is advised.

REFERENCES

Acs, Z. J., & Szerb, L. (2010). The Global Entrepreneurship Index (GEINDEX). Foundations and Trends in Entrepreneurship, 5, (5), 341-435.


Aldrich, H. E., & Zimmer, C. (1986). Entrepreneurship through Social Networks. In H. E. Aldrich (Ed.), Population Perspectives on Organizations, pp. 13-28. Uppsala: Acta Universitatis Upsaliensis.

Armstrong, J., & Overton, T. (1977). Estimating Nonresponse Bias in Mail Surveys. Journal of Marketing Research, 14, (3), 396-402.

Aponte, M. (2002a). Factores Condicionantes de la Creación de Empresas en Puerto Rico: Un Enfoque Institucional. Bellaterrra: Universitat Autónoma de Barcelona.

Aponte, M. (2002b). Start-Ups in Puerto Rico: Poor Entrepreneurial Spirit or Structural Problem? Paper presented to the International Council for Small Business, Puerto Rico, June 16-19.

Aponte, M., & Rodríguez, E. (2005). Informe Especial Sobre Política Pública en Puerto Rico. http://www.gemconsortium.org. Accessed on March 5, 2013.

Audretsch, D., & Thurik, R. (2004). A Model of the Entrepreneurial Economy. Papers on Entrepreneurship, Growth, and Public Policy. Jena, Germany: Group Entrepreneurship, Growth and Public Policy.

Bagozzi, R., Yi, Y., & Phillips, L. (1991). Assessing Construct Validity in Organizational Research. Administrative Science Quarterly, 36, (3), 421-458.

Bandura, A. (1997). Self-Efficacy: The Exercise of Control. New York: W. H. Freeman.

Baron, R. (2000). Psychological Perspectives on Entrepreneurship: Cognitive and Social Factors in Entrepreneurs' Success. Current Directions in Psychological Science, 9, (1), 15-18.

Baron, R., & Kenny, D. (1986). The Moderator-Mediator Variable Distinction in Social Psychological Research: Conceptual, Strategic, and Statistical Considerations. Journal of Personality and Social Psychology, 51, 1173-1182.

Baron, R. A., & Markman, G.D. (2000). Beyond Social Capital: How Social Skills Can Enhance Entrepreneurs' Success. Academy of Management Executive, 14, (1), 106-116.

Baron, R., & Markman, G. (2003). Beyond Social Capital: The Role of Entrepreneurs' Social Competence in Their Financial Success. Journal of Business Venturing, 18, (1), 41-60.

Beckert, J. (2010). How Do Fields Change? The Interrelations of Institutions, Networks, and Cognition in the Dynamics of Markets. Organization Studies, 31, (5), 605-627.

Bolton, R. N. (1993). Pretesting Questionnaires: Content Analysis of Respondents’ Concurrent Verbal Protocols. Marketing Science, 12, (3), 280-303.

Bosma, N., Jones, K., Autio, E., & Levie, J. (2008). Global entrepreneurship monitor (GEM): 2007 executive report. Babson College, London Business School and Global Entrepreneurship Research Consortium.

Boyd, N., & Vozikis, G. (1994). The Influence of Self-Efficacy on the Development of Entrepreneurial Intentions and Actions. Entrepreneurship Theory and Practice, 19, 63-77.

Brüderl, J., & Preisendorfer, P. (2000). Fast-Growing Businesses. International Journal of Sociology, 30, (3), 45-70.

Butler, J., & Hansen, G. (1991). Network Evolution, Entrepreneurial Success, and Regional Development. Entrepreneurship and Regional Development: An International Journal, 3, (1), 1-16.

Casson, M. (2003). Entrepreneurship, Business Culture and the Theory of the Firm. In Z. Acs & D. Audretsch (Eds.), Handbook of Entrepreneurship Research, pp. 223-246. Great Britain: Kluwer Academic Publishers.

Chen, G., Gully, S., & Eden, D. (2001). Validation of a New General Self-Efficacy Scale. Organizational Research Methods, 4, (1), 62-83.

Chen, X., Zou, H., & Wang, D. (2009). How do New Ventures Grow? Firm Capabilities, Growth Strategies and Performance. International Journal of Research in Marketing, 26, (4), 294-303.

Chin, W. (1998). Commentary: Issues and Opinion on Structural Equation Modeling. MIS Quarterly, 22, (1), vii-xvi.

Chin, W., Marcolin, B., & Newsted, P. (2003). A Partial Least Squares Latent Variable Modeling Approach for Measuring Interaction Effects: Results from a Monte Carlo Simulation Study and an Electronic-Mail Emotion/Adoption Study. Information Systems Research, 14, (2), 189.

Chua, J. (2009). Government Leadership-Business Voice: An Interaction and Mediation Analysis of Business Environment Reform in Indonesia. http://digitalcase.case.edu:9000/fedora/get/ksl:weaedm330/weaedm330.pdf. Accessed on March 5, 2013.

Corduras-Martinez, A., Levie, J., Kelley, D. J., Saemundsson, R. J., & Schott, T. (2010). Global Entrepreneurship Monitor Special Report: A Global Perspective on Entrepreneurship Education and Training. GERA.

Costello, A., & Osborne, J. (2005). Best Practices in Exploratory Factor Analysis: Four Recommendations for Getting the Most from Your Analysis. Practical Assessment, Research and Evaluation, 10, (7), 1-9.

Covin, J., Slevin, D., & Covin, T. (1990). Content and Performance of Growth-Seeking Small Firms in High- and Low-Technology Industries. Journal of Business Venturing, 5, (6), 391-412.

Covin, J. G., & Slevin, D. (1989). Strategic Management of Small Firms in Hostile and Benign Environments. Strategic Management Journal, 10, (1), 75-87.

De Hoyos-Ruperto, M., Romaguera, J. M., Carlsson, B., & Perelli, S. (2012). Entrepreneurial Environment Dilemma in Puerto Rico: A Challenge of Self and System. Journal of Marketing Development and Competitiveness, 6, (3), 11-28.

Dennis, W. J. (2003). Raising Response Rates in Mail Surveys of Small Business Owners: Results of an Experiment. Journal of Small Business Management, 41, (3), 278-295.

Dubini, P., & Aldrich, H.E. (1991). Personal and Extended Networks are Central to the Entrepreneurial Process. Journal of Business Venturing, 6, (5), 305-313.


Field, A. (2005). Discovering statistics using SPSS. Sage Publications.

Gartner, W. (1990). What are We Talking About When We Talk About Entrepreneurship? Journal of Business Venturing, 5, (1), 15-28.

Gartner, W., Shaver, K., Carter, N., & Reynolds, P. (2004). Handbook of Entrepreneurial Dynamics: The Process of Business Creation. California: Sage Publications, Inc.

Gefen, D., Straub, D., & Boudreau, M. (2000). Structural Equation Modeling and Regression: Guidelines for Research Practice. Communications of the Association for Information Systems, 4, (7).

Gerbing, D., & Anderson, J. (1988). An Updated Paradigm for Scale Development Incorporating Unidimensionality and Its Assessment. Journal of Marketing Research, 25, (2), 186-192.

Gibb, A. (2011). Espíritu empresarial: Soluciones únicas para ambientes únicos, Acaso es posible lograr esto con el paradigma existente? In R. Varela (Ed.), Educación empresarial: Desarrollo, Innovación y Cultura Empresarial. Santiago de Cali, Colombia: Universidad Icesi, 2, (220).

Guiso, L., Sapienza, P., & Zingales, L. (2006). Does Culture Affect Economic Outcomes? Journal of Economic Perspectives, 20, (2), 23-48.

Hair, J., Black, W., Babin, B. J., & Anderson, R.E. (2010). Multivariate Data Analysis (7th ed.). Upper Saddle River, NJ: Prentice Hall.

Hoang, H., & Antoncic, B. (2003). Network-Based Research in Entrepreneurship: A Critical Review. Journal of Business Venturing, 18, 165-187.

Igbaria, M., Livari, J., & Maragahh, H. (1995). Why do Individuals Use Computer Technology? A Finnish Case Study. Information and Management, 29, (5), 227-238.

Jarvis, C.B., MacKenzie, S., & Podsakoff, P. (2003). A Critical Review of Construct Indicators and Measurement Model Misspecification in Marketing and Consumer Research. Journal of Consumer Research, 30, 199-218.

Johannisson, B. (1998). Personal Networks in Emerging Knowledge-Based Firms: Spatial and Functional Patterns. Entrepreneurship and Regional Development, 10, (4), 297-312.

Kirby, D. A. (2003). Entrepreneurship Education: Can Business Schools Meet the Challenge? In E. Genesca, D. Urbano, J. Capellera, C. Guallarte, & J. Vergés (Eds.), Creación de empresas: Entrepreneurship: Homenaje al profesor José María Veciana Vergés, pp. 359-375. Bellaterra: Universidad Autónoma de Barcelona.

Klyver, K., & Hindle, K. (2006). Do Social Networks Affect Entrepreneurship? A Test of the Fundamental Assumptions Using Large Sample, Longitudinal Data. Paper presented at the 20th Australian and New Zealand Academy of Management Conference, Rockhampton, Qld: (6-9 December).

Krueger, N. F. (2000). The Cognitive Infrastructure of Opportunity Emergence. Entrepreneurship Theory and Practice, 24, (3), 5-23.

Krueger, N., & Brazeal, D. (1994). Entrepreneurial Potential And Potential Entrepreneurs. Entrepreneurship Theory and Practice, 18, 91-105.


Levie, J., & Autio, E. (2007). Entrepreneurial Framework Conditions and National Level Entrepreneurial Activity: Seven-Year Panel Study. Paper presented for the 3rd International Global Entrepreneurship Conference, Washington, D.C. October 1-3.

Manning, K., Birley, S., & Norburn, D. (1989). Developing New Ventures Strategy. Entrepreneurship Theory and Practice, 14, (1), 69-76.

Markman, G., Balkin, D., & Baron, R. (2002). Inventors and New Venture Formation: The Effects of General Self-Efficacy and Regretful Thinking. Entrepreneurship Theory and Practice, 27, (2), 149-165.

Mathieu, J. E., & Taylor, S. R. (2006). Clarifying Conditions and Decision Points for Mediational Type Inferences in Organizational Behavior. Journal of Organizational Behavior, 27, 1031–1056.

Miller, W. F. (2000) The ‘Habitat’ for Entrepreneurship. http://iis-db.stanford.edu/pubs/11898/Miller. pdf. Accessed on March 5, 2013.

Nunnally, J. (1978). Psychometric Theory. New York, NY: McGraw-Hill.

Petter, S., Straub, D., & Rai, A. (2007). Specifying Formative Constructs in Information Systems Research. MIS Quarterly, 31, (4), 623-656.

Peterman, N. E., & Kennedy, J. (2003). Enterprise Education: Influencing Students’ Perceptions of Entrepreneurship. Entrepreneurship Theory and Practice, 28, (2), 129–144.

Reynolds, P., Bosma, N., Autio, E., Hunt, S., De Bono, N., Servais, I., Lopez-Garcia, P., & Chin, N. (2005). Global Entrepreneurship Monitor: Data Collection Design and Implementation 1998-2003. Small Business Economics, 24, 205-231.

Romaguera, J. (2010). The Miracle of Changing the Mindset for Young, Would-Be Entrepreneurs! In F. Kiesner (Ed.), Creating Entrepreneurs: Making Miracles Happen, pp. 217-243. Singapore: World Scientific Publishing Co. Pte. Ltd.,

Ronstadt, R. (1987). The Educated Entrepreneurs: A New Era of Entrepreneurial Education is Beginning. American Journal of Small Business, 11, (4), 37-53.

Saxenian, A. (1994). Regional Advantage: Culture and Competition in Silicon Valley and Route 128. Cambridge: Harvard University Press.

Schwab, K. (Ed.) (2012). The Global Competitiveness Report 2012-2013. World Economic Forum.

Shane, S. (2003). A General Theory of Entrepreneurship. Cheltenham: Edward Elgar.

Shane, S., & Venkataraman, S. (2000). The Promise of Entrepreneurship as a Field of Research. The Academy of Management Review, 25, (1), 217-226.

Simon, M., Houghton, S., & Aquino, K. (1999). Cognitive Biases, Risk Perception and Venture Formation: How Individuals Decide to Start Companies. Journal of Business Venturing, 15, (2), 113-134.


Stevenson, H., & Jarillo, J. (2007). A Paradigm of Entrepreneurship: Entrepreneurial Management. In A. Cuervo, D. Ribeiro, & S. Roig (Eds.), Entrepreneurship: Concepts, Theory and Perspective, pp. 155-170. Springer.

Tabachnick, B., & Fidell, L. (2007). Using Multivariate Statistics. Boston, MA: Pearson Education.

Todaro, M. (1981). Economic Development in the Third World. New York: Longman.

Uzzi, B. (1996). The Sources and Consequences of Embeddedness for the Economic Performance of Organizations: The Network Effect. American Sociological Review, 61, (4), 674-698.

Varela, R. (2011). Educación Empresarial Basada en Competencias Empresariales. In R. Varela (Ed.), Educación empresarial: Desarrollo, innovación y cultura empresarial, 2, 220. Santiago de Cali, Colombia: Universidad Icesi.

Venkatraman, N., & Ramanujam, V. (1986). Measurement of Business Performance in Strategy Research: A Comparison of Approaches. Academy of Management Review, 11, (4), 801-814.

Welter, F., & Smallbone, D. (2011). Institutional Perspectives on Entrepreneurial Behavior in Challenging Environments. Journal of Small Business Management, 49, (1), 107-125.

Wilson, F., Kickul, J., & Marlino, D. (2007). Gender, Entrepreneurial Self-Efficacy and Entrepreneurial Career Intentions: Implications for Entrepreneurship Education. Entrepreneurship Theory and Practice, 31, (3), 387-406.

World Bank. (2013). Doing Business 2013: Smarter Regulations for Small and Medium-Size Enterprises. Washington, DC: World Bank Group. DOI: 10.1596/978-0-8213-9615-5.

World Economic Forum. (2009). Educating the Next Wave of Entrepreneurs: Unlocking Entrepreneurial Capabilities to Meet the Global Challenges of the 21st Century: A report of the Global Education Initiative. Switzerland: World Economic Forum.

Zhao, H., Seibert, S., & Hills, G. (2005). The Mediating Role of Self-Efficacy in the Development of Entrepreneurial Intentions. Journal of Applied Psychology, 90, (6), 1265-1272.


The Need to Practice What We Teach: Succession Management in Higher Education

Jamye Long Delta State University

Cooper Johnson Delta State University

Sam Faught The University of Tennessee at Martin

Jonathan Street The University of Memphis

“Practice what you preach” is a phrase often used to emphasize the importance of maintaining one’s integrity by performing as one advises others to perform. In the case of succession management, the phrase highlights the gap between educators and practitioners. Educators routinely instill in students the understanding that a succession plan is a necessary business practice; yet within the confines of higher education, succession management plans are rare. This raises the question of whether institutions recognize the ethical implications of teaching a concept they themselves are unwilling to implement.

INTRODUCTION

A succession management plan is a proactive process that ensures continuing leadership committed to the organization’s values, mission, and strategic plan by intentionally developing employees within the organization for advancement. For example, when Southwest Airlines co-founder Herb Kelleher retired in 2001, he stated that Southwest prioritized succession planning and named James Parker, the company’s general counsel, to be Chief Executive Officer and Colleen Barrett, Kelleher’s former legal secretary, to be President and Chief Operating Officer of the airline (Hirsch, 2001). While corporate America has embraced the model of succession management, the concept, although emphasized in higher education classrooms, has largely been shunned by the administrations of universities and colleges. With the understanding that institutions of higher learning are operating businesses, universities need to implement the succession management strategies they teach in order to retain their credibility in the service-based business of educating.


THE ORIGIN OF SUCCESSION MANAGEMENT

Mahler and Wrightnour (1973) laid the foundation for the theory of succession management with their initial publication, “Executive Continuity”. Mahler and Drotter (1986) reinforced the importance of the practice of succession management in their later work, “The Succession Planning Handbook for the Chief Executive”. In both works, the authors assert the importance of practicing succession management for designated positions and for the longevity and success of organizations.

In his research, Mahler studied companies such as Exxon and General Electric to demonstrate the way succession management practices can lead to the growth of future leaders. Mahler worked with Ted Levino, then vice president of human resources for General Electric, to create and establish a set of guidelines emphasizing the importance of replacing key executive positions, often before a vacancy has been created. Mahler and Levino are recognized as having developed the succession strategy to such an extent that General Electric became known as an “academy company” due to its success in producing future leaders (Kesler, 2002). Through his research, Mahler created a systematic approach to establishing a successful pattern in developing leaders.

Over the years, management experts have added to the original methodology, and principles have evolved with respect to the value and means of succession management. Many of the new concepts have been developed through the failures of more antiquated methods (Kesler, 2002). For example, experts have determined that it is more rewarding to cultivate leadership pools that include multiple candidates than it is to place emphasis on forecasting specific replacements. Recommendations also specify that CEOs emphasize developing potential leaders rather than saving face with boards of directors who have little influence on the overall future of the company (Mahler & Drotter, 1986).

ESTABLISHING A BASE LINE: UNDERSTANDING LEADERSHIP THROUGH BUSINESSES AND HIGHER EDUCATION

Regarding the theory of developing leaders, the practice of creating pools of replacement candidates should not lead organizations merely to create replicas of their existing leadership. Although the current leadership may be successful, tomorrow’s leaders should understand the flexibility and vision necessary to remain relevant in a constantly progressing world. By syncing this business strategy with the existing and forthcoming human capital of the organization, businesses are able to maximize the value and strength of their potential candidates for the succession of a position. This is accomplished by moving employees through different roles in a company and presenting them with challenges that require the invocation of knowledge acquired through past experiences. As demonstrated through the actions taken by Southwest Airlines, many of today’s premier companies are developing strategic succession management plans to remain competitive and atop their respective industries.

Many colleges and universities have embraced the belief that the most effective means for operating their institutions is a competitive business approach to the industry of higher education. This rationale leads to the belief that institutions of higher learning should implement forms of succession management in order to remain competitive with other colleges and universities with similar characteristics. However, many universities and colleges fail to understand the importance of this idea. Cembrowski and Costa (1998) stated that, due to a lack of information in education sources about succession practices, a need exists for leaders in higher education to review the business literature on the topic. Although many universities teach the importance of succession management for a healthy organization to their business students, many of these same institutions fail to practice what they preach. Clunies (2007) examined this lack of implementation of succession management in higher education and concluded that significant contrasts in education and business cultures make it a challenge to apply succession planning in the field of academia. Lampton (2010) also cited the difficulty of connecting a principle taught in classes with a concept that should be applied in the administrations of higher education institutions. Lampton conducted a study that determined a majority of respondents believed that succession planning would not be useful at their university. This is an alarming response, indicating avoidance by administrations in higher education of a proven, successful business concept. Lampton’s findings disclose that the departmental managers in the universities surveyed had made plans for implementing their own form of succession management within their divisions, but that they had not received any support or leadership from their supervisors regarding these plans.

OPENING MINDS TO OUR TEACHINGS

Although Lampton’s respondents felt that succession management should be implemented and could be used as a successful tool, they have not been able to reap the rewards associated with proper implementation of an established plan. Clunies (2007) supports Lampton’s supposition that the leadership in higher administration must be committed to a succession management plan or it will fail at the departmental level. Without the support of supervisors and current leaders, even the strongest plan will not survive or be successfully executed. Educational leadership involved in succession planning across the entire organization ensures the proper enactment of the strategic plan.

By keeping the values, mission, and strategic plan at the center of the organization’s succession management process, the organization, whether it is a corporation or an institution of higher learning, is able to compete in a rapidly evolving environment. If the succession plan is not correctly linked to the strategy that the organization as a whole is pursuing, then the plan is doomed to fail and is thus a waste of money and time. However, if successfully implemented, a succession management plan will ensure that institutions retain and develop their current good employees, and it will also establish guidelines for attracting employees of that caliber throughout the entire organization for the foreseeable future. Clunies (2007) maintains that colleges and universities are continually being forced into a changing environment in which they must adapt in order to compete and survive. He asserts that part of this adaptation process is maintaining an evolving perspective based on introspective questions such as: Are we retaining the caliber of employees desired for our organization, and what types of employees will we need to maintain our business strategy in the future?

IDEAS FOR PRACTICING WHAT WE TEACH

Cembrowski and Costa (1998) emphasize the role that human resources plays in succession planning activities. They believe that the human resources department is responsible for overseeing and providing the information and data for the review process. Clunies (2007) agrees, adding that top-performing employees are generally identified through periodic human resources review meetings, at which plans for their continued development are established. Mahler and Drotter (1986) also consider the review process the most vital component of the succession system. These review meetings allow the organization’s leaders to discuss candidates in an open environment that allows for unified support or criticism. The meetings also allow those in key leadership positions to maintain an understanding of the importance that the succession management process carries. Interestingly enough, processes similar to these are typically practiced in colleges and universities by their tenure boards (Clunies, 2007).

Cembrowski and Costa (1998) conducted a study of leaders at a postsecondary institution in an effort to discover what was most important to their success. Their findings show that the leaders attribute their success to the key role their environment played. The opportunities to grow and learn are most prevalent in scenarios in which employees are given the chance to complete various job duties. This opportunity creates a need and desire to be challenged, which, in turn, produces personal growth and knowledge acquired through practical learning that could not have been gained otherwise. Clunies (2007) states that most institutions of higher learning have not implemented the practice of job rotation at the senior management level. This observation supports the premise that most colleges and universities are not successfully implementing succession management plans that provide the most benefit for their employees.

American Journal of Management vol. 13(2) 2013 75

Job rotation plays a key role in the success of employees throughout their time at the institution. Clunies (2007) concluded that, for developmental purposes, employees should be rotated among several organizational positions designed to fully educate them on the various aspects of the business. This practice should include individuals experiencing differing positions even if the timing is not ideal for short-term business goals. The important inference here is that the character and lessons learned through these intentional movements will vastly outweigh the slight drawbacks that could be associated with marginally less informed decision making in the short term. Although the employee might not make the same decision as their predecessor, they will be forced to push the limits of their current capabilities in order to make the best decision possible. In the long run, this will play a vital role in the success and cross-layering of the intellectual property of the institution’s employees.

Clunies (2007) has postulated an interesting idea. Businesses and corporations have shown the advantages that arise from challenging employees through new job tasks. The challenges faced by high-potential candidates often require the same skill set needed to be a successful executive officer. Imagine, however, this concept paralleled with the practices of institutions of higher learning. A vice president of finance will gain a very interesting viewpoint when called upon to complete the tasks of the vice president of student services. Likewise, the vice president of academic affairs will gain even more insight when asked to perform the duties of the vice president of development. If this chain of job rotation is continued for an established period of time, the lessons learned and understanding acquired through these positions would be of utmost value when the time comes to consider the next president of the institution.

Success has been described as a level of achievement in life that is the direct result of education through learning and growth provided by the challenge of new experiences (Cembrowski & Costa, 1998). Often, because the main objective of universities and colleges is to provide for the education of their students, the institutions’ own employees are not viewed as needing additional education. This neglect results in faculty and staff creating their own plans of action to obtain more education and become better employees. A self-motivated, rather than institution-directed, education is the norm for employees on most campuses.

In addition to identifying the existence of self-motivated learning in higher education, Cembrowski and Costa (1998) also discovered that certain opportunities increase the potential for positional progression up the ladder in universities and colleges. Their study indicates that job rotations, formal training plans, and administrative internship programs are viewed as the most rewarding mechanisms universities make available to faculty. Dilworth (1995) supports the theory of job rotation and cross-position learning, especially over extended periods that allow rotating employees to be held accountable for their decisions. Formal training programs allow for a systematic approach to learning theory.
However, the value of such a program cannot be fully realized without the addition of a mentorship or administrative internship program. The mentorship program allows for direct learning from an accomplished, successful advisor. An alternative is an administrative internship program, which is similar in style to the job rotation format: the employee is given an established period of time to experience the responsibilities of a new job. The internship program, however, allows lower-tier staff the opportunity to gain favor and credibility by performing administrative tasks that are deemed much more challenging than their prior position provided (Cembrowski & Costa, 1998). Employee empowerment and education are key because they allow trust to be established and further developed, leading to the desired synergy that has become a predominant factor in leading organizations. When successful means of employee education are established, university departments will likely work in sync and be able to accomplish far greater feats than they could individually.

EVALUATING OUR PRACTICES

However, even the best succession management plans will fail if the institutions that practice them do not involve the correct employees. These methods of improving the quality of the faculty and staff will not reach their greatest potential if the wrong candidates are put through the program. There is a value in choosing the correct training pool that cannot easily be measured. Although external candidates are always an option when considering the succession management of a position, executive search firms have shown that there is a distinct advantage to an internal versus an external candidate. Barden (2008), vice president of an academic executive search firm, emphasizes that an external candidate may be more accomplished and have greater experience with respect to a given position, but the working knowledge of the institution that an internal candidate brings to the table is an incredible tool and a frequent deciding factor. The intricacies of an internal candidate’s perspective oftentimes lead to a more complex vetting process.

Through the proper implementation of a succession plan, institutions of higher learning will be able to push their academic and organizational excellence to new levels. Utilizing this strategic process, colleges and universities will be able to attain a degree of accomplishment that can only be efficiently derived through employee self-motivation and internal support. Through these facilitating mechanisms, colleges and universities alike will be able to continue to advance their own capabilities, and the means of measuring these advancements will be evident in the growth of the students who benefit. The education of future leaders relies on the capacity of mentors to pass on knowledge and experience gained through job rotation, formal training programs, and employee education. Only by making the most of the faculty and staff will institutions be able to capitalize on their available tools for improvement and growth in their industry. By practicing in the boardrooms the methods they are teaching in the classrooms, universities will become a haven of self-improvement and reciprocal education.

Scott-Skillman (2007) restates the importance for institutions of higher learning of establishing succession management plans by showing that a shortage of effective leaders in education looms in the foreseeable future. Clunies (2007) asserts that an important aspect of this process is benchmarking against similar institutions. Establishing these benchmarks will allow for a comparison through which the institution can measure its growth and success rate beyond the direct results of the students it produces. In the absence of such comparisons, institutions leave themselves vulnerable to not fulfilling their potential, which is a mistake of dire consequence. By not performing to the best of their ability, colleges and universities compromise their integrity as bodies of higher learning, and, as Scott-Skillman (2007) points out, they jeopardize their existence. This risk of failure is due to the direct relationship between an institution’s existence and current and future students’ perception of the value of that institution’s academic excellence.

CONCLUSION

Simply enough, the enrichment of a university’s employees is directly reflected in the success of that university, because students enroll where they expect a high-quality education. A succession management plan is the most effective way of attaining this desired outcome. In order to operate as modern organizations, institutions of higher learning must heed the insights and follow in the footsteps of larger private-sector businesses with respect to their administrative practices; the benefits will become evident as succession management becomes an established practice in higher education. The future success of quality leadership at institutions of higher education largely depends on the implementation of a succession management plan. With careful selection of the personnel included in the plan and appropriate training of those personnel, the university or college will initiate a positive and sustaining program designed to carry the organization to greater heights and to establish the confidence of university personnel in the future leadership of the institution. By following the instructions they issue to their business students, institutions can demonstrate belief in their own educational practices, as well as provide an example to their graduates and to other institutions of the benefits of succession planning.


REFERENCES

Barden, D. (2008). The Internal-Candidate Syndrome. The Chronicle of Higher Education. Retrieved September 17, 2011, from https://chronicle.com/article/The-Internal-Candidate/45809/

Cembrowski, B.J., & Costa, J.L. (1998). Succession Planning for Management Staff at a Western Canadian Postsecondary Technical Institute. Retrieved on October 12, 2011, from http://www.eric.ed.gov/ERICWebPortal/search/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchVa lue_0=ED420219&ERICExtSearch_SearchType_0=no&accno=ED420219

Clunies, J. (2007). Benchmarking Succession Planning & Executive Development in Higher Education: Is the academy ready now to employ these corporate paradigms? Academic Leadership Live: The Online Journal. Retrieved October 19, 2011, from http://www.academicleadership.org/emprical_research/Benchmarking_Succession_Planning_Executive_ Development_in_Higher_Education.shtml

Dilworth, R. (1995). The DNA of the Learning Organization. Learning Organizations. Portland, OR: Productivity Press.

Hirsch, J. (2001). Southwest CEO to Step Down; Successor Named. LA Times. Retrieved September 17, 2011, from http://articles.latimes.com/2001/mar/20/business/fi-40040

Kesler, G. (2002). Why the Leadership Bench Never Gets Deeper: Ten Insights About Executive Talent Development. HR Planning Society Journal, 25, (1), 1-28.

Lampton, J. (2010). Management educators can bring awareness to the need to plan for succession. Journal of Instructional Pedagogies. Retrieved October 12, 2011, from http://www.aabri.com/manuscripts/10551.pdf

Mahler, W., & Wrightnour, W. (1973). Executive Continuity: How to build and retain an effective management team. Homewood, IL: Dow Jones-Irwin.

Mahler, W., & Drotter, S. (1986). The Succession Planning Handbook for the Chief Executive. Midland Park, NJ: Mahler Publishing Co.

Scott-Skillman, T. (2007). Succession Planning - A Must for Colleges and Universities! iJournal. Retrieved October 20, 2011, from http://www.ijournal.us/issue_15/ij_15_00_TOCframe.html


Meeting the Challenge of Assurance of Learning: Perspectives from Four Business Schools

Jane Whitney Gibson Nova Southeastern University

Regina A. Greenwood Nova Southeastern University

Bahaudin G. Mujtaba Nova Southeastern University

Shelley R. Robbins Capella University

Julia A. Teahen Baker College Online

Dana Tesone University of Central Florida

Six professors from four different universities discuss the strategies their business schools are currently using to capture and utilize assurance of learning data. The schools represent public and private as well as not-for-profit and for-profit institutions, and they uniformly document the rigor and deliberateness with which assessment of learning is now being conducted. General recommendations are extrapolated to help other business schools that might be at an earlier stage of developing their assurance of learning protocols.

INTRODUCTION

Assessment can be seen as the process of establishing and understanding the learning outcomes that meet learners’ needs, assessing students against factual evidence to determine whether or not they have achieved those outcomes, and documenting the results, with the purpose of continually improving the process of teaching, learning, and learner assessment. It is a well-documented challenge that accrediting bodies such as AACSB and the regional accrediting organizations have been increasing their requirements to document assessment of learning. Conferences, research, and workshops continue to be dedicated to this subject.

Given the currency of this subject, it seems a good time to look at what four business schools in various parts of the country (South and central Florida, Michigan and Minnesota) are doing to ensure that students are achieving their learning goals and that this achievement can be documented and used for quality improvement purposes. The authors, all senior business faculty members, describe through personal experience the approaches their schools are using in designing and implementing their assurance of learning programs. Their schools represent graduate and undergraduate business education at both public and private universities in the not-for-profit as well as for-profit segments of higher education. After sharing specifics of their respective schools’ assurance of learning (AOL) efforts, the authors propose recommendations for business schools that may be at an earlier point of AOL planning and development. We begin by taking a closer look at what constitutes assurance of learning.

WHAT IS ASSURANCE OF LEARNING?

According to The Association to Advance Collegiate Schools of Business (AACSB), Assurance of Learning (AOL) “includes the interpretation and intent of the business assurance of learning standards” (http://www.aacsb.edu/accreditation/business/standards/aol, para. 1). AOL programs help institutions answer the questions: “Do students achieve learning appropriate to the programs in which they participate? Do they have the knowledge and skills appropriate to their earned degrees?” (http://www.aacsb.edu/accreditation/business/standards/aol/defining_aol.asp, para. 8). Palomba and Banta (1999) more specifically define outcomes assessment as a systematic process of collecting, reviewing and using program information in order to improve student outcomes.

AACSB first suggested a focus on outcomes assessment in 1991. At that time, the outcomes assessment movement was immature, and a number of indirect assessments, including outside evaluations such as employer reports and alumni surveys, were considered appropriate. By 2003, however, the importance of outcomes assessment and the maturity of assessment processes had developed to the point that direct, specific measurement systems within an institution were necessary. AACSB now requires direct assessment, as do most robust AOL programs, in order to determine specific learning goals and collect relevant assessments of learning. Such systems also allow institutions to improve curriculum by providing fact-based information about how well students are learning or are not learning (AACSB, 2007).

AOL has become increasingly important to institutions since it became the third category of accreditation standards required for business accreditation by the AACSB. AOL now shares equal importance with Strategic Management standards and Participant standards as a three-pronged mechanism to determine the accreditation-worthiness of institutions providing business programs. The AOL standards rest on two principles: accountability and continuous improvement. Through the AOL process, both principles can and should be operationalized.

Trends in AOL today include an emphasis on how to assess functional areas of curricula, that is, examining what should be required in different disciplines. Different areas of study, such as economics or human resource management, are suggesting measures appropriate for their disciplines. Also, efforts to refine the measurements used within AOL programs are gaining momentum. A typical AOL process should include (AACSB Eligibility, 2012):

1. A listing of student learning goals and objectives
2. Aligning the goals with the curricula
3. Establishing measures and means for assessing learning
4. Collecting, analyzing and distributing the information
5. Using the information in a program of continuous improvement.
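
To make the first three steps concrete, the sketch below is a minimal, purely illustrative Python example (not drawn from AACSB materials or from any of the schools discussed here) of one way to represent learning goals, their alignment to courses, and the measures used to assess them; the goal descriptions, course codes, and helper function are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Measure:
    name: str    # e.g., an embedded exam question set or a rubric-scored assignment
    course: str  # course in which the measure is administered

@dataclass
class LearningGoal:
    goal_id: str
    description: str
    courses: List[str] = field(default_factory=list)       # curriculum alignment (step 2)
    measures: List[Measure] = field(default_factory=list)  # assessment means (step 3)

def unmeasured_goals(goals: List[LearningGoal]) -> List[str]:
    """Return the IDs of goals that still lack an assessment measure."""
    return [g.goal_id for g in goals if not g.measures]

# Hypothetical example: one program goal aligned to two courses, measured in one of them.
goals = [
    LearningGoal(
        goal_id="PG1",
        description="Perform effectively in high-performance teams.",
        courses=["MGT5012", "MGT5105"],
        measures=[Measure(name="Team case analysis rubric", course="MGT5105")],
    ),
    LearningGoal(goal_id="PG2", description="Communicate professionally in writing."),
]

print(unmeasured_goals(goals))  # -> ['PG2']: a gap to close before data collection (step 4)
```

A structure of this kind also makes the later collection and analysis steps easier, since results can be attached to each measure as terms are completed.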

AACSB endorses three different approaches to AOL. Assessment can be based on student selection (called selection assurance), course-embedded measurements (tied to the program’s learning goals), or stand-alone testing or performance (AACSB, Eligibility, 2012). No one system is mandated, but assuring learning is mandatory. Most often, assurance of learning is a broad-based, systematic effort which requires the school to look at overall program outcomes and how various courses relate to and measure those outcomes. Therefore, we can talk about AOL initiatives at the macro (program) or micro (course) level. We start by looking at how Baker College Online approaches the design of overall program-wide AOL.

DESIGNING PROGRAM WIDE ASSURANCE OF LEARNING

Baker College Online offers more than 40 certificate, associate, bachelors, masters, and doctoral programs in business, health, human services, and computers and is part of Baker College, the largest private college in Michigan with more than 40,000 students worldwide. Accredited by the North Central Association of Colleges and Schools, Baker College Online was selected in 2013 as a Military-Friendly College by Military Advanced Education. One of the authors serves as President of Baker College Online and advises that proper assessment begins at the program development level to ensure program learning outcomes are tied to desired graduate achievement as well as the institutional mission and purposes. Educational institutions must be committed to assessment and evaluation at all levels in the organization. They must be able to answer the following two questions regarding graduates: What are our students able to do when they graduate? How do we know that they are able to do what we say they can do? Program design is critical in developing an assessment plan. There are four stages of program level design to assure learning outcome achievement at the program level, across all program delivery methods. The stages (see Figure 1) include: Pre-design work, identification of desired results, determination of acceptable evidence, and planning learning experiences.

Pre-Design Work

Program level design at Baker College is aligned with the Mission, Purposes, and Institutional Student Learning Outcomes through the Understanding by Design (UbD) framework (Wiggins & McTighe, 2005). The program design process at Baker College begins with the collection of current and pertinent data regarding the program, whether new or existing. The information collected consists of items such as career data, data to support the need for the program, linkage to mission and purposes, the intended audience for the program, and the level of the program (certificate, associate, bachelor, master’s, or doctorate). The goal of this phase of development is to determine the overall goal of the program. A committee of faculty, deans, instructional designers, career services advisors, advisory board members, professionals from the field, and assessment experts meets to define the goal of the program. The committee then brainstorms a broad list of topics to include in the program, and committee members are asked to think outside the box to create their “dream program.” Notes from the meetings in this stage are kept to be used in later phases of program development.

FIGURE 1 PROGRAM LEVEL DESIGN PROCESS

Identification of Desired Results

The next phase of program development defines the key learning outcome requirements necessary for developing an appropriate knowledge base regarding the subject area being studied. Baker College uses Understanding by Design (UbD) principles in the development of programs and courses (Wiggins & McTighe, 2005). The process begins by asking committee members to complete the Baker College UbD Design Template. The template contains a list of questions to lead the committee to define the goals and “Big Ideas” of the program. Wiggins and McTighe (2005) define “Big Ideas” as “broad and abstract, represented by one or two words, universal in application, timeless – carry through the ages, and represented by different examples that share common attributes” (p. 69). Specifically, committee members are asked to answer the following questions:

1. What key knowledge and skills will students acquire in the program? What should students know?
2. What will students be able to do at the end of the program?

Information from the answers to these two questions is used to write program outcomes. Once the committee has agreed to all responses on the template, instructional designers convert the information into a rough draft of the program learning outcomes. The program outcomes must align with the Baker College Mission, Purposes, and Institutional Student Learning Outcomes. The draft program learning outcomes are reviewed by the committee, adjusted as appropriate, and then finalized by the committee.

Determination of Acceptable Evidence

The next stage of the program development process is key to the assessment strategy. This phase is critical because it defines how learning outcome achievement will be measured at the program level. This process answers the question, “How are you going to know that students are successful at achieving the program outcomes?” Assessment experts are highly involved in this phase of development to define a capstone assessment. Answers to the above question include items such as the ability of students to find a job related to the program, the ability to be promoted in their current job, or the ability to move into the career field or on to a graduate-level program. The assessment plan developed at the program level provides general ideas of the assessment types that will provide evidence of successful completion of the program. The tools are designed and developed later based on the information gathered in this phase. Assessments include longitudinal assessments for transfer and assessments that measure student success at the end of the program. Assessments are formative at strategic points during the program to assess student learning and progress toward program outcomes. Once the plan is approved by the program development committee, instructional designers map the assessments to each of the program outcomes. The final assessment design plan is then approved by the committee.
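
As an illustration of this mapping step, the short Python sketch below (a hypothetical example under assumed data, not Baker College’s actual process or tooling) inverts an assessment plan and flags any program outcome that lacks either a formative checkpoint or an end-of-program (summative) measure; the outcome and assessment names are invented.

```python
from collections import defaultdict

# Hypothetical assessment plan: each assessment lists its type and the
# program outcomes (POs) it provides evidence for.
assessment_map = {
    "Week 4 case analysis": {"type": "formative", "outcomes": ["PO1", "PO3"]},
    "Midpoint portfolio":   {"type": "formative", "outcomes": ["PO2"]},
    "Capstone project":     {"type": "summative", "outcomes": ["PO1", "PO2", "PO3"]},
}

program_outcomes = ["PO1", "PO2", "PO3", "PO4"]

# Invert the map: which assessment types cover each outcome?
coverage = defaultdict(set)
for name, info in assessment_map.items():
    for po in info["outcomes"]:
        coverage[po].add(info["type"])

# Flag outcomes missing formative checkpoints or an end-of-program measure.
for po in program_outcomes:
    missing = {"formative", "summative"} - coverage[po]
    if missing:
        print(f"{po}: no {', '.join(sorted(missing))} assessment mapped")
# -> PO4: no formative, summative assessment mapped
```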

Planning Learning Experiences

The last stage in the program development process defines topic areas that need to be addressed in the program. The information gathered leads to the identification of courses, course student learning outcomes, and enabling objectives. This stage is a bit messy at first as the committee works to categorize the Big Ideas listed in the first stage of the process. Topics are grouped together into a rough outline and eventually worked into a sequence of courses. The committee identifies new and existing courses to be used and sequences the courses in the order they will be taken in the program. This includes the identification of prerequisite courses. The program plan is then reviewed and approved by the development committee. Once approved, individual courses are scheduled for development. As with program development, the course development process also uses Understanding by Design (UbD). In addition, Quality Matters is used as a core standard for course development: “Quality Matters (QM) is a faculty-centered, peer review process that is designed to certify the quality of online and blended courses. QM is a leader in quality assurance for online education and has received national recognition for its peer-based approach and continuous improvement in online education and learning” (Quality Matters, 2013, para. 2). Once courses are designed, approved, and launched, data must be collected, analyzed and used in a consistent way to assure continuous improvement.

Collecting and Using AOL Data

Baker College recognizes that the use of effective assessment in evaluating and improving teaching and learning is critical. “In fully online environments, multiple measures, formative and summative assessments over the course timeline, and electronic interaction with the learner are sound assessment practices” (Milam, Voorhees, & Bedard-Voorhees, 2004, p. 77). As such, Baker College Online employs a variety of assessments, which include standardized finals and research projects, standardized grading rubrics, course and program portfolios, student and faculty interaction/written communication, pre- and post-tests, certification exams, and employer surveys (90 days after student placement). “A major advantage of e-learning environments is that assessment activities can be embedded unobtrusively into the interactive structure of the program themselves” (Reeves, 2002, 26-27). Many of the standardized assessment tools at Baker College are also used for student evaluation to determine student grades. Additional assessment activities may include course and program content reviews, learning outcome assessment for courses and programs, textbook evaluation, satisfaction surveys for both courses and services, and course and program reviews conducted by both faculty and students. Retention statistics, successful completion statistics, graduation rates, and career placement rates are also evaluated to determine the effectiveness of the College’s academic programs.

Waypoint Outcomes (http://waypointoutcomes.com) is one data collection tool the College uses to improve the quality of feedback to students on assessment activities as well as the quality of data analysis used by faculty and administration. Waypoint Outcomes can be used to create rubrics and surveys that provide documentation on student learning, and significant improvements in the quality of assessment reports have been achieved through this system. The data are summarized and given to faculty committees for their review. Assessment reports are also shared with the program development committee, which analyzes them to determine whether changes to the program are necessary. The various faculty committees use the assessment reports to evaluate programs and courses. All curriculum change requests require documentation derived directly from assessment data and reports. Continuing our focus on individual courses, we next look at how one school has designed an AOL strategy to assess every course on a continuing basis.

COURSE LEVEL ASSURANCE OF LEARNING

The Wayne Huizenga School of Business & Entrepreneurship at Nova Southeastern University in Fort Lauderdale, FL serves more than 6,600 students in undergraduate, master’s and doctoral programs. At the undergraduate and master’s levels, a given course can be offered in four different formats: weekend, day, evening, and online (Gibson, 2011). Further, weekend courses are offered at distant locations, including five different service centers in the State of Florida plus international locations. Individual undergraduate courses are taught almost exclusively by single instructors, whereas master’s courses, until recently, were often taught in the Lead Professor Model. In the case of individually taught courses, there was an appointed “Key Faculty” member, a role that has recently been replaced by a Course Academic Leader (CAL).

All undergraduate courses are assigned to individual instructors in a traditional model. These instructors include full-time faculty and adjuncts who serve on a course-by-course basis. In each case, a Course Academic Leader has been appointed to coordinate the gathering of Assurance of Learning (AOL) data. Until the summer of 2012, master’s courses were either taught individually with the use of a Key Faculty member or as a team-taught Lead Professor course, where the Lead Professor (LP) had been commissioned to establish a fully standardized course taught by a team of Instructional Specialists. The LP interacted with the students, provided lesson plans for the Instructional Specialist team, and was responsible for a complementary video package and fully standardized deliverables and assessments complete with grading rubrics, the latter of which were designed to provide consistency in grading (Donathan & Tynann, 2010). The use of rubrics has been well established, including a survey of AACSB deans, 92% of whom said that using rubrics to grade written assignments was a part of their assessment protocol (Kelly, Tong & Chois, 2010).

At present, the format for master’s classes has transitioned back to a less standardized course delivery system using mostly single instructors while maintaining the standardized measurement of course competencies. Each course is assigned a Course Academic Leader whose responsibilities include coordination of AOL activities. It should be noted that the standardized Lead Professor format was very instrumental in creating the standardized measures, including rubrics, which are still used in course-level AOL today.

Regardless of whether courses are individually or team taught, traditionally formatted or standardized, the assessment of learning (AOL) in the Huizenga School takes place at the course level on an every-section, every-term basis.

Course Level Assessment

As with most schools, the various programs at the Huizenga School have clearly established Program Goals, such as “Examine the importance of leading and influencing others, maintaining collaborative business relationships, exercising appropriate interpersonal skills, and performing effectively individually and in high-performance teams.” These Program Goals are then mapped to individual Course Competencies. A three-step AOL process is used to assure that assessment of learning is carried on in a comprehensive, continuous fashion: (1) utilize standard measures of course competencies, using rubrics when appropriate; (2) collect data across class sections each term; and (3) analyze and document results with an eye toward continuous improvement.

In Step One, common measures of course competencies are applied in each and every section of the course. These common measures are designed by the Course Academic Leader with help and agreement from other full-time faculty who may regularly teach a particular course. The department chair gives the stamp of approval at the end of this process, and the AOL Director or Assistant Director assures that the course competencies, as developed, successfully map back to the overall Program Goals. Once finalized, these measures are required to be used by all faculty teaching the course. Beyond these measurement constraints, the individual faculty member has academic freedom to conduct the class however he or she desires.

Step Two occurs at the end of the course. The Course Academic Leader is responsible for collecting from the individual professors teaching the course the data related to the measurement of the course competencies. It is then the responsibility of the Course Academic Leader to consolidate these statistics and share the results with the team of instructors in preparation for holding an end-of-term meeting. Figure 2 shows an example of a consolidated report for one class with a total of four sections for a specific term. The far right column shows the average achievement score across the four sections.

FIGURE 2 AOL TERM SUMMARY CHART (AOL MEASURES FOR A TERM FOR ONE COURSE, 4 SECTIONS)

CC#    Measure                                        5W1    5W2    1EE    1DY    AVE
CC#1   Measure One: Final, Q 1-4                      .73    .73    .88    .82    .79
CC#1   Measure Two: Ref. Pap. 1, Q 1 OR Wk. 1 DQ      .69    .86    .84    .87    .82
CC#2   Measure One: Midterm, Q 1-4                    .83    .77    .79    .75    .79
CC#2   Measure Two: Ref. Pap. 1, Q 2 OR Wk. 2 DQ      .85    .89    .87    .87    .87
CC#3   Measure One: Midterm, Q 5-8                    .68    .72    .70    .79    .72
CC#3   Measure Two: Ref. Pap. 2, Q 2 OR Wk. 5 DQ      .73    .90    .69    .83    .78
CC#4   Measure One: Midterm, Q 9-12                   .73    .68    .80    .62    .71
CC#4   Measure Two: Ref. Pap. 2, Q 3 OR Wk. 8 DQ      .98    .86    .73    .75    .83
CC#5   Measure One: Midterm, Q 13-15                  .71    .71    .91    .75    .77
CC#5   Measure Two: Ref. Pap. 1, Q 3 OR Wk. 4 DQ      .96    .89    .83    .87    .89
CC#6   Measure One: Final, Q 5-6                      .89    .90    .83    .82    .86
CC#6   Measure Two: Assignment 1                      .98    .90    .87    .89    .88
CC#7   Measure One: Assignment 2                      .84    .85    .82    .84    .84
CC#8   Measure One: Final, Q 7-10                     .84    .70    .82    .71    .77
CC#8   Measure Two: Ref. Pap. 2, Q 1 OR Wk. 7 DQ      .93    .90    .79    .89    .88
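
For readers who want to reproduce the consolidation arithmetic behind the AVE column, the minimal Python sketch below simply averages the four section scores for one measure (shown for the first row of Figure 2); it is an illustration, not the Huizenga School’s actual reporting tool, and the function name is hypothetical.

```python
# Per-section achievement proportions for one measure of course competency CC#1,
# copied from the first row of Figure 2 (sections 5W1, 5W2, 1EE, 1DY).
section_scores = {"5W1": 0.73, "5W2": 0.73, "1EE": 0.88, "1DY": 0.82}

def consolidate(scores: dict[str, float]) -> float:
    """Average a measure's per-section scores, as in the AVE column of Figure 2."""
    return sum(scores.values()) / len(scores)

print(f"CC#1, Measure One (Final, Q 1-4): AVE = {consolidate(section_scores):.2f}")
# -> CC#1, Measure One (Final, Q 1-4): AVE = 0.79
```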


Step Three requires that the data gathered from individual course sections be analyzed by the faculty who taught the course, with an eye toward identifying best practices and assuring continuous quality improvement. These end-of-term meetings are held every term for every course and are hosted by the Course Academic Leader, who then finalizes the process by preparing a standardized End-of-Term Report, which is filed on the Assurance of Learning website. It should be noted that the comprehensiveness of this “every section, every course, every term” system of assessment of learning assures that all faculty, whether full-time or adjunct, participate in the assurance of learning process. The benefits of this shared responsibility for AOL are realized most fully in the end-of-term meetings designed to close the loop.

CLOSING THE LOOP THROUGH A TEAM APPROACH

It takes a group of dedicated team members to effectively, efficiently and continuously assess learning outcomes and make improvements, thereby starting the assessment process, analyzing the data as well as the results, and ending the process through actual improvements. Figure 3 presents an overview of the Closing the Loop model practiced at NSU’s business school where the majority of courses include six to eight major course competencies which are assessed using one or two different assessment methods each. For example, a specific competency might be tested using an individual exercise or quiz as well as through the midterm exam or a case analysis to determine the extent to which students actually achieved the specific outcome.

FIGURE 3 CLOSING THE LOOP MODEL AT THE HUIZENGA SCHOOL

Step 1: Identify assessment measures for each course competency (CC).

Step 2: Build tests, cases, etc. to measure CCs.

Step 3: Implement assessment measures for assurance of learning (AOL) documentation.

Step 4: Evaluate results and make improvements.

Step 5: Implement the recommended changes for next semester.

Step 6: Report results and suggestions for “Closing the Loop”.

Each individual student’s and each section’s results are aggregated to determine student achievement of program goals (see Figure 2). Therefore, the focus is on what students will take with them as they complete the course. Faculty are required to ensure that the measures they develop and implement are “portable,” which means that anyone teaching the course in any available modality should be able to use these measures with the same validity and reliability.

Within one to two weeks of the end of the term, the teaching faculty members all meet, face-to-face, through conference calls or synchronized discussions, and/or regular email communication, to discuss the results as well as the lectures and presentations that worked well and those that need to be reevaluated, and to make changes based on their experiences and factual data. These meetings provide opportunities for the teaching faculty team to present their experiences and hear best practices from colleagues. Together they review AOL measures and recommend changes if needed. The team approach to closing the loop has not only led to consistency of outcomes achievement among the various faculty members teaching the course in various modalities across campuses, but it has also resulted in better communication and full engagement of all adjunct faculty members in improving the course and overall curriculum. The team approach has been especially effective in online or blended classes, where online classrooms facilitate sharing of video, lectures, podcasts, cases, online links and resources, etc. In general, online classes seem to be particularly conducive to collecting assessment data.

STATE SCHOOLS, ONLINE COURSES, AND AOL

The motivation to engage in academic assurance initiatives is evident for progressive private institutions, in that their reputations are contingent upon the quality of graduates’ performance in professional roles. This has not always been an emphasis for publicly funded universities. These institutions were historically protected by state Boards of Regents (BOR) and operated with a mentality of immunity from third-party scrutiny. However, this scenario began changing in the 1980s and 1990s in the states of Texas and Florida (Ashworth, 1994), where one of the authors is a professor for a business school program in hospitality management.

The Florida state BOR was disbanded by the legislature at the end of June 2001 and replaced by a new, politically appointed Board of Governors (BOG). In 2002, the newly established state BOG announced the intention to require comprehensive standardized testing for all state school baccalaureate students. The articulated model was similar to the Florida Comprehensive Assessment Test (FCAT) exams required of public secondary school students for grade advancement and graduation. The collective group of college presidents proposed centralized academic learning assessment initiatives for every undergraduate degree program as a strategy to avert the imposition of the testing mandate. The BOG agreed to require Academic Learning Compacts (ALCs) for each undergraduate degree program. The end result was the requirement for each program to file annual academic learning plans and to report prior-year results to a centralized University Assessment Committee (UAC) for review and approval.

The ALC standard requires each program to report a minimum of 8 learning outcomes with at least two measures per outcome. Outcomes and measures for critical thinking and communication must be included in addition to discipline-specific areas within each ALC set. The UAC is comprised of faculty members who are professionally trained specialists in the field of academic assessment and testing. Graduate degree programs are also required to submit to this process in anticipation of future state board requirements. The assessment process continues to evolve and will eventually require reporting compliance for each individual course contained within a degree program. Hence, a sense of urgency will soon exist to train all faculty members to report learning outcomes and measures on a course-by-course basis. Currently, the responsibility for assessment plans and results falls directly upon department chairs and assessment coordinators. A small number of faculty members occasionally provide survey data for results reporting purposes within most colleges.

Theoretically, the program assessment learning outcomes and measures drive each course with the intent to enhance discipline-specific knowledge as well as critical thinking and communication skills. This equates to developed knowledge, skills, abilities and attitudes for professional training programs such as business and hospitality management, in other words, the competencies required for the effective practice of professional management. Management practice consists of diagnostics and interventions concerning production systems. This suggests that on short notice, instructors will be asked to report evidence of actual performance for all learning activities contained within a course syllabus for every course that has been taught within a degree program.

Documenting Online Course Activities
In essence, a completed online course is a database. The backed-up course contains a repository of learning activity information from which queries may extract specific data for reporting purposes. Course designers are aware that the course content or home page provides a road map of structured information. As with any course, the syllabus, texts, PowerPoint slides, and lecture materials are readily available, except that a web course may contain, or at least link to, electronic versions of these documents. For the purpose of articulating teaching effectiveness, an instructor may provide a summary report of a self-audit, or permit an external auditor to explore the activities linked to the content bar in the classroom (a minimal sketch of such a self-audit appears at the end of this subsection). In this case the auditor will be able to review posted periodic announcements along with timelines to determine how effectively the instructor communicated course expectations.

In most cases, assessment learning outcome measures will focus on metrics associated with class testing and written assignments. Exam questions reside within an ‘assessment’ database for most online courses. Each test is extracted from the database in the form of question sets from which randomized test questions and answers (in the case of multiple-choice versions) appear to each student. An auditor may view the entire database and question sets, as well as preview exam samples and grade distribution reports for every completed test. Written assignments are retained in an ‘assignment’ database and may also be viewed by an auditor. Any experienced online course instructor is aware that the essence of a course occurs in the form of discussion board group interaction. The discussion board tool provides an archived transcript of every interaction that occurred throughout a course, which may be easily tracked during an audit. The auditor may track the discussion view by week and by participant, and summary tracking statistics are readily available to determine overall levels of interactivity and grading distributions. Most current online courses include streaming video lectures that may reside on a separate university server or be posted to a public access forum; the content page of the course enables the auditor to view each video lecture. Finally, most courses contain an embedded ‘grade book’. Auditors may view statistics related to the grade distribution for each assigned activity, as well as identify the distribution of total course grades with a click of a mouse.

While providing AOL data to faculty, administration, and accreditors is an obvious requirement in today’s business schools, a less common mandate is to provide such information to the public in a way that the public can digest and use to assess the relative effectiveness of various academic programs. Capella University is one school that has distinguished itself in providing this type of public information.
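Before turning to that public-facing example, the following minimal Python sketch makes the self-audit idea concrete. It is offered only as an illustration: the SQLite export and the table and column names (grades, discussion_posts) are assumptions for the example, not the schema of any particular learning management system or of the courses discussed here.

# Minimal self-audit sketch under an assumed schema. Suppose the backed-up
# course has been exported to a SQLite file containing hypothetical tables
# grades(student_id, activity, score) and discussion_posts(student_id, week, post_id).
import sqlite3

def audit_course(db_path):
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()

    # Grade summary for each assigned activity.
    cur.execute("SELECT activity, COUNT(*), AVG(score) FROM grades "
                "GROUP BY activity ORDER BY activity")
    print("Grade summary by activity:")
    for activity, n, mean_score in cur.fetchall():
        print(f"  {activity}: n={n}, mean score={mean_score:.1f}")

    # Discussion-board interactivity: posts per participant per week.
    cur.execute("SELECT week, student_id, COUNT(*) FROM discussion_posts "
                "GROUP BY week, student_id ORDER BY week, student_id")
    print("Discussion posts by week and participant:")
    for week, student_id, n_posts in cur.fetchall():
        print(f"  week {week}, student {student_id}: {n_posts} posts")

    conn.close()

if __name__ == "__main__":
    audit_course("backed_up_course.db")  # hypothetical course export file

An auditor could run comparable queries against announcements, question sets, or video viewing logs, depending on what the particular export actually contains.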

COMMUNICATING AOL TO THE PUBLIC

Capella University, founded in 1993, became the first online and first for-profit university to receive the Council for Higher Education Accreditation (CHEA) Award for Outstanding Institutional Practice in Student Learning Outcomes in 2010 (Pearce and Offerman, 2010). Capella was driven to become an outcomes-based university for a variety of reasons. Initially, the idea was sparked by Capella’s participation as a charter member of the Academic Quality Improvement Program (AQIP). Because the institution primarily serves adult learners, who are extremely outcomes-oriented, an outcomes-based approach made sense. Learning outcomes for programs are identified based on the skills and knowledge needed to be successful in the workplace, as well as the standards of any professional organizations in the field (Pearce and Offerman, 2010).

Eventually, Capella focused its efforts not only on developing a unique outcomes-based model for the institution, but also on communicating the results to accrediting organizations, students, potential students, and the general public through its website, Capella Results (http://www.capellaresults.org). Capella has been able to do this effectively in part because it is a fully online institution with access to large amounts of data about the learning process and the ability to mine this data and develop continuous improvement processes informed by it.

The Advantages of an Online Data-Rich Environment
Because Capella University is solely an online university, the institution is able to capture every interaction that is observable and reportable, and it uses data to “understand program health, learning effectiveness, and student success in ways we did not fully imagine 10 years ago” (Pearce and Offerman, 2010). This enables Capella to assess student and faculty performance at a high level of detail. Curriculum and courses are designed around meeting overall program outcomes, and these outcomes are then assessed at the undergraduate and master’s levels through performance in the capstone course in each field.

Capella uses the concept of Action Analytics continually in the process of data mining and the improvement of processes and courses through data analysis. However, it is not sufficient merely to mine the data and report to the public. It is the institutional commitment to making changes based on the data, and to continual improvement of the curriculum, courses, teaching, and achievement of student outcomes, that is key. The school is able to make changes quickly because it is academically centralized, with standardized course content and assessments. The fundamental design of the programs and very specific rubrics are what enable Capella to measure outcomes, publish them, and use the data to assure that learning occurs, as well as to continually improve the achievement of those outcomes.

Transparency of Outcomes and Results
Capella University is a charter member of the Presidents’ Forum Transparency by Design initiative, which in 2008 selected the Western Interstate Commission for Higher Education (WICHE) Cooperative for Educational Technologies (WCET) to provide quality assurance on reporting standards. The resulting website, College Choices for Adults (http://www.collegechoicesforadults.org/), was launched in 2009. Concurrent with these collaborative efforts with other institutions, Capella also developed its own website, Capella Results (http://www.capellaresults.org), to report learning outcomes by degree program, career outcomes, and student satisfaction with the program. Capella is committed to making its results available to the public, as a demonstration of transparency and in order to help establish and maintain credibility in adult online degree programs.

The results related to learning outcomes are displayed on the website by program and degree level. Here is an example from the MS in Human Resources Management program. For these outcomes (aligned with the Society for Human Resource Management curriculum guidebook), measurements include non-performance as well as Basic, Proficient, and Distinguished performance. The program-level outcomes, with the percentage of students exhibiting each performance level (Basic, Proficient, Distinguished), are:

Analyze business theories, markets, and reporting practices in human resource management. (73%, 10%, 17%)
Assess culture and change management in organizations. (17%, 27%, 57%)
Evaluate strategic management and critical thinking in human resource management. (10%, 33%, 57%)
Apply information technology solutions within an organization. (3%, 28%, 69%)
Apply systems design and process management in human resource management. (10%, 23%, 67%)
Communicate effectively. (7%, 28%, 66%)
Analyze ethical and legal responsibilities in organizations and society. (56%, 12%, 32%)
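As an illustration only, and not a description of Capella’s actual tooling, the short Python sketch below shows how a published distribution like the one above could be scanned to flag outcomes where a large share of students remain at the Basic level; the 30% cutoff is an arbitrary assumption for the example.

# Flag outcomes whose Basic-level share exceeds a chosen cutoff.
# Percentages (Basic, Proficient, Distinguished) are taken from the
# program-level outcomes listed above; the cutoff is illustrative only.
OUTCOMES = {
    "Analyze business theories, markets, and reporting practices": (73, 10, 17),
    "Assess culture and change management": (17, 27, 57),
    "Evaluate strategic management and critical thinking": (10, 33, 57),
    "Apply information technology solutions": (3, 28, 69),
    "Apply systems design and process management": (10, 23, 67),
    "Communicate effectively": (7, 28, 66),
    "Analyze ethical and legal responsibilities": (56, 12, 32),
}

BASIC_CUTOFF = 30  # arbitrary threshold for this illustration

def outcomes_needing_review(outcomes, cutoff):
    """Return outcome names whose Basic-level percentage exceeds the cutoff."""
    return [name for name, (basic, _prof, _dist) in outcomes.items()
            if basic > cutoff]

for name in outcomes_needing_review(OUTCOMES, BASIC_CUTOFF):
    print("Review curriculum supporting:", name)

With the figures above, only the first outcome and the ethics-related outcome would be flagged, which is consistent with the curricular adjustments described next.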

Program chairs use this data to make adjustments in curriculum and courses. In particular, the HR program was designed to educate HR professionals not only in HRM, but in how HRM supported the business of the organization. The data set in the example shown suggested a much-needed change in two of the courses in order to improve outcome 1, and to the ethics course to improve outcome 5.

Having looked at some AOL initiatives in both public and private institutions, the authors would like to suggest some best practices for consideration by those who may be relatively new to AOL design and implementation.

SUGGESTED BEST PRACTICES

Based on their experience and observations, the authors believe that a successful AOL program has to be system-wide and all-inclusive. AOL needs to be a part of the culture, and everyone from administration to advisors to faculty should have knowledge of and buy-in to the process. More specifically, we make the following recommendations.

1. An effective AOL program should start from the top and work its way down through the unit. First, program goals should emanate from the mission and vision of the school and program. Program goals may be determined by high-level administration in conjunction with faculty, or they may be developed by faculty, but it is important to have buy-in and support from all levels.
2. There should be one or more assessment professionals guiding the process, but the process must be seen to belong to everyone. AOL is sometimes seen as an onerous chore, and individual faculty are likely to look the other way if they think that the professionals are handling this chore. Assessment and assurance of learning must belong to the faculty and be seen as a key responsibility.
3. Once program goals are designed, communicated, and embraced, courses should be designed, if new, or reviewed, if already existing, in order to map them to program goals.
4. After the curriculum of courses is approved, faculty need to determine course competencies or learning objectives, some of which should be mapped directly to program goals, and specific measures of these course competencies must be determined. Note that not all courses necessarily have to map directly to program goals. A simple curriculum map, sketched after this list, can make these mappings explicit and easy to audit.
5. Faculty and administration alike should have their role in AOL defined as a specific duty, one on which they are assessed. AOL cannot be an afterthought or a function that is undervalued or neglected. It is key to the learning process and the cycle of continuous improvement.
6. As AOL efforts mature, a consistent method of communicating results to a broader audience of constituents needs to be established. Recruiters, for example, can use this information to interest potential students; advisors can use it to help students decide which courses will bring them the most value; and the public can use this information to reflect on the overall effectiveness and reputation of the program or school.
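The curriculum map mentioned in item 4 can be as simple as a table of courses, their competencies, and the program goals each competency supports. The Python sketch below is a hypothetical illustration: the course numbers, competencies, and goals are invented for the example and are not drawn from any of the schools discussed.

# Hypothetical curriculum map: each course lists competencies and the
# program goals each competency supports. All names are invented.
PROGRAM_GOALS = {"critical thinking", "communication", "discipline knowledge"}

CURRICULUM_MAP = {
    "MGT 501": {
        "analyze business cases": {"critical thinking"},
        "write executive reports": {"communication"},
    },
    "MGT 610": {
        "apply core management theory": {"discipline knowledge"},
    },
}

def uncovered_goals(program_goals, curriculum_map):
    """Return program goals not supported by any mapped course competency."""
    covered = set()
    for competencies in curriculum_map.values():
        for goals in competencies.values():
            covered |= goals
    return program_goals - covered

missing = uncovered_goals(PROGRAM_GOALS, CURRICULUM_MAP)
if missing:
    print("Program goals without a supporting course competency:", sorted(missing))
else:
    print("Every program goal is supported by at least one course competency.")

A check like this makes it easy to spot program goals that no course currently assesses, while still allowing individual courses that do not map directly to any program goal.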

CONCLUSION

The authors have provided an experiential perspective from the AOL trenches. They caution that AOL is not for the faint of heart: it is long-term, labor-intensive work that has the potential to validate the educational experiences we provide for our students. It is imperative that all business school stakeholders have an understanding of, and some type of involvement in, this process, whether it is the primary and formative involvement of the faculty, the user perspective of the student, or the evaluative function of the accrediting agencies. The perspectives given here are from four different business schools in both the private and public sectors and from both nonprofit and for-profit institutions. Assurance of Learning is here to stay. The methods used will become more rigorous and the metrics more exacting as the AOL process matures. It is time to get involved in purposeful AOL activities in order to improve our educational outcomes.

REFERENCES

AACSB. (2013). Assurance of Learning. AACSB International – The Association to Advance Collegiate Schools of Business. Accessed May 7, 2013 at http://www.aacsb.edu/accreditation/business/standards/aol.

AACSB. (2012). Assurance of Learning Overview and Intent of Standards. AACSB International – The Association to Advance Collegiate Schools of Business. Accessed April 2, 2012 at http://www.aacsb.edu/accreditation/business/standards/aol/defining_aol.asp

AACSB. (2012). Eligibility Procedures and Accreditation Standards for Business Accreditation. Tampa, FL: AACSB International.

AACSB International Accreditation Coordinating Committee and AACSB International Accreditation Quality Committee. (2007). AACSB Assurance of Learning Standards: An Interpretation. Accessed April 2, 2012 at http://www.uwlax.edu/ba/AOL/docs/2007---AOLPaper-final-11-20-07[1].pdf.

Ashworth, K.H. (1994). Performance-based Funding in Higher Education: The Texas Case Study. Change: The Magazine of Higher Learning, 26, (6), 8-15.

Barnes, B. and Blackwell, C. (2004). Taking Business Training Online: Lessons from Academe. Journal of Applied Management and Entrepreneurship. 9, (1), 3-20.

Capella University. (2012). Capella Results. http://www.capellaresults.org. Accessed April 1, 2012.

Donathan, K. & Tymann, P. (2010). The Development and Use of Scoring Rubrics. SIGCSE ’10: Proceedings of the 41st ACM Technical Symposium on Computer Science Education. Retrieved February 12, 2011 from http://delivery.acm.org/10.1145/1740000/1734423/p477-donathan.pdf?key1=1734423&key2=2434257921&coll=DL&dl=ACM&CFID=9711439&CFTOKEN=34623308.

Forum. (2004, September 3). How Can Colleges Prove They’re Doing Their Jobs? The Chronicle of Higher Education, 51, (2), B6-B10.

Gibson, J. W. (2011). Measuring Course Competencies in a School of Business: The Use of Standardized Curriculum and Rubrics. American Journal of Business Education, 4, (8), 1-6.

Jankowski, N. (2011). Capella University: An Outcomes Based Institution. National Institute for Learning Outcomes Assessment, http://learningoutcomesassessment.org/casestudies.html.

Kelly, C., Tong, P., & Choi, B-J. (2010). A Review of Assessment of Student Learning Programs at AACSB Schools: A Dean’s Perspective. Journal of Education for Business, 85, (5), 299-306.

Milan, J., Voorhees, R.A., & Bedard-Voorhees, A. (2004, Summer). Assessment of Online Education: Policies, Practices, and Recommendations. New Directions for Community Colleges, 126, 73 – 85.

Mujtaba, B. G. and Preziosi, R. (2006). Adult Education in Academia: Recruiting and Retaining Extraordinary Facilitators of Learning (2nd edition). Information Age Publishing: Greenwich, Connecticut.


Mujtaba, B. & Mujtaba, L. (2004, February). Creating a Healthy Learning Environment for Student Success in the Classroom. The Internet TESL Journal, X, (2). Retrieved from http://iteslj.org/Articles/Mujtaba-Environment.html.

Mujtaba, B., Preziosi, R., & Mujtaba, L. (2004). Adult Learning and the Extraordinary Teacher. Teaching and Learning Conference (TLC) Proceedings. College Teaching and Learning Conference (January 5-9, 2004). Orlando, FL.

Palomba, C.A. & Banta, T.W. (1999). Assessment Essentials. San Francisco, CA: Jossey-Bass.

Pearce, K.D. & Offerman, M.J. (2010). Capella University: Innovation Driven by an Outcomes-Based Institution. Continuing Higher Education Review, 74, 161-168.

Quality Matters. (2013). http://www.qmprogram.org.

Reeves, T.C. (2002, November-December). Keys to Successful E-learning: Outcomes, Assessment and Evaluation. Educational Technology, 42, (6), 23 – 29.

Ridley, D. R., and Husband, J. E. (1998). Online Education: A Study of Academic Rigor and Integrity. Journal of Instructional Psychology, 25, (3), 184-188.

Tallent-Runnels, M. K., Cooper, S., Lan, W. Y., Thomas, J. A., & Busby, C. (2005). How to Teach Online: What the Research Says. Distance Learning, 2, (1), 21-27.

Western Cooperative for Educational Technologies. (2012). College Choices for Adults. http://www.collegechoicesforadults.org/. Accessed April 1, 2012.

Western Cooperative for Educational Telecommunications. (2001, March). Best practices for Electronically Offered Degree and Certificate Programs. http://www.wcet.info/resources/accreditation/Accrediting%20-%20Best%20Practices.pdf

Wiggins, G. and McTighe, J. (2005). Understanding by Design, 2nd ed. Alexandria, Virginia: Association for Supervision and Curriculum Development.

Williams, A. (2011). Assurance of Learning and the Lead Professor Model. Presentation at the Huizenga School Faculty Meeting, September 2011, Nova Southeastern University, Fort Lauderdale, Florida, USA.
