
Evaluating Journal Quality: Beyond Journal Rankings

Eric S. Williams, PhD, Nancy Borkowski, DBA, Stephen J. O’Connor, PhD, & Haiyan Qu, PhD

Abstract

Health administration is a multidisciplinary field which is housed in a broad variety of schools and colleges. This diversity presents a challenge for both the scholar and the institution in adequately evaluating the scholarship of health administration faculty. Traditionally, this has been done through expert opinion supported by journal-rating studies and journal-ranking lists. In recent years, bibliometrics, represented by the Journal Impact Factor (JIF), have been embraced as a supplement or, in some instances, a replacement. The history and development of the JIF is discussed along with some critiques. Alternatives to the JIF are presented. The paper concludes with some observations and lessons for academics and the field as a whole.

Please address correspondence to: Eric S. Williams, PhD, 146 Alston Hall, Box 870225, Culverhouse College of Commerce, University of Alabama, Tuscaloosa, AL 35487-0225. Phone: (205) 348-8930; Email: [email protected]

Background

Health Administration differs from other disciplines in that it brings a diverse, multidisciplinary faculty together to investigate and teach about the healthcare industry. Further, health administration programs are situated in a variety of places, including colleges of allied health professions, colleges of public health, medical schools, nursing schools, business schools, and even political science departments. Across these diverse settings, health administration faculty face a challenge in documenting the quality of their scholarship, especially with regard to colleagues and administrators who may share the same discipline (e.g., marketing, strategy, finance) but do not specialize in health administration (Bevan, 2004; Brown, 2011).

Most faculty document the quality of their scholarship through the reputations of the journals where they are published. A journal's status results from such considerations as the editor's reputation, the prestige of the editorial board members, association with a professional society, selectivity, reviewing rigor, and impact on policy and practice (Rodger, McKenna, & Brown, 2007). Traditionally, this reputation was captured by polling experts (faculty, department chairs, deans, etc.) and creating journal lists documenting how journals are ranked by perceived quality or prestige (Vucovich, Blaine-Baker, & Smith, 2008). These lists can be internal to the department, college, or university, or they can come from national and international journal lists relevant to the business discipline, such as the Financial Times 45, Cabell's, the UT Dallas business journal list, or the Australian Business Deans Council list. In the health administration field, a number of journal rating studies have been carried out (Borkowski, Williams, O'Connor, & Qu, 2018; Brooks, Walker, & Szorady, 1991; Harle, Vest, & Menachemi, 2016; McCracken & Coffey, 1996; Menachemi, Hogan, & DelliFraine, 2015; Shewchuk, O'Connor, Williams, & Savage, 2006; Williams, Stewart, O'Connor, Savage, & Shewchuk, 2002). Further, expert ratings of journals (e.g., journal lists) have been recognized by some accreditation organizations (e.g., the Association to Advance Collegiate Schools of Business) as one way of establishing continuing faculty qualification to teach in a particular discipline.

As the total number of journals grows and new publication platforms proliferate, it becomes increasingly difficult for a faculty member to document the quality of their work (Belter, 2015). This challenge is magnified by the growth of online and open-access journals, some of which have been called predatory, with weak or non-existent peer review and frequent fees for publishing articles (Elliott, 2012; Kearney, 2015). Thus, government agencies, grant funders, college administrators, and individual faculty alike have turned to bibliometric analysis for measuring not only the quality of journals, but also of individual scholars, departments, universities, and even countries (Adam, 2002; Cameron, 2005). Further, bibliometrics play an increasing role in faculty evaluation, rewards, and funding (Ironside, 2007; Smith, Crookes, & Crookes, 2013). The remainder of this article will provide an overview and critique of the first and most used bibliometric measure, the Journal Impact Factor (JIF), present several common alternatives to the JIF, and conclude with observations and suggestions for junior scholars and the field itself.

Introduction of the Impact Factor

The Journal Impact Factor was first introduced by Eugene Garfield in 1955 (Garfield, 2006) to help librarians select the best set of journal subscriptions on which to spend their limited budgets. It was subsequently used by Irving H. Sher and Eugene Garfield to select journals for the then-new Science Citation Index (Garfield, 2006). Since then, it has become the best known and most used of the bibliometrics. The original formulation of the journal impact factor was a ratio of the number of citations to articles published in the journal over the previous two years to the number of citable items published in the same journal over the same two years. In 2007, Thomson Reuters, then the publisher of the Web of Science database from which the JIF is calculated, introduced the five-year impact factor, in which the citations are counted over five years rather than two.

The main advantage of the JIF (and other bibliometrics) is that it presents a seemingly objective single number purporting to capture the importance of a journal within its field through the number of citations made to articles in that journal (Svensson, 2010). Writing about the Social Science Citation Index and the JIF in particular, Svensson (2010) states that its advantages include: (a) wide and international coverage; (b) accessibility online; (c) the assumption of objectivity; and (d) its diffusion and favored status relative to other measures (e.g., prestige and popularity).

Holden et al. (2006) examined the predictive validity of the JIF of 17 social work journals against subsequent citations. They found a correlation of 0.41 with the total number of citations after 5 years and a correlation of 0.42 with the total number of citations after 10 years. Using Cohen's (1988) guidelines for interpreting correlations, they suggested that the predictive validity of the JIF was medium to large, but expressed skepticism about its utility as an indicator of the quality of scholarship. Another study correlated the JIFs of nine general medical journals with physicians' ratings of journal quality (Saha, Saint, & Christakis, 2003). The authors found an overall correlation of 0.82 and concluded that the JIF is a good indicator of quality for general medical journals. Similarly, Yue, Wilson, and Boller (2007) found a correlation of 0.67 between the journal ratings of 254 clinical neurologists and the JIFs of 41 neurology journals.

Given these advantages, the JIF has had a sizable impact in the United States (Fassoulaki, Sarantopoulos, Papilas, Patris, & Melemeni, 2001), but its impact seems even greater in the rest of the world (DeMaria, 2003). Fassoulaki and colleagues (2001) examined North American and European academic anesthesiologists' views of the JIF. Among European anesthesiologists, the JIF was viewed as significantly more influential in academic appointments and research funding, and a high JIF was more highly valued, than among their North American colleagues. In Australia, Bennett, Genoni, and Haddow (2011) suggest that bibliometrics – and especially the JIF – have become the principal way to measure academic quality. Smith, Crookes, and Crookes (2013) discuss the impact that the bibliometric-heavy Excellence in Research for Australia (ERA) initiative has on researchers and institutions. Adam (2002) cites three examples, including one from Germany where the JIF is used as part of a formula to determine department funding.
A second example comes from Italy, where cancer researchers must complete a worksheet "calculating the average impact factor of the journals in which their publications appear." The most radical example is from Finland, where university hospital funding from the government is based, in part, on the JIF associated with hospital-based university faculty. Cameron (2005) notes that in the UK's Research Assessment Exercise, up to 30% of university funding is based on the quality of research as determined by bibliometrics, including the JIF.

While the JIF has had a considerable influence and impact on the academic world, researchers have identified a number of disadvantages. Perhaps the most important of these is the JIF's misuse. Garfield (2006) observed that granting agencies (as well as other actors in the academic arena) "often wish to bypass the work involved in obtaining actual citation counts for individual articles and authors." Publication in a high-impact-factor journal grants prestige to an article that it may or may not subsequently earn on its own merits (through citations or impact on practice). This critique was echoed in a report by the National Communication Association to the Council of Communication Associations (NCA, 2013) decrying the misuse of the impact factor as a "shorthand, objective barometer of research quality" for promotion and tenure. Smith, Crookes, and Crookes (2013) are likewise critical of the outsized role that the JIF and other bibliometrics play in Australian academia, while Holden et al. (2006) express concern over its use as a measure of scholarship.
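Because much of this debate turns on what the JIF actually computes, a minimal sketch of the two-year calculation described above may be helpful. All counts below are hypothetical; actual JIFs are computed by Clarivate Analytics from Web of Science data.

```python
# Minimal sketch of the two-year Journal Impact Factor described above.
# All counts are hypothetical.

def impact_factor(citations_to_window, citable_items_in_window):
    """Citations received this year to items published in the preceding
    window, divided by the number of citable items published in that window."""
    return citations_to_window / citable_items_in_window

# Hypothetical journal: in 2017 it received 300 citations to material published
# in 2015-2016, during which it published 120 citable items.
two_year_jif = impact_factor(300, 120)    # 2.50
print(f"Two-year JIF:  {two_year_jif:.2f}")

# The five-year variant simply widens the publication window.
five_year_jif = impact_factor(700, 310)   # ~2.26
print(f"Five-year JIF: {five_year_jif:.2f}")
```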

Another aspect of misuse lies in the misplaced assumption that the JIF can be compared across fields. In a wide-ranging study, Althouse, West, Bergstrom, and Bergstrom (2009) examined the SCI and SSCI for changes in the JIF across time and differences across fields. Using 88 non-overlapping field categories developed by Rosvall and Bergstrom (2008), they found an eight-fold difference between the field of Molecular and Cell Biology, with an average weighted impact factor of 4.76, and History, with an average of 0.42. A further analysis looked at the increase in impact factors in each of the 88 fields over time and found that fields vary greatly in the rate of growth. Gregory Perry, in his Presidential Address to the Western Agricultural Economics Association (Perry, 2012), noted a number of concerns with the JIF, especially its limited utility for comparisons across fields, and suggested that other bibliometrics would be better for cross-field comparisons.

Another aspect of misuse is that some journals game their impact factor through self-citation. Self-citation comes from encouraging (implicitly or explicitly) potential authors to cite relevant research from the target journal in preference to other journals (Cameron, 2005). Wilhite and Fong (2012) reported on a survey of academics in the social sciences and business, finding that 20% of respondents reported being coerced and over 40% were aware of coercive efforts. Chorus and Waltman (2016) examined self-citations from 1987 to 2015, comparing levels of self-citation year over year for all journals indexed in the Sciences and Social Sciences. They found that the rate of self-citation was stable between 1987 and 2004, but increased rapidly after that, a trend they said corresponded "well with the growing obsession with impact factor as a journal evaluation measure over the last decade." Campanario (2011) looked at a subset of journals with large increases or decreases in JIF over a one-year timeframe and found that 54% of the large increases and 42% of the large decreases were due to corresponding changes in self-citation. Looking at self-citation by authors, Kulkarni and colleagues (2011) examined three high-profile medical journals and found that 1 in 15 citations was an author self-citation.

Another aspect of gaming involves altering the mix of articles in the journal toward article types that are typically more cited. Bevan (2004) noted that review articles are more often cited than original research. Fuster (2017) reports that a journal in the cardiovascular field increased its JIF by 35% between 2011 and 2012 by publishing more clinical guidelines and scientific statements, and fewer original manuscripts. Shanta, Pradhan, and Sharma (2013) speculated that journals may decline case studies, which are unlikely to be heavily cited, or might publish a likely-to-be-heavily-cited article early in the year to gather more citations during the two-year JIF timeframe. Cameron (2005) suggested that journals may "choose to unveil new paradigms, host controversies, or solicit papers from authors with good citations history…"

Some JIF inflation may come from gaming the numerator and denominator of the JIF, because the types of publications included in each differ (Cameron, 2005). The denominator includes source materials, which are original research, case reports, notes, and reviews. The numerator includes citations to all document types, both source and non-source.
Non-source materials include editorials, correspondence, and opinion pieces (Bevan, 2004). For example, a journal with an active correspondence section may manifest a higher impact factor due to the inclusion of this non-source material (Cameron, 2005). Addressing this criticism, Garfield (2006) suggested that "some distortion will result" from this practice, but estimated that it only resulted in a 5-10% change in the JIF, and that only a small number of journals would be affected.

Another criticism of the JIF comes from the fact that the impact factor is the average number of citations received by the articles in a journal. Thus, prospectively, each article is viewed as having the same quality, which can be misleading. Over time, the true influence of an article will emerge with its total number of citations. Not surprisingly, the distribution of the number of citations received per article is decidedly non-normal, with a sizable skew. In an analysis of all articles with at least one citation from 1900 to 2005, Garfield (2006) documented this skew. For example, 60.9% of articles had between 1 and 9 citations, while another 10.6% had between 10 and 14 citations. In another measure of this skew, Garfield (2007) speculated that 20% of all published articles accounted for 80% of citations. Larivière, Gingras, and Archambault (2009) found that 32% of social science articles, 12% of medical articles, and 27% of natural sciences articles were uncited.
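The skew described above is why a journal-level average can misrepresent a typical article. A small sketch with hypothetical per-article citation counts illustrates the point: the mean (the quantity a JIF-style average reports) is pulled far above the median by a single heavily cited paper.

```python
# Why a skewed citation distribution makes the journal-level average (the JIF)
# a poor guide to a typical article. All counts are hypothetical.
from statistics import mean, median

citations_per_article = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]  # one "blockbuster" paper

print(f"Mean citations per article (JIF-style average): {mean(citations_per_article):.1f}")  # 13.8
print(f"Median article:                                 {median(citations_per_article):.1f}")  # 2.0
share = max(citations_per_article) / sum(citations_per_article)
print(f"Share of all citations from the single most-cited article: {share:.0%}")  # 87%
```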

Alternatives to the Journal Impact Factor

From its beginnings as a tool for librarians to select the best set of journal subscriptions on which to spend their limited budgets, bibliometrics has rapidly expanded into the academic world, offering the JIF and many other ways to rank or rate journals and authors, drawing on a variety of citation database providers including Thomson Reuters (now Clarivate Analytics), Elsevier, and Google. The remainder of this article will present several popular alternative bibliometric methods for rating both journals and individual authors, and conclude with observations and suggestions for scholars and the field.

Drawing on network theory, the Eigenfactor score attempts to rate the total importance of a journal by its relationship to other journals through the network of citations. This is similar to how Google ranks web pages, except that Eigenfactor uses academic citations rather than hyperlinks (Bergstrom, 2007; Bergstrom, West, & Wiseman, 2008). The most influential journals are those frequently cited by other journals, which mitigates the issue of self-citation. The Eigenfactor is calculated through an iterative algorithm using the Thomson Reuters (Clarivate Analytics) Web of Science database (five-year window), in which citations from higher-ranked journals are weighted more heavily than citations from lower-ranked journals. A journal's Eigenfactor score rises as it garners citations from highly ranked journals. The Eigenfactor does not adjust for article count, so journals that publish more articles, as well as more highly cited journals, will have higher Eigenfactor scores (Brown, 2011).

A related bibliometric is the Article Influence Score (Bergstrom, 2007; Bergstrom et al., 2008), which measures the average influence of an article appearing in a given journal and, as such, is directly analogous to the JIF. It is calculated by dividing a journal's Eigenfactor score by its number of articles, then normalizing and scaling so that the mean is 1.0. This means that a journal with a 2.0 Article Influence Score can be interpreted as having two times the influence of the average journal.

The SCImago Journal Rank is a metric which accounts for the number of citations received by a journal and the importance of those citations. Like the Eigenfactor Score, it draws on the page-rank algorithm in measuring the importance of different citations and journals, and their positioning in the academic citation network. However, it uses Elsevier's Scopus database and a three-year window (Brown, 2011).

The Source Normalized Impact per Paper (SNIP) measure was developed by Moed (2010). It is a ratio of a journal's per-paper citation count to the "citation potential" of a given field. Citation potential of a field comes from the observation that some fields have traditions of publishing long reference lists, while others have shorter lists (Garfield, 2006). Thus, SNIP adjusts for differences in citation probability across fields and makes comparisons between disciplines possible.

The h-index was developed by Jorge E. Hirsch as an author-level metric (Hirsch, 2005), but it can also be used as a metric for journals, groups of scholars, departments, or universities (Barnes, 2016). It measures both scholarly productivity and the impact (number of citations) of the scholar's (or group's) work. In concept, the h-index reflects the publication of h papers that have each been cited at least h times. Put another way, an h-index of 7 means that the scholar has at least 7 papers which have been cited at least 7 times; an h-index of 29 means at least 29 papers cited at least 29 times. In effect, only the most cited articles contribute to the h-index. Over the course of a career, the h-index of a scholar can only grow.
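As a concrete illustration of the definition above, here is a minimal sketch of the h-index calculation; the citation counts are hypothetical.

```python
# Sketch of the h-index as described above: the largest h such that the author
# (or journal) has h papers each cited at least h times. Counts are hypothetical.

def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank       # still have `rank` papers with >= `rank` citations
        else:
            break
    return h

print(h_index([25, 19, 12, 9, 8, 7, 7, 3, 1, 0]))  # 7: seven papers cited at least 7 times
```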

Given the different publication and citation traditions of different disciplines, comparing h-indices between disciplines is not instructive. Barnes (2016) notes the proliferation of h-index variants (corrections or expansions) and raises questions about the construct validity of the h-index.

In an attempt to quantify the influence of scholarship in the world beyond academe and to address the shortcomings of peer review, a set of alternative metrics called altmetrics has been developed. These metrics do not examine citations from journals, but rather mentions in online social media, including downloads, blog mentions, tweets, Facebook posts, online reference managers, Wikipedia pages, and news outlet coverage (Peoples, Midway, Sackett, Lynch, & Cooney, 2016). While this method is still in its infancy and remains controversial (Cheung, 2013), organizations including Altmetric, Impact Story, and Plum Analytics are calculating altmetrics, and publishers such as Elsevier and BioMed Central are providing altmetric data for their journals (Kolahi & Khazaei, 2016). The bibliometric literature has investigated altmetrics versus more traditional metrics and has found some correlations. For example, Alhoori and Furuta (2013) examined individual altmetrics against citation count, JIF, Eigenfactor, Article Influence Score, SCImago Journal Rank, and h-index, and found that these journal-level metrics have moderate correlations with individual altmetrics. A composite Journal Social Impact altmetric had no correlation with citation count and Eigenfactor, and moderate correlations with the JIF, Article Influence Score, and SJR. The authors suggest that altmetric methods could provide an early indicator of the influence of some scholarly venues.
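To give a concrete flavor of the network-based metrics described earlier in this section (the Eigenfactor Score and SCImago Journal Rank), the toy sketch below runs a PageRank-style iteration over a small, hypothetical journal citation matrix. It is only an illustration of the underlying idea, not the published Eigenfactor or SJR algorithms, which use real five-year or three-year citation windows, handle self-citations explicitly, and apply additional normalizations.

```python
# Toy PageRank-style iteration over a hypothetical three-journal citation
# network, illustrating the idea behind network-based metrics such as the
# Eigenfactor Score and SCImago Journal Rank. Not the published algorithms.
import numpy as np

journals = ["Journal A", "Journal B", "Journal C"]

# cites[i, j] = citations from journal j to journal i (self-citations zeroed,
# mirroring Eigenfactor's exclusion of self-citations). Hypothetical counts.
cites = np.array([[0, 4, 2],
                  [3, 0, 6],
                  [1, 2, 0]], dtype=float)

# Column-normalize so each citing journal distributes a total weight of 1.
transition = cites / cites.sum(axis=0, keepdims=True)

damping = 0.85
influence = np.full(len(journals), 1 / len(journals))
for _ in range(100):  # iterate until the scores (approximately) converge
    influence = (1 - damping) / len(journals) + damping * transition @ influence

for name, score in zip(journals, influence):
    print(f"{name}: {score:.3f}")  # citations from influential journals count for more
```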

Observations for faculty and the field

It is clear that bibliometrics have become part of the academic ecosystem. Indeed, they have supplanted the traditional means of journal evaluation in some quarters. Goldenberg and Goyal (2015) remind us that "We must remember that the JIF was originally derived as a measure to compare individual journals. Unfortunately, the use of citation analysis and JIFs is widespread and has become a surrogate measure of research quality. Despite these limitations, we are 'stuck' with the JIF." Fuster (2017) puts it succinctly, saying that the JIF "remains an imperfect metric, but is one by which we are judged externally." If bibliometrics are going to play a large role in academic life, it is important that scholars, particularly young scholars and scholars-in-training, receive instruction on the nuances, strengths, and weaknesses of these metrics so that they can make informed decisions about publication placement and career strategy.

The case for altmetrics is not so clear cut. Altmetrics have only recently appeared on the academic scene and do not appear to be well accepted by the academic ecosystem (although Elsevier recently highlighted the Plum altmetrics for the journal Social Science & Medicine in a mass email). Further, altmetrics are the equivalent of the raw citation counts that serve as the basis for bibliometrics. Citation counts by themselves provide a gross measure of the popularity or impact of a journal (or academic). Only by applying bibliometrics to the raw citation counts can we develop a more sophisticated understanding of a journal's importance within a field. Further, citation counts and bibliometrics are derived from a large but limited universe of journals. Altmetrics draw from a broad array of non-traditional sources. This has considerable potential for assessing the "practical" impact of an article, but only in a broad sense. That said, just knowing the number of downloads, tweets, or Mendeley uses an article has is not very helpful without context. With citation counts, bibliometrics provide that context. For example, an Article Influence Score of 2.0 suggests that a journal has two times the influence of an average journal. It is reasonable to assume that altmetric providers and bibliometricians recognize this issue and are developing metrics that will provide more context for altmetrics.

Drawing on the observation that bibliometrics are increasingly well integrated into academe while altmetrics remain a gross measure, what does an academic need to know? This article has summarized the major benefits and drawbacks of the JIF. As for the other bibliometrics, our advice would be to read the respective articles introducing the techniques, as cited herein, and then read through the commentaries on each. Having done some of that, our sense is that bibliometrics based on the network technique (i.e., the Eigenfactor Score and SCImago Journal Rank) overcome some of the issues with the JIF. The Article Influence Score would be our choice, as it uses the network model of the Eigenfactor and derives a measure which is normalized with a mean of 1.0. Perry (2012) agrees, arguing that it corrects "for self-citation bias and disciplinary differences in how citations are used." This provides a readily interpretable measure of journal quality. The Source Normalized Impact per Paper (SNIP) method also provides a useful metric: it allows comparison across fields, which is less possible with the other bibliometrics. The h-index provides a useful measure which can be calculated for individual authors, groups of authors, departments, colleges, or universities. However, Barnes (2016) outlines some concerns with it.
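The interpretability we value in the Article Influence Score comes from a simple normalization: per-article influence is rescaled so that the average journal scores 1.0. The sketch below illustrates that normalization with entirely hypothetical Eigenfactor-style scores and article counts; it mirrors the description given above rather than Clarivate's exact computation.

```python
# Sketch of an Article-Influence-style normalization: per-article influence,
# rescaled so the mean journal scores 1.0. All inputs are hypothetical.
eigenfactor_score = {"Journal A": 0.020, "Journal B": 0.012, "Journal C": 0.004}
article_count     = {"Journal A": 400,   "Journal B": 150,   "Journal C": 100}

per_article = {j: eigenfactor_score[j] / article_count[j] for j in eigenfactor_score}
mean_per_article = sum(per_article.values()) / len(per_article)
article_influence = {j: score / mean_per_article for j, score in per_article.items()}

for journal, score in article_influence.items():
    # A value of 2.0 would indicate twice the influence of the average journal.
    print(f"{journal}: {score:.2f}")
```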

Table 1 Bibliometrics and Altmetrics

Metric | Proprietary | Description | Citation Window | Aligned Database
Journal Impact Factor | Yes | Ratio of citations received to citable articles published | 2 years (5-year variant) | Web of Science
Eigenfactor Score | No | Page-rank algorithm | 5 years | Web of Science
Article Influence Score | No | Journal Eigenfactor score divided by number of articles, normalized and scaled with a mean of 1.0 | 5 years | Web of Science
SCImago Journal Rank | No | Page-rank algorithm | 3 years | Scopus
Source Normalized Impact per Paper | No | Ratio of a journal's per-paper citation count to the "citation potential" of its field | 3 years | Scopus
h-index | No | Reflects the publication of h papers that have each been cited at least h times | Variable | Google Scholar
Altmetrics | No | Counts of article mentions in social media outlets, including downloads, blogs, tweets, Facebook pages, online reference managers, Wikipedia pages, and news outlets | Variable | None

Another less understood aspect of bibliometrics that academics need to consider lies in the role that different databases play (Table 1). The first major database was Garfield's Science Citation Index, later joined by the Social Science Citation Index and the Arts and Humanities Citation Index. These evolved into the current Web of Science (WoS) database, a proprietary product of Clarivate Analytics (previously owned by Thomson Reuters). The 2016 database features more than 11,000 journals from 81 countries and 234 disciplines, with articles going back to the early 1900s (Haddad, 2017). Both Brown (2011) and Bevan (2004) suggest that the WoS is biased toward American, English-language, basic science, and general-subject journals. For example, Brown (2011) notes that although there are over 400 nursing journals, only 46 are included in the WoS database.

One alternative to WoS is Elsevier's Scopus database. Brown (2011) notes that Scopus is not proprietary and that it covers a wider breadth of journals (approximately 22,000 in 2017). However, the major issue is that coverage before 1996 is considerably sparser than in WoS. The other major competitor is Google Scholar (GS). GS canvasses a much wider range of publications beyond academic journals (Harzing & van der Wal, 2008). As a result, an h-index offered by GS will generally be higher than one calculated from WoS or Scopus. Hodge and Lacasse (2011) examined the potential of using GS, versus the JIF, for academic decisions in social work and found a correlation of 0.86 between the Google Scholar h-index and the JIF. They concluded that the two measures were measuring the same underlying construct and argued that the 10-year citation window used in calculating their h-index (using the Google Scholar database) resulted in a better measure than the JIF. Jasco (2012) cautions against excess enthusiasm for using GS due to high error rates in author attributions. He commends GS for reducing error rates, but remains unsupportive of its use for bibliometric purposes. He also criticizes GS for a lack of transparency in the composition of its database and for the number of citations drawn from non-academic sources.

A nuance related to the three databases is the alignment of the different bibliometrics with them. Theoretically, any bibliometric method could be used with any of the three major databases. However, in practice, certain bibliometrics are used with certain databases. The JIF, the Eigenfactor Score, and the Article Influence Score are aligned with Clarivate Analytics' Web of Science database (the JIF is a proprietary product of Clarivate Analytics). SCImago Journal Rank and Source Normalized Impact per Paper are aligned with Elsevier's Scopus database, while the h-index is aligned with Google Scholar. While only the JIF is proprietary, it is possible (though with varying levels of difficulty) to obtain one of the other bibliometrics using a non-aligned database. Thus, when a particular bibliometric measure is used by a scholar, institution, or government entity, they should understand its alignment with a particular database and the strengths and weaknesses of that database.

Another consideration in the use of bibliometrics lies in selecting the journal to which a scholar submits an article. In many clinical fields, scholars have wrestled with the issue of whether to place an article in the best-fitting (typically practitioner) journal or in the highest-ranked journal (Bennett et al., 2011; Bevan, 2004; Brown, 2011).
This decision is complicated by the fact that many practitioner journals are not indexed in some of the major databases, and by the aforementioned overrepresentation of general science and general medicine journals in the WoS database (Bevan, 2004; Brown, 2011).

Brown (2011) frets that occupational therapy (OT) research published outside of the field, in journals with an established JIF, represents a potential loss to the body of OT knowledge and practice. Bennett, Genoni, and Haddow (2011) and Ironside (2007) raise the same issue for nursing. Health administration falls into the same camp, as we have a significant practitioner population that can benefit from our research. One way that many academics address this issue is by publishing a theoretical or empirical paper in an indexed journal and then a related practice-focused paper in a practitioner journal. The challenge here is ensuring that the two articles are sufficiently differentiated and do not fall into the "self-plagiarism" trap.

Beyond publishing an article in a good journal, academics can increasingly take steps to disseminate their work more broadly. For example, Cosco (2015) explored the relationship of impact and social media among medical journals, finding that 28% of general medical journals had Twitter profiles and that the size of their following was strongly linked to impact factor and citations, "suggesting that higher quality research received more mainstream attention." Niyazov and colleagues (2016) examined a set of articles posted on Academia.edu along with a matched set of unposted articles. They found that posted articles received 16% more citations after one year, 51% more after three years, and 51% more after five years; this is 58% more citations than articles published to personal or departmental homepages received after five years. Beyond Academia.edu, authors may choose to create profiles in Google Scholar or ResearchGate. In our own field, Harle, Vest, and Menachemi (2016) used 191 publicly available Google Scholar profiles of health administration scholars to examine faculty productivity.

Turning to how bibliometrics may be used in the field of health administration, the literature contains several suggestions, most seeking to hold on to peer review while also using bibliometrics. The best example is Belter (2015), who suggests an integrated method in which the strengths of one approach compensate for the weaknesses of the other. Bibliometrics are easily and widely available but, as discussed above, subject to uncritical use or outright abuse. Peer review, to many, remains the "gold standard" for judging scholarly excellence, but it also has its own biases and misuses. Belter (2015) suggests that "combining bibliometric indicators and peer review results in more fair, balanced and accurate assessment of scientific research." Garfield (2006) himself lamented the abuse of the JIF and desired to see proper citation analysis in making judgments about the quality of scholarly work. Brown (2011) suggests using a range of bibliometrics and developing discipline-specific metrics. Jasco (2012) reports that the ERA in Australia uses a mixture of peer rating and Scopus-based metrics. Brown (2011) suggested that the World Federation of Occupational Therapists convene a panel of experts to generate a peer-reviewed list of OT journals in order to ultimately create a ranking of journals. He suggested that the experts draw on their own expertise as well as bibliometrics.

In a 2016 Professional Development Workshop session at the Academy of Management convened by the authors, a panel of chairs and deans discussed this point at length. The panelists agreed that senior faculty need to lead the way in establishing how and where traditional journal ratings and citation counts would be used versus bibliometrics and altmetrics. Fortunately, we already have a series of reputable journal-rating surveys to provide some guidance. It might be wise to follow Brown's (2011) suggestion and convene a panel of experts to determine a ranked journal list for the field. Either the Association of University Programs in Health Administration (AUPHA) or the Commission on the Accreditation of Healthcare Management Education (CAHME) could establish a working group with the charge of creating (and maintaining) a journal-quality ranking integrating both peer-review and bibliometric measures. Such a ranking could be integrated with CAHME's accreditation standards and used in healthcare administration departments. In departments or schools where health administration is only one of several disciplines, such a list would be helpful in making the case for the quality of scholarship within our specialty discipline versus other, more mainstream disciplines (where bibliometric scores tend to be higher). Regardless, bibliometrics are here to stay and, if used correctly, can be a useful method for evaluating scholarship.

References

Adam, D. (2002). The counting house. Nature, 415, 726.

Althouse, B. M., West, J. D., Bergstrom, C. T., & Bergstrom, T. (2009). Differences in impact factor across fields and over time. Journal of the American Society for Information Science and Technology, 60(1), 27-34.

Barnes, C. S. (2016). The construct validity of the h-index. Journal of Documentation, 72(5), 878-895.

Belter, C. W. (2015). Bibliometric indicators: opportunities and limits. Journal of the Medical Library Association, 103(4), 219-221.

Bennett, D., Genoni, P., & Haddow, G. (2011). FoR Codes pendulum: Publishing choices within Australian research assessment. Australian Universities Review, 53(2), 88-98.

Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News, 68(5), 314-316.

Bergstrom, C. T., West, J. D., & Wiseman, M. A. (2008). The Eigenfactor™ metrics. The Journal of Neuroscience, 28(45), 11433-11434.

Bevan, D. (2004). Impact factor 2002--too much “impact”? Clinical and Investigative Medicine, 27(2), 65-66.

Borkowski, N., Williams, E.S., O’Connor, S.J. & Qu, H. (2018). Outlets for Health Care Management Research: An Updated Assessment of Journal Ratings. Journal of Health Administration Education, 35(1), 47-64.

Brooks, C. H., Walker, L. R., & Szorady, R. (1991). Rating Journals in Health Care Administration. Medical Care, 29(8), 755-765.

Brown, T. (2011). Journal quality metrics: Options to consider other than impact factors. American Journal of Occupational Therapy, 65(3), 346-350.

Cameron, B. D. (2005). Trends in the usage of ISI bibliometric data: Uses, abuses, and implication. Portal: Libraries and the Academy, 5(1), 105-125.

Campanario, J. M. (2011). Large increases and decreases in journal impact factors in only one year: The effect of journal self-citations. Journal of the American Society for Information Science and Technology, 62(2), 230-235.

Cheung, M. K. (2013). Altmetrics: Too soon for use in assessment. Nature, 494(7436), 176-176.

Chorus, C., & Waltman, L. (2016). A large-scale analysis of impact factor biased journal self-citations. PLoS One, 11(8), 1-11.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (Second ed.). Hillsdale, NJ: Lawrence Erlbaum.

Cosco, T. D. (2015). Medical journals, impact and social media: an ecological study of the Twittersphere. Canadian Medical Association Journal, 187(18), 1353.

DeMaria, A. N. (2003). A report card for journals. Journal of the American College of Cardiology, 42(5), 952.

Elliott, C. (2012). On predatory publishers: A Q&A with Jeffrey Beall. The Chronicle of Higher Education. Retrieved from https://www.chronicle.com/blogs/brainstorm/on-predatory-publishers-a-qa-with-jeffrey-beall/47667

Fassoulaki, A., Sarantopoulos, C., Papilas, K., Patris, K., & Melemeni, A. (2001). Academic anesthesiologists’ views on the importance of the impact factor of scientific journals: a North American and European survey. Canadian Journal of Anaesthesia, 48(10), 953-957.

Fuster, V. (2017). Impact factor. Journal of the American College of Cardiology, 70(12), 1530.

Garfield, E. (2006). The history and meaning of the journal impact factor. Journal of the American Medical Association, 295(1), 90-93.

Garfield, E. (2007). The evolution of the Science Citation Index to the Web of Science, Scientometric Evaluation, and Historiography. Retrieved December 13, 2017 from garfield.library.upenn.edu/papers/barcelona2007.pdf

Haddad, M. (2017). Use and relevance of bibliometrics for nursing. Nursing Standard, 31(37), 55-63.

Harle, C. A., Vest, J. R., & Menachemi, N. (2016). Using bibliometric big data to analyze faculty research productivity in health policy and management. Journal of Health Administration Education, 33(2), 285-293.

Harzing, A. W. K., & van der Wal, R. (2008). Google Scholar as a new source for citation analysis. Ethics in Science and Environmental Politics, 8(1), 61-73.

Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 16569-16572.

Hodge, D. R., & Lacasse, J. R. (2011). Ranking disciplinary journals with the Google Scholar h-index: A new tool for constructing cases for tenure, promotion, and other professional decisions. Journal of Social Work Education, 47(3), 579-596.

Holden, G., Rosenberg, G., Barker, K., & Onghena, P. (2006). An assessment of the predictive validity of impact factor scores: Implications for academic employment decisions in social work. Research on Social Work Practice, 16(6), 613-624.

Ironside, P. M. (2007). Guest editorial. Advancing the science of nursing education: rethinking the meaning and significance of impact factors. Journal of Continuing Education in Nursing, 38(3), 99-100.

Jasco, P. (2012). Using Google Scholar for journal impact factors and the h‐index in nationwide publishing assessments in academia – siren songs and air‐raid sirens. Online Information Review, 36(3), 462-478.

Kearney, M. H. (2015). Predatory publishing: what authors need to know. Research in Nursing & Health, 38(1), 1-3.

Kolahi, J., & Khazaei, S. (2016). Altmetric: Top 50 dental articles in 2014. British Dental Journal, 220, 569.

Kulkarni, A. V., Aziz, B., Shams, I., & Busse, J. W. (2011). Author self-citation in the general medical literature. PLoS One, 6(6), 1-5.

Larivière, V., Gingras, Y., & Archambault, É. (2009). The decline in the concentration of citations, 1900–2007. Journal of the American Society for Information Science and Technology, 60(4), 858-862.

McCracken, M. J., & Coffey, B. S. (1996). An empirical assessment of health care management journals: A business perspective. Medical Care Research and Review, 53(1), 48-70.

Menachemi, N., Hogan, T. H., & DelliFraine, J. L. (2015). Journal rankings by health management faculty members: Are there differences by rank, leadership status, or area of expertise? Journal of Healthcare Management, 60(1): 17-28.

NCA. (2013). Impact factors, journal quality, and communication journals: A report for the Council of Communication Associations. Washington, DC: National Communication Association.

Niyazov, Y., Vogel, C., Price, R., Lund, B., Judd, D., Akil, A., . . . Shron, M. (2016). Open access meets discoverability: Citations to articles posted to Academia.edu. PLoS One, 11(2).

Peoples, B. K., Midway, S. R., Sackett, D., Lynch, A., & Cooney, P. B. (2016). Twitter predicts citation rates of ecological research. PLoS One, 11(11), 1-11.

Perry, G. M. (2012). Deciding where to publish: Some observations on journal impact factor and article influence score. Journal of Agricultural and Resource Economics, 37(3), 335-348.

Rodger, S., McKenna, K., & Brown, T. (2007). Quality and impact of occupational therapy journals: Authors’ perspectives. Australian Occupational Therapy Journal, 54(3), 174-184.

Rosvall, M., & Bergstrom, C. T. (2008). Maps of random walks on complex networks reveal community structure. Proceedings of the National Academy of Sciences, 105(4), 1118-1123.

Saha, S., Saint, S., & Christakis, D. A. (2003). Impact factor: a valid measure of journal quality? Journal of the Medical Library Association, 91(1), 42-46.

Shanta, A., Pradhan, A. S., & Sharma, S. D. (2013). Impact factor of a scientific journal: Is it a measure of quality of research? Journal of Medical Physics / Association of Medical Physicists of India, 38(4), 155-157.

Shewchuk, R. M., O’Connor, S. J., Williams, E. S., & Savage, G. T. (2006). Beyond rankings: Using cognitive mapping to understand what health care journals represent. Social Science & Medicine, 62(5), 1192-1204.

Smith, K. M., Crookes, E., & Crookes, P. A. (2013). Measuring research ‘impact’ for academic promotion: issues from the literature. Journal of Higher Education Policy and Management, 35(4), 410-420.

Svensson, G. (2010). SSCI and its impact factors: A "prisoner's dilemma"? European Journal of Marketing, 44(1), 23-33.

Vucovich, L. A., Blaine-Baker, J., & Smith, J. T. (2008). Analyzing the impact of an author's publications. Journal of the Medical Library Association, 96(1), 63-66.

Wilhite, A. W., & Fong, E. A. (2012). Coercive citation in academic publishing. Science, 335(6068), 542-543.

Williams, E. S., Stewart, R. T., O’Connor, S., Savage, G. T., & Shewchuk, R. (2002). Rating outlets for health care management research: An update and extension. Medical Care Research and Review, 59(3), 337-352.

Yue, W., Wilson, C. S., & Boller, F. (2007). Peer assessment of journal quality in clinical neurology. Journal of the Medical Library Association, 95(1), 70-76.