

Relationship between Journal-Ranking Metrics for a Multidisciplinary Set of Journals

Upeksha Perera and Manjula Wijewickrema

Abstract: Ranking of scholarly journals is important to many parties. Studying the relationships among various ranking metrics is key to understanding the significance of one metric based on another. This research investigates the relationship among four major journal-ranking indicators: the impact factor (IF), the Eigenfactor score (ES), the h-index (hI), and the SCImago Journal Rank (SJR). The authors used 519 journals from a diverse range of subjects and executed a correlation analysis. The results show that the highest correlation exists between the impact factor and SCImago Journal Rank (0.796). However, a very strong positive correlation between two metrics does not necessarily imply that journals are ranked according to the same pattern.

Introduction

It is important for a researcher to know the rank of the journal to which he or she intends to submit a manuscript for numerous reasons, including to identify the most appropriate journals for publishing, to form an idea about the level of journals, and to identify the publishing outlets that could advance the author's career.1 Bibliometrics, the statistical analysis of journals and similar publications, has introduced journal-ranking indicators for evaluating publication outlets. These metrics are defined by how they reflect the properties of the considered journals. Having an idea about the relationships among different ranking indicators is important to predict the behavior of one metric based on another. For instance, some metrics emphasize popularity, while others indicate prestige.2 Hence, a relationship between the metrics of popularity and prestige could be used to illustrate the performance of one metric in comparison to another.

portal: Libraries and the Academy, Vol. 18, No. 1 (2018), pp. 35–58. Copyright © 2018 by Johns Hopkins University Press, Baltimore, MD 21218.

The relationships between metrics are important to authors as well as to publishers. From the author's point of view, the knowledge of ranking metrics and their associations helps to submit manuscripts to balanced journals. For example, an author might select journals that are balanced in both popularity and prestige. Zao Liu and Gary Wan reveal that publishers, administrators, and librarians also want to collect these measures.3 The administrators of academic institutions are eager to obtain journal-ranking metrics because a part of academic tenure and promotions is based on them. In the case of the publishers and editors of journals, these ranks and their relationship reveal the status of journals. In addition, librarians can often refer to these statistics to decide journal subscriptions and allocate funds.

Researchers have already conducted studies to develop new ranking indicators and to find relationships among existing metrics. Furthermore, almost all these attempts have identified both strong and feeble characteristics of the existing indicators.4 However, most studies have focused on the correlation between a limited number of metrics: in most cases, only the journal impact factor and the h-index.5 It is relatively difficult to find comparison studies between the Eigenfactor score and SCImago Journal Rank. By contrast, our study reveals relationships among several metrics and hence goes beyond the earlier studies. We selected four frequently used metrics—impact factor, Eigenfactor score, h-index, and SCImago Journal Rank—for the comparison because the monitoring of the scientific influence of journals still relies on them.

Previous studies chose their study samples from a single subject stream, and their results may vary based on the context or the size of the selected sample of journals. Thus, their results cannot be generalized to all disciplines. However, the current study does not confine itself to a certain subject domain, and, as a result, the findings can be generalized more accurately. There are disputes about applying metric comparison studies across different subjects.6 However, we question this argument because the diversity of subjects is not the sole factor that affects these comparisons. The size of the journal, the type of articles it publishes, the language of the journal, open or closed access, and even the subdiscipline of the journal could impact citation-based metrics.

Importance

The introduction of this paper explains the significance of comparison studies among journal-ranking metrics. However, there are more reasons to give further attention to studies of journal ranking. With the rapid increase of commercial publishers, publishing has gradually turned into a profit-oriented industry. Consequently, many predatory journals charge publishing fees to authors without providing legitimate editorial and publishing services. It is difficult to keep authors from being exploited by these journals due to their higher acceptance rates and the ease of publishing in them.7 Predatory journals do not ensure quality or help scholars advance their careers, however, so authors should be encouraged to publish in more reputable journals whenever possible. They should avoid "wasting" an article that might be more widely read and cited if it were published in a better journal. Thus, the existence of reliable journal-ranking systems is important to make a good decision about the submission venue.

Most academic institutions consult journal-ranking lists for evaluating the academic achievements of their scholars. However, the ranking lists can be biased for several reasons.8 For example, the personal preferences of the committee members or the expert who prepared the list can lead to the incorrect ranking of a journal. Hence, developing new journal-ranking metrics and studying existing systems are crucial to ensure accurate lists.

Studies of journal ranking are also important for the wider academic community. For instance, although the impact factor receives the extensive attention of scholars, it is limited to the journals indexed by Thomson Reuters. Not everyone can pay to access these impact factor reports, keeping some scholars away from their use. More studies are imperative for a wider community of authors to get the maximum benefit of ranking metrics with the least restrictions.

Scholarly works are not confined to articles published in journals. They also appear in a range of formats, such as conference presentations, video demonstrations, workshop activities, and the like. However, it is difficult to find proper methodologies for ranking these other formats. As the number of formats is increasing with the development of technology, the need for new methodologies to rank them is vital. Expansion of journal-ranking studies is rushing to achieve these goals, and many young researchers are enthusiastic to learn about this field. Librarians play an important role in directing new scholars toward valid journal-ranking measures. As a result, libraries in many academic institutions have developed comprehensive subject guides to lead researchers to explore the field further. In addition, these guides assist authors in finding the most appropriate journal outlets to which to submit their manuscripts. This paper provides a list of online subject guides developed by university libraries (see the Appendix).

Four Ranking Metrics

The current study attempts to find relationships among four metrics: impact factor, Eigenfactor score, h-index, and SCImago Journal Rank. A short introduction to these four metrics is appropriate.

Impact Factor

The impact factor is one of the most widely employed metrics in academia. It is often used to assess the relative importance of a journal. In general, impact factor values are calculated for the journals indexed by Thomson Reuters and published yearly in Journal Citation Reports. The impact factor of a two-year time window is defined as the ratio between the numerator—that is, the number of citations received by the journal in the current year to the articles published in the two preceding years—and the denominator, the total number of citable articles published in the journal within the same two years. Although the impact factor has received significant attention from scholars, it is criticized for its limitations. One limitation involves increasing a journal's impact factor through self-citation, in which articles in the journal refer to other articles previously published in the same journal. Other shortcomings of the impact factor include that it is based on a small set of journals,9 that it has a bias toward popularity,10 and its dependence on citation behavior.11
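In symbols, and following the definition just given (this is a standard rendering of the two-year impact factor, not Thomson Reuters' own notation), the impact factor of a journal for year y can be written as:

```latex
\mathrm{IF}_{y} \;=\; \frac{C_{y}(y-1) + C_{y}(y-2)}{A_{y-1} + A_{y-2}}
```

where C_y(t) denotes the citations received in year y by the items the journal published in year t, and A_t denotes the number of citable articles the journal published in year t.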

Eigenfactor Score

The Eigenfactor score is a prestige metric available to measure the overall importance of a journal.12 The approach used by the Eigenfactor score resembles that used by Google for ranking websites in a search. Generally, the score is based on the number of citations received by the journal in the considered year for the articles published during the past five years. However, the metric also considers the type of journals that contributed these citations. Citations in highly cited journals influence the Eigenfactor score more than citations in lesser cited, lower-tier journals. This measure may be more robust than the impact factor, which counts merely the number of citations without considering their origin.13 The ability of Eigenfactor to consider a target window of five years, to use the structure of the entire citation network, and to remove self-citations, and its strong relationship with other ranking metrics, can be considered as significant advantages.14 Furthermore, it allows better comparison among various research fields because the metric has been adjusted for citation differences across multiple disciplines. One can access Eigenfactor scores freely from eigenfactor.org.
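To make the recursive idea concrete, the following is a minimal numpy sketch of the PageRank-style iteration that underlies prestige metrics such as the Eigenfactor score. It illustrates the principle only: the published Eigenfactor algorithm additionally weights the teleportation term by each journal's share of published articles, treats dangling journals explicitly, and rescales the final scores.

```python
import numpy as np

def influence_scores(C, alpha=0.85, tol=1e-9, max_iter=1000):
    """Iteratively weight journals by the influence of the journals that cite them.

    C[i, j] = number of citations from journal j to journal i.
    Self-citations are removed, as the Eigenfactor score does.
    """
    C = C.astype(float)
    np.fill_diagonal(C, 0.0)              # drop self-citations
    col_sums = C.sum(axis=0)
    col_sums[col_sums == 0] = 1.0         # journals that cite nothing
    M = C / col_sums                      # column-normalized citation matrix
    n = C.shape[0]
    v = np.full(n, 1.0 / n)               # start with equal influence
    for _ in range(max_iter):
        v_next = alpha * (M @ v) + (1 - alpha) / n
        if np.abs(v_next - v).sum() < tol:
            break
        v = v_next
    return v / v.sum()
```

Because each journal's weight depends on the weights of its citing journals, a citation from an already influential journal raises the score more than one from a rarely cited journal, which is the behavior described above.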

h-Index

The h-index (h denotes high-impact) values the impact of an individual's scholarly output rather than measuring popularity. Although the metric was originally developed to assess researchers, it has been adapted to evaluate journals too. The h-index for a journal can be simply determined as y, if the journal has y articles that all were cited at least y times.15 This value is calculated using data from Scopus, Web of Science, or Google Scholar. One of the significant advantages of the h-index is its flexibility for adjusting to different time frames. Hence, it can cope with the research culture of any given discipline.16 In addition to evaluating individual researchers and journals, the scientific output of research groups, faculties, and countries has been determined by the h-index.17
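The definition translates directly into a few lines of code. The sketch below is a generic computation of the h-index from a list of per-article citation counts; it is not code used in the study.

```python
def h_index(citations):
    """Largest y such that y articles each have at least y citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: articles cited 10, 8, 5, 4, and 3 times give h = 4.
assert h_index([10, 8, 5, 4, 3]) == 4
```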

SCImago Journal Rank

SCImago Journal Rank is another indicator available to measure the scientific prestige of journals.18 Generally, this metric reflects the number of times an average article in a journal is cited. Nevertheless, it assigns weights to each citation based on the importance of the cited journal. The calculation runs through several stages with an iterative process. Weights are calculated using information in Elsevier's Scopus database for a three-year citation window. Although some researchers consider SCImago Journal Rank as a variant of the Eigenfactor score, it also has characteristics of the impact factor measure.19 A remarkable achievement of this metric over the impact factor is that it avoids purposive inflation of rank by self-citations. It does not entirely neglect self-citations, but it restricts them to a maximum of 33 percent of the total citations.20

Both the Eigenfactor score and SCImago Journal Rank are recursive measures. That is, they account for differences in citation rates among the various disciplines and are weighted to account for the impact of each citing journal while considering the citation network. In the Eigenfactor score and SCImago Journal Rank, a citation in an anthropology journal counts for more than a citation in a biochemistry journal, because citations are less common (and therefore more "valuable") in anthropology. Likewise, a citation in a top journal counts for more than a citation in a mid-level specialty journal. These two related characteristics make Eigenfactor and SCImago Journal Rank different from the impact factor and the h-index, which do not account for variations in citation rates across disciplines or journals.
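The 33 percent rule described above can be expressed in a line or two. The snippet below is only an illustration of that cap, under the stated assumption that the cap applies to a journal's total citation count; it is not the full SCImago Journal Rank computation, which also applies the iterative weighting described earlier.

```python
def capped_self_citations(self_citations, total_citations, cap=0.33):
    """Count self-citations only up to 33 percent of a journal's total citations."""
    return min(self_citations, int(cap * total_citations))
```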

Previous Studies

Researchers have already conducted several studies to find the relationships between different journal-ranking indicators. Gad Saad carried out research to determine the correlation between a journal's h-index and its impact factor.21 The study was done for two separate samples. The first sample consisted of 50 business-related journals, and the second sample comprised 42 marketing journals. Saad's research reported a significant correlation between the two metrics in both samples. Julian Olden illustrated how to use the h-index to determine the standard of ecological journals.22 Further, he described a moderate positive relationship between the h-index and the Institute for Scientific Information (ISI) impact factor. He used 111 journals for this research and studied patterns of h-indexes to investigate the journals' past performance over a 25-year time frame. Another researcher reported a higher correlation between the impact factor and the h-index for psychiatry journals than for pharmacology journals.23 That paper also discussed the possibility of using the h-index as a complementary alternative to the well-known impact factor of journals in the same discipline.

In 2010, a team introduced a new metric called SCImago Journal Rank as an attempt to offer an alternative to the impact factor.24 By that time, the impact factor was highly criticized for its bias toward self-citations and non-citable items. However, the same paper claimed that SCImago Journal Rank and the impact factor are strongly correlated. Despite the strong correlation, the research team observed that many journals with high impact factors received lower ratings in SCImago Journal Rank, perhaps because SCImago Journal Rank accounts for the average citation impact within each field of study. Because microbiology is a field with high citation rates, for example, most microbiology journals have high impact factors. Only the better microbiology journals have high values in SCImago Journal Rank, however. Therefore, the authors of this paper were encouraged to conduct subsequent studies to determine the precise reason for this phenomenon.

David Hodge and Jeffrey Lacasse studied a collection of social work journals.25 They determined correlations between the Google Scholar h-index and the impact factor. In addition, their research revealed a relationship between the g-index, a measure of a scientist's productivity, and the impact factor for the same subject domain. Their findings resembled those from previous studies, and both indexes correlated highly with the impact factor. Liu and Wan again carried out a similar type of investigation.26 They have revealed a strong relationship of the impact factor with both the h-index and the g-index. These researchers have retrieved the impact factor values from Journal Citation Reports and the h- and g-indexes from Web of Science and Google Scholar, respectively. However, their study was limited to the subject domain of virology. Furthermore, they proved that there was no great difference among rank orders obtained for the h- and g-indexes from Web of Science and Google Scholar.

Another interesting study was done by a group to determine the suitability of the h-index to evaluate the scholarly impact of journals in business and management.27
The findings revealed a high correlation between the h-index and the impact factor, although the correlation with data taken from Web of Science was higher than that with data taken from Google Scholar. Ultimately, this paper concluded that the h-index is a more appropriate indicator than the impact factor to evaluate the citation impact of scholarly journals. Yet another study looked at the correlation between the impact factor and another three journal-ranking indexes.28 According to the findings, the highest correlation was reported between the impact factor and the Article Influence Score, the average standing of a journal's articles over the first five years after publication, while the lowest correlation was between the impact factor and Eigenfactor. However, this study was limited to a specific set of journals from pediatric neurology. These studies reveal a strong correlation between the impact factor and the h-index. Both metrics attempt to measure the same underlying construct, journal quality,29 which may explain this strong association.

Methodology

The authors randomly selected a collection of 519 journal titles from 11,761 titles in the 2014 Thomson Reuters Journal Citation Reports for this study. In addition to selecting titles, the authors obtained impact factor values and Eigenfactor scores for these selected titles from the same list. The 2014 h-index and SCImago Journal Rank values of the corresponding titles were retrieved from the SCImago Journal Rank official website Scimago Journal & Country Rank (http://www.scimagojr.com/). Unlike many previous studies,30 we did not limit our journal sample to a single subject area (see Figure 1).

Next, the authors calculated the Pearson correlation coefficient (r values) to compare each of the four variables—impact factor, Eigenfactor score, h-index, and SCImago Journal Rank—with each of the others. The correlation coefficient indicates the extent to which the two variables are related, with values whose magnitude ranges from 0.00 (neither variable is of any value in predicting the other) to 1.00 (either variable can be used to predict the other with no error whatsoever). Based on these correlation values, we calculated the average correlation to all other measures to have an idea of how close each lay to all other measures. In other words, this process found the metric that has the closest and the weakest relationship with every other measure. Similar methods were used to calculate all the average correlation values (see Table 3). Furthermore, with a view to conforming to the assumptions of Pearson correlation,31 we took logarithms of each measure. Later, adapting one of the previous methods,32 we visualized the rank changes between the four measures to see how the actual ranking was affected when moving from one metric to another, although the high correlation suggests similarity between the metrics (see Figure 5).

Finally, the authors performed a factor analysis to extract the major dimensions and to identify the clusters that the four impact metrics form.33 They executed all analyses using SPSS version 22.
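As a rough sketch of this procedure (the authors worked in SPSS; the snippet below only illustrates the same steps in Python and assumes the four metric values have already been loaded into arrays keyed by metric name):

```python
import numpy as np
from scipy import stats

def pairwise_log_correlations(metrics):
    """Pearson r between every pair of metrics, computed on natural logarithms."""
    names = list(metrics)
    logged = {name: np.log(np.asarray(metrics[name], dtype=float)) for name in names}
    results = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r, p = stats.pearsonr(logged[a], logged[b])
            results[(a, b)] = (r, p)
    return results

def average_correlations(results, names):
    """Mean of each metric's correlations with the other metrics (cf. Table 3)."""
    return {name: np.mean([r for pair, (r, p) in results.items() if name in pair])
            for name in names}
```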


Figure 1. Numbers of journals by subject area in the selected sample

Results and Discussion

Correlation Analysis

Figure 2 illustrates the values for the measures impact factor, Eigenfactor score, h-index, and SCImago Journal Rank for the study set of 519 journals. Moreover, each graph has been plotted starting from the highest-ranked journal to the lowest. The analogous points of the horizontal axis do not necessarily represent the same journal in all four graphs. The trends in the impact factor and SCImago Journal Rank are similar in shape except for the actual measure values. In contrast, the Eigenfactor score has a steeper descent at the beginning but becomes less steep farther down the curve. The curve for the h-index is much more gradual compared to the other curves. We can further visualize this using Figure 3 and Table 1. Figure 3 illustrates the boxplot showing the distribution for the normalized measure values between [0,1], and Table 1 shows the quartiles of the metrics, respectively. The boxplot for the Eigenfactor score is less spread and skewed, while the boxplot for the h-index is more spread and less skewed. This illustrates the concentration of the two distributions. Meanwhile, the impact factor and SCImago Journal Rank have similar, in-between boxplots for their normalized data.

In general, different ranking metrics determine their values based on distinct scales. For example, the scale used to measure SCImago Journal Rank values is smaller than the scale used for the impact factor values. Therefore, normalization must be done to bring the values given by all the four metrics into the same scale to produce a comparable set of data for the four metrics.
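The rescaling referred to here is ordinary min-max normalization; a minimal sketch (the paper does not publish its normalization code) is:

```python
import numpy as np

def min_max_normalize(values):
    """Rescale a metric to [0, 1] so different metrics can be compared and plotted together."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())
```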


Figure 2. Variation of the values of impact factor, Eigenfactor score, h-index, and SCImago Journal Rank for the study set of 519 journals


Figure 3. Boxplots showing the distribution for normalized measure values between [0,1]: impact factor (IF), Eigenfactor score (ES), h-index (hI), and SCImago Journal Rank (SJR)

Table 1. Quartile values for the journal metrics

Metric                  Q1*       Q2†       Q3‡

Impact factor           0.7192    1.3560    2.4130
Eigenfactor score       0.0008    0.0022    0.0077
h-index                 14.0000   32.0000   57.7500
SCImago Journal Rank    0.3335    0.6590    1.2622

*The first quartile (Q1) is the middle number between the smallest number and the median of the data set.
†The second quartile (Q2) is the median of the data.
‡The third quartile (Q3) is the middle value between the median and the highest value of the data set.

As Table 1 shows, the interquartile ranges Q3 − Q2 and Q2 − Q1 for the Eigenfactor score are 0.0055 and 0.0014, respectively, verifying a skewed distribution.

Table 2 shows the Pearson correlation coefficients between all four metrics. Correlation is an effect size, and it can be used to describe the strength of a relationship using the guide that James Evans has suggested:

0.00–0.19 "very weak"
0.20–0.39 "weak"
0.40–0.59 "moderate"
0.60–0.79 "strong"
0.80–1.0 "very strong."34
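Read as a simple lookup, one convenient coding of Evans's bins (applied to the magnitude of r, following the cutoffs listed above) is:

```python
def evans_strength(r):
    """Descriptive label for a correlation coefficient, following Evans's guide."""
    r = abs(r)
    if r < 0.20:
        return "very weak"
    if r < 0.40:
        return "weak"
    if r < 0.60:
        return "moderate"
    if r < 0.80:
        return "strong"
    return "very strong"
```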

Table 2 gives the correlation values of the four metrics with each other. The impact factor and SCImago Journal Rank have a very strong, highly significant positive correlation value of 0.796, while the impact factor and the Eigenfactor score show a very weak but significant positive correlation of 0.126. The Eigenfactor score shows no strong correlation with either the h-index or SCImago Journal Rank. However, the correlation between the h-index and SCImago Journal Rank is strong. Table 3 shows the average correlation of each metric to the other three metrics in the study. According to Table 3, Eigenfactor has the lowest average correlation to all other measures, indicating its isolated position among the studied set of measures.

Each variable was transformed (that is, entered as its natural logarithm) to maintain the linearity necessary for the calculation of correlation coefficients.35

Table 2. Pearson correlation coefficients between metrics

                        Eigenfactor score    h-index    SCImago Journal Rank

Impact factor           0.126*               0.580†     0.796†
Eigenfactor score                            0.275†     0.102
h-index                                                 0.596†

*Significant values at significance level 0.05.
†Significant values at significance level 0.01.

Table 3. Each metric's average correlation with the other three

Metric                  Average r value

Impact factor           0.500667
SCImago Journal Rank    0.498
h-index                 0.483667
Eigenfactor score       0.167667

The relationships between the variables, in logarithmic form, are essentially linear. The confidence ellipses show where the specified percentage of the data should lie, assuming a bivariate normal distribution. We can see that log(SJR) versus log(h-index) and log(SJR) versus log(ES) are confined in the specified confidence interval.

The confidence ellipses collapse diagonally as the correlation between two variables approaches 1 or –1. They are more circular when two variables are uncorrelated. In this sense, log(IF) versus log(SJR) are highly correlated because the ellipse is somewhat elongated, while log(h-index) versus log(SJR) are the least correlated variables because the ellipse is more circular.

In addition to the strong correlation values, Figure 4 shows the close relationship between the impact factor and SCImago Journal Rank. Here, the normalized (to the range [0, 1]) impact factor and SCImago Journal Rank values are graphed.


Figure 4. Normalized impact factor and SCImago Journal Rank values of the sample. Normalized data to the range [0,1]

We used the standard abbreviations of ISO (International Organization for Standardization) 4 to indicate the journal titles. For example, J. Am. Math. Soc. stands for Journal of the American Mathematical Society and Prog. Mater. Sci. for Progress in Materials Science.

From the plot of the impact factor and SCImago Journal Rank, we can see that SCImago Journal Rank closely follows the impact factor. Prog. Mater. Sci. has the highest impact factor and SCImago Journal Rank value for the study set. At times, however, the impact factor overestimates SCImago Journal Rank, and vice versa, because of non-citable items, self-citations, and other factors.

Consider the impact factor and SCImago Journal Rank values obtained for World Psychiatry. Table 4 shows a high impact factor compared to SCImago Journal Rank even when the data are normalized to the range [0,1] (0.518 and 0.25, respectively). According to the data available in the SCImago Journal Rank official website, 23 percent of documents were citable from 2011 to 2013. This implies a high impact factor but low SCImago Journal Rank in 2014. This could be the effect of non-citable items.35 Another example is J. Am. Math. Soc., which has a low impact factor but a high SCImago Journal Rank value. The reason may be that the journal has very low self-citations (1/263 in 2014) and almost no non-citable items (according to the SCImago Journal Rank official website).

Figure 5 depicts the variation of the ranks with respect to the impact factor, the Eigenfactor score, the h-index, and SCImago Journal Rank of the top 10 journals (according to the impact factor) in the study set. This information is further given by Table 5. Even in the top 10, there are drastic changes in the ranks when moving from one measure to another. For example, Prog. Mater. Sci., which ranks 26th with respect to the impact factor and 40th with respect to SCImago Journal Rank, drops more than 1,300 places in the Eigenfactor score and more than 1,020 places in h-index rank with respect to the impact factor.

Table 4. Impact factor and SCImago Journal Rank with their normalized values for two journals

Journal                                          Impact factor   Normalized impact factor   SCImago Journal Rank   Normalized SCImago Journal Rank

World Psychiatry                                 14.225          0.518                      2.875                  0.25
Journal of the American Mathematical Society     2.556           0.092                      6.605                  0.57


Figure 5. Variation of the ranks of the 10 top-ranked journals (according to impact factor) in the study set

The SCImago Journal Rank of Psychol. Bull. (Psychological Bulletin) drops more than 40 places with respect to the impact factor. Moving down the list, there are even more significant changes. For example, Kidney Int. Suppl. (Kidney International Supplements) differs more than 13,000 places when ranked according to the impact factor and the h-index. Even more dramatic changes happen in ranks farther down. Therefore, although a strong correlation exists between the values of considered metrics (for example, the impact factor and SCImago Journal Rank), these correlations do not apply in the same way to ranks.
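Such rank displacements can be computed directly from the metric values. The sketch below is illustrative only (the authors describe the comparison but do not publish code); rank 1 denotes the journal with the highest value of a metric.

```python
import numpy as np
from scipy.stats import rankdata

def rank_shift(metric_a, metric_b):
    """How many places each journal moves when re-ranked from metric A to metric B."""
    rank_a = rankdata(-np.asarray(metric_a, dtype=float), method="min")
    rank_b = rankdata(-np.asarray(metric_b, dtype=float), method="min")
    return rank_b - rank_a
```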

Table 5. World rankings of the top 10 journals (according to impact factor) in the study set

Journal*                    Impact factor   Eigenfactor score   h-index   SCImago Journal Rank

Prog. Mater. Sci.           26              1326                1050      40
Psychol. Bull.              95              588                 127       136
World Psychiatry            110             3615                5298      567
B. Am. Meteorol. Soc.       150             513                 499       195
Kidney Int. Suppl.          174             3672                14057     1187
NPG Asia Mater.             184             3489                6914      466
J. Am. Soc. Nephrol.        210             252                 123       283
Br. J. Psychiatry           275             631                 258       672
Environ. Health Persp.      278             288                 154       611
BBA-Rev. Cancer             287             1964                898       542

*Journal titles are abbreviated according to ISO (International Organization for Standardization) 4 standard abbreviations: Prog. Mater. Sci. = Progress in Materials Science; Psychol. Bull. = Psychological Bulletin; B. Am. Meteorol. Soc. = Bulletin of the American Meteorological Society; Kidney Int. Suppl. = Kidney International Supplements; NPG Asia Mater. = NPG [Nature Publishing Group] Asia Materials; J. Am. Soc. Nephrol. = Journal of the American Society of Nephrology; Br. J. Psychiatry = British Journal of Psychiatry; Environ. Health Persp. = Environmental Health Perspectives; and BBA-Rev. Cancer = BBA [Biochimica et Biophysica Acta] Reviews on Cancer.


Table 6. Results of Kaiser-Meyer-Olkin (KMO) test and Bartlett's test of sphericity*

Kaiser-Meyer-Olkin measure of sampling adequacy                 0.682
Bartlett's test of sphericity     Approximate chi-squared       741.937
                                  Degree of freedom             6
                                  Significance                  0.000

*The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity are used to test whether it is appropriate to proceed with factor analysis. The KMO value should be greater than 0.50 for the factor analysis to proceed. Bartlett's test of sphericity also established the appropriateness of factor analysis by confirming a significant correlation among variables.

Factor Analysis accepted The Kaiser-Meyer-Olkin (KMO) test and Bartlett’sand test of sphericity both indicated that the data were appropriate for factor analysis based on sampling adequacy and the case- to-variable ratio.36 The given value, KMO>0.6 (mediocre), indicated that it was sufficient to perform factor analysis on the edited,data set (see Table 6).37 Principal component analysis, a variant of factor analysis, was employed to iden- tify the structure underlying the set of four variables—that is, to determine whether all four variables can be fullycopy represented by a single factor representing generic citation impact, or whether multiple factors (representing specific types or aspects of citation impact) must be used to represent the set of four variables. The goal of factor analysis is to identify the relatively few dimensions underlying a larger set of variables—in this case, the dimensionsreviewed, underlying the four citation impact indicators (SCImago Journal Rank, impact factor, h-index, and Eigenfactor score). The results are given in Table 7. We 39 used peervarimax rotation with Kaiser normalization as the rotation method and observed thatis the rotation converged in three iterations. The number of components was deter- mined by the Eigenvalue-one criterion. Based on Table 7, the correlations between four measures have been mapped onto two principal components of principal component mss.analysis (see Figure 6). The impact factor and the SCImago Journal Rank form a cluster at the bottom right, This the Eigenfactor score is isolated in upper left corner, and the h-index lies between the two groups. These results are analogous to the results obtained in a previous study.40 The Eigenfactor score forming a separate group can be explained by having the lowest average correlation with all the other measures, as shown in Table 3. This is further 50 Relationship between Journal-Ranking Metrics for a Multidisciplinary Set of Journals

Table 7. Factor loadings (rotated component matrix)*

                        Component 1    Component 2

SCImago Journal Rank    0.925
Impact factor           0.916
h-index                 0.770          0.323
Eigenfactor score                      0.981

*Each loading indicates the extent to which the underlying factor contributes to one of the four measured variables (SCImago Journal Rank, impact factor, h-index, or Eigenfactor score).


Figure 6. Correlations of the four measures of journal rank on the first two components of principal component analysis (PCA)


Figure 7. Hierarchical cluster analysis of the four measures of journal rank


According to the results given by both correlation analysis and factor analysis (see Table 2, Figure 6, and Figure 7), the size-independent measures such as the impact factor and SCImago Journal Rank build the strongest correlations between them. That is, the two size-independent metrics (SCImago Journal Rank and impact factor) both load highest on the first factor, which seems to represent the average "citedness" of an article in each journal. In contrast, the two size-dependent metrics (h-index and Eigenfactor) load highest on the second factor, which represents the "citedness" of the journal as a whole—all articles combined. Scores on the second factor, but not the first, can therefore be expected to vary with the number of articles published in each journal.

The two recursive measures, SCImago Journal Rank and Eigenfactor, account for the "citedness" of each subject area as well as the importance of the journals in which the citations appear. The other two metrics, the impact factor and the h-index, consider neither of those characteristics. This distinction cannot be seen in the factor analysis results, however, since SCImago Journal Rank and Eigenfactor do not share enough variance to create a single "recursive metric" factor. Table 7 therefore suggests that the primary distinction among the four metrics is not whether they are recursive, but whether they are size-dependent (see Table 7).

Conclusion

There is a very strong, positive, and highly significant correlation between the values of the impact factor and SCImago Journal Rank for a general collection of journals, most of them from the sciences. Hence, either SCImago Journal Rank or the impact factor can be used to represent citation impact in a general sense. In contrast, both the h-index and the Eigenfactor score have unique attributes or idiosyncrasies that make them less suitable as generic indicators of citation impact. However, the impact factor and SCImago Journal Rank reflect some disparity on occasion. This inconsistency could result from the variation in the number of non-citable items and self-citations. More studies are needed to investigate this disparity further.

The discovery of a strong correlation between the impact factor values and the SCImago Journal Rank values can be utilized under different circumstances. The SCImago Journal Rank measures are available for free, but the impact factor values are not. As a result, many authors have difficulty choosing their publication venue based on the impact factor scores. However, these difficulties can be reduced to some extent by investigating the SCImago Journal Rank values of corresponding journals. If an author needs to select a journal based on the highest impact factor, the selection can be done after arranging the needed journals according to their SCImago Journal Rank values. Librarians can use a similar procedure to decide which journals they should subscribe to for the next year.

Another practical difficulty arises because the impact factor values are assigned only for the journals indexed by Thomson Reuters. As a result, administrators of academic institutions cannot evaluate the performance of employees based on the impact factor if the publications are not indexed by Thomson Reuters. This is a common problem for academic institutions because many of them still use the impact factor as the major indicator to evaluate academic performance. Nevertheless, since the SCImago Journal Rank covers a larger number of journals than the impact factor, administrators can use the strong correlation between the impact factor and SCImago Journal Rank to get a rough idea about the possible impact factor values of journals. Moreover, editors and publishers will benefit from this strong correlation between the impact factor and the SCImago Journal Rank. They can use the SCImago Journal Rank as an alternative to the impact factor to evaluate the journals they manage or publish.

Since the Eigenfactor score values show a weak or very weak correlation with the other three measures, Eigenfactor behaves much differently from the others. This argument is further supported by the principal component analysis.

Although there are strong correlations between some of the metrics, each indicator produces a unique rank-ordering of journals that is distinctive in at least some respects. Thus, it will be a matter of which rank is being used; one cannot be replaced by another, because they measure different aspects of a journal. The authors recommend further studies to address this matter exclusively and to investigate the nature of the association between the correlations of the values of pairs of metrics and the journal ranks of the same pairs of metrics.

Upeksha Perera is a senior lecturer in the Department of Mathematics at the University of Kelaniya in Kelaniya, Sri Lanka; she may be reached by e-mail at: [email protected].

Manjula Wijewickrema is a senior assistant librarian in the Main Library at the Sabaragamuwa University of Sri Lanka in Belihuloya, Sri Lanka; he may be reached by e-mail at: manju@lib.sab.ac.lk.

Appendix

Here is a list of online subject guides for journal-ranking metrics developed by university libraries. These guides include some or all of the following information:

• Introduction to numerous journal-ranking metrics.
• Strengths and weaknesses of available ranking metrics.
• Video demonstrations regarding how to use ranking measures.
• More reading materials for in-depth details of different ranking measures.
• Instructions for identifying predatory publications and avoiding them.
• Instruction on how to use different databases or citation reports to retrieve needed information.
• Instructions to evaluate publishers.
• Further details of bibliometrics.
• Information about citation management software.
• Information about periodical directories (for example, Ulrich's Periodicals Directory).
• Using JANE (Journal/Author Name Estimator) and EBSCO Discovery Service as tools to select appropriate journals for submitting manuscripts.

University Uniform resource locator (URL)

Alberta, University of, Edmonton, Canada   http://guides.library.ualberta.ca/c.php?g=565326&p=3894639
Auburn University, Auburn, Alabama   http://libguides.auburn.edu/c.php?g=518949&p=3549571
Auckland, University of, Auckland, New Zealand   https://www.library.auckland.ac.nz/subject-guides/bus/infosources/pbrf_information.htm#journal_impact_factors
Auckland University of Technology, Auckland, New Zealand   http://aut.ac.nz.libguides.com/impact
Australian Catholic University, Sydney, Australia   http://libguides.acu.edu.au/research_impact_guide/journal_impact
Australian National University, Canberra, Australia   http://libguides.anu.edu.au/c.php?g=465149&p=3180825
British Columbia, University of, Vancouver, Canada   http://guides.library.ubc.ca/citationmetricsworkshop/journals
Brock University, St. Catharines, Ontario, Canada   http://researchguides.library.brocku.ca/c.php?g=99797&p=646650
California, University of, Berkeley   http://guides.lib.berkeley.edu/researchimpact/journal-impact
California, University of, Davis   http://guides.lib.ucdavis.edu/content.php?pid=554723&sid=4572216#17378757
California, University of, Irvine   http://guides.lib.uci.edu/researchimpact-metrics/source_impact
California, University of, Los Angeles   http://guides.library.ucla.edu/impact
Carleton University, Ottawa, Canada   https://library.carleton.ca/help/journal-rankings
Charles Sturt University, New South Wales, Australia   http://libguides.csu.edu.au/c.php?g=139596&p=913240#s-lg-box-wrapper-13951473
Chatham University, Pittsburgh, Pennsylvania   http://libguides.chatham.edu/facultydev/journalrank
Chicago, University of   http://guides.lib.uchicago.edu/c.php?g=298280&p=1991766
City University of Hong Kong, Hong Kong, China   http://libguides.library.cityu.edu.hk/researchimpact/impact-of-journals
Curtin University, Perth, Australia   http://libguides.library.curtin.edu.au/c.php?g=202403&p=1332898
Duke University, Durham, North Carolina   http://guides.mclibrary.duke.edu/researchimpact
Georgetown University, Washington, D.C.   http://guides.library.georgetown.edu/c.php?g=318870&p=2164655
Georgia State University, Atlanta   http://research.library.gsu.edu/c.php?g=115299&p=753704
Harvard University, Cambridge, Massachusetts   http://guides.library.harvard.edu/c.php?g=309907&p=2070141
Hong Kong Polytechnic University, Hong Kong, China   http://libguides.lb.polyu.edu.hk/journalimpact


Illinois, University of, at Chicago   http://researchguides.uic.edu/journalselection
Iowa, University of, Iowa City   http://guides.lib.uiowa.edu/scholarlyimpact/
Maryland, University of, College Park   http://lib.guides.umd.edu/bibliometrics
Massachusetts, University of, Lowell   http://libguides.uml.edu/JournalImpact
Michigan, University of, Ann Arbor   http://guides.lib.umich.edu/citation
Missouri, University of, Columbia   http://libraryguides.missouri.edu/impact
Monash University, Melbourne, Australia   http://guides.lib.monash.edu/c.php?g=219665&p=1453955
Nevada, University of, Reno   http://guides.library.unr.edu/impact
New South Wales, University of, Sydney, Australia   http://subjectguides.library.unsw.edu.au/researchimpact/
New York City College of Technology   http://libguides.citytech.cuny.edu/c.php?g=559775&p=3850145
Newcastle University, Newcastle upon Tyne, United Kingdom   http://libguides.ncl.ac.uk/impact/journal
North Carolina, University of, at Chapel Hill   http://guides.lib.unc.edu/measureimpact/journalimpact
Ohio University, Athens   http://libguides.library.ohiou.edu/c.php?g=40967&p=930736
Pennsylvania, University of, Philadelphia   http://guides.library.upenn.edu/education/publishing
Pennsylvania State University, State College   http://guides.libraries.psu.edu/bibliometrics/journalmeasures
Queensland, University of, Brisbane, Australia   http://guides.library.uq.edu.au/grants-promotions
Queensland University of Technology, Brisbane, Australia   http://libguides.library.qut.edu.au/measuringresearchimpact
Rutgers, the State University of New Jersey, New Brunswick   http://libguides.rutgers.edu/c.php?g=336414&p=2271168
Saskatchewan, University of, Saskatoon, Canada   http://libguides.usask.ca/c.php?g=121420&p=793232
Singapore Management University, Singapore   http://researchguides.smu.edu.sg/c.php?g=421999&p=2880631
Southern Illinois University, Carbondale   http://libguides.lib.siu.edu/content.php?pid=49214&sid=362062
Stellenbosch University, Stellenbosch, South Africa   http://libguides.sun.ac.za/content.php?pid=429497&sid=3883894
Stirling, University of, Stirling, United Kingdom   http://libguides.stir.ac.uk/databaseguides/jcr
Stony Brook University, Stony Brook, New York   http://guides.library.stonybrook.edu/medicine/measuring_research_impact
Suffolk, University of, Ipswich, United Kingdom   http://libguides.uos.ac.uk/Historysubjectguide/journals
Swinburne University of Technology, Melbourne, Australia   http://www.swinburne.edu.my/library2/subjectguide/engscience.html


Sydney, University of, Sydney, Australia   http://libguides.library.usyd.edu.au/c.php?g=508097&p=3477006
Syracuse University, Syracuse, New York   http://researchguides.library.syr.edu/citationmetrics/journalmetrics
Texas Tech University, Lubbock   http://guides.library.ttu.edu/impact
Toronto, University of, Canada   http://guides.library.utoronto.ca/c.php?g=250624&p=1671315
Tulane University, New Orleans, Louisiana   http://libguides.tulane.edu/dermatology/dermjournals
Utah, University of, Salt Lake City   http://campusguides.lib.utah.edu/bibliometrics
Victoria University of Wellington, Wellington, New Zealand   http://libguides.victoria.ac.nz/optimising-your-research-impact/journal-metrics
Virginia Polytechnic Institute and State University, Blacksburg   http://www.lib.vt.edu/research/metrics/journal/index.html
Warwick, University of, Coventry, United Kingdom   https://www2.warwick.ac.uk/services/library/staff/research/disseminating-research/which-journals/
Washington, University of, Seattle   http://guides.lib.uw.edu/hsl/impactfactors
Western Australia, University of, Crawley, Australia   http://guides.library.uwa.edu.au/c.php?g=325233&p=2177836
Yale University, New Haven, Connecticut   http://guides.library.yale.edu/articlepublishing

Notes edited,

1. Bo-Christer Björk and Jonas Holmström, "Benchmarking Scientific Journals from the Submitting Author's Viewpoint," Learned Publishing 19, 2 (2006): 147–55.
2. Johan Bollen, Marko A. Rodriquez, and Herbert Van de Sompel, "Journal Status," Scientometrics 69, 3 (2006): 669–87; Ying Ding and Blaise Cronin, "Popular and/or Prestigious? Measures of Scholarly Esteem," Information Processing & Management 47, 1 (2011): 80–96; Massimo Franceschet, "The Difference between Popularity and Prestige in the Sciences and in the Social Sciences: A Bibliometric Analysis," Journal of Informetrics 4, 1 (2010): 55–63.
3. Zao Liu and Gary (Gang) Wan, "Comparing Journal Impact Factor and H-Type Indices in Virology Journals," Library Philosophy and Practice (2012).
4. Guillaume Chapron and Aurélie Husté, "Open, Fair, and Free Journal Ranking for Researchers," BioScience 56, 7 (2006): 558–59; Rodrigo Costas and María Bordons, "The H-Index: Advantages, Limitations and Its Relation with Other Bibliometric Indicators at the Micro Level," Journal of Informetrics 1, 3 (2007): 193–203; Clint D. Kelly and Michael D. Jennions, "The H Index and Career Assessment by Numbers," Trends in Ecology & Evolution 21, 4 (2006): 167–70; Julian D. Olden, "How Do Ecological Journals Stack-Up? Ranking of Scientific Quality According to the H Index," Ecoscience 14, 3 (2007): 370–76.
5. Eugene Garfield, "The History and Meaning of the Journal Impact Factor," JAMA (Journal of the American Medical Association) 295, 1 (2006): 90–93; Jorge E. Hirsch, "An Index to Quantify an Individual's Scientific Research Output," Proceedings of the National Academy of Sciences of the United States of America 102, 46 (2005): 16569–72.
6. Benjamin M. Althouse, Jevin D. West, Carl T. Bergstrom, and Theodore Bergstrom,

"Differences in Impact Factor across Fields and over Time," Journal of the American Society for Information Science and Technology 60, 1 (2009): 27–34; Ana M. Ramírez, Esther O. García, and J. Antonio Del Río, "Renormalized Impact Factor," Scientometrics 47, 1 (2000): 3–9.
7. Robert E. Bartholomew, "Science for Sale: The Rise of Predatory Journals," Journal of the Royal Society of Medicine 107, 10 (2014): 384–85.
8. Paul Benjamin Lowry, Gregory D. Moody, James Gaskin, Dennis F. Galletta, Sean Humphreys, Jordan B. Barlow, and David Wilson, "Evaluating Journal Quality and the Association for Information Systems (AIS) Senior Scholars' Journal Basket via Bibliometric Measures: Do Expert Journal Assessments Add Value?" MIS Quarterly 37, 4 (2013): 993–1012.
9. Andrew P. Kurmis, "Understanding the Limitations of the Journal Impact Factor," Journal of Bone & Joint Surgery 85, 12 (2003): 2449–54.
10. Bollen, Rodriquez, and Van de Sompel, "Journal Status."
11. Pablo Dorta-González and María-Isabel Dorta-González, "Comparing Journals from Different Fields of Science and Social Science through a JCR [Journal Citation Reports] Subject Categories Normalized Impact Factor," Scientometrics 95, 2 (2013): 645–72.
12. Carl Bergstrom, "Eigenfactor: Measuring the Value and Prestige of Scholarly Journals," College & Research Library News 68, 5 (2007): 314–16.
13. Ibid.
14. Jevin D. West, Theodore C. Bergstrom, and Carl T. Bergstrom, "The Eigenfactor™ Metrics: A Network Approach to Assessing Scholarly Journals," College & Research Libraries 71, 3 (2010): 236–44; Massimo Franceschet, "Ten Good Reasons to Use the Eigenfactor™ Metrics," Information Processing & Management 46, 5 (2010): 555–58; David R. Hodge and Jeffrey R. Lacasse, "Ranking Disciplinary Journals with the Google Scholar H-Index: A New Tool for Constructing Cases for Tenure, Promotion, and Other Professional Decisions," Journal of Social Work Education 47, 3 (2011): 579–96.
15. Hodge and Lacasse, "Ranking Disciplinary Journals with the Google Scholar H-Index."
16. Ibid.
17. Anthony F. J. Van Raan, "Comparison of the Hirsch-Index with Standard Bibliometric Indicators and with Peer Judgment for 147 Chemistry Research Groups," Scientometrics 67, 3 (2006): 491–502; Lutz Bornmann and Hans-Dieter Daniel, "The State of h Index Research," EMBO [European Molecular Biology Organization] Reports 10, 1 (2009): 2–6.
18. Borja González-Pereira, Vicente P. Guerrero-Bote, and Félix Moya-Anegón, "A New Approach to the Metric of Journals' Scientific Prestige: The SJR [SCImago Journal Rank] Indicator," Journal of Informetrics 4, 3 (2010): 379–91.
19. Corey J. A. Bradshaw and Barry W. Brook, "How to Rank Journals," PLOS [Public Library of Science] One 11, 3 (2016).
20. González-Pereira, Guerrero-Bote, and Moya-Anegón, "A New Approach to the Metric of Journals' Scientific Prestige."
21. Gad Saad, "Exploring the H-Index at the Author and Journal Levels Using Bibliometric Data of Productive Consumer Scholars and Business-Related Journals Respectively," Scientometrics 69, 1 (2006): 117–20.
22. Olden, "How Do Ecological Journals Stack-Up?"
23. Pascal Bador and Thierry Lafouge, "Comparative Analysis between Impact Factor and H-Index for Pharmacology and Psychiatry Journals," Scientometrics 84, 1 (2009): 65–79.
24. González-Pereira, Guerrero-Bote, and Moya-Anegón, "A New Approach to the Metric of Journals' Scientific Prestige."
25. Hodge and Lacasse, "Ranking Disciplinary Journals with the Google Scholar H-Index."
26. Liu and Wan, "Comparing Journal Impact Factor and H-Type Indices in Virology Journals."
27. John Mingers, Frederico Macri, and Dan Petrovici, "Using the H-Index to Measure the Quality of Journals in the Field of Business and Management," Information Processing & Management 48, 2 (2012): 234–41.

28. Hamidreza Kianifar, Ramin Sadeghi, and Leili Zarifmahmoudi, "Comparison between Impact Factor, Eigenfactor Metrics, and SCImago Journal Rank Indicator of Pediatric Neurology Journals," Acta Informatica Medica 22, 2 (2014): 103.
29. Hodge and Lacasse, "Ranking Disciplinary Journals with the Google Scholar H-Index."
30. Liu and Wan, "Comparing Journal Impact Factor and H-Type Indices in Virology Journals"; Mingers, Macri, and Petrovici, "Using the H-Index to Measure the Quality of Journals in the Field of Business and Management"; Kianifar, Sadeghi, and Zarifmahmoudi, "Comparison between Impact Factor, Eigenfactor Metrics, and SCImago Journal Rank Indicator of Pediatric Neurology Journals"; Mario Cantín, M. Muñoz, and Ignacio Roa, "Comparison between Impact Factor, Eigenfactor Score, and SCImago Journal Rank Indicator in Anatomy and Morphology Journals," International Journal of Morphology 33, 3 (2015): 1183–88.
31. Philip M. Davis, "Eigenfactor: Does the Principle of Repeated Improvement Result in Better Estimates than Raw Citation Counts?" Journal of the American Society for Information Science and Technology 59, 13 (2008): 2186–88.
32. Jevin West, Theodore Bergstrom, and Carl T. Bergstrom, "Big Macs and Eigenfactor Scores: Don't Let Correlation Coefficients Fool You," Journal of the American Society for Information Science and Technology 61, 9 (2010): 1800–1807.
33. Johan Bollen, Herbert Van de Sompel, Aric Hagberg, and Ryan Chute, "A Principal Component Analysis of 39 Scientific Impact Measures," PLOS [Public Library of Science] One 4, 6 (2009): e6022.
34. James D. Evans, Straightforward Statistics for the Behavioral Sciences (Pacific Grove, CA: Brooks/Cole, 1996).
35. David S. Moore and George P. McCabe, Introduction to the Practice of Statistics (New York: Freeman, 1989).
36. Sadeghi Ramin and Alireza Sarraf Shirazi, "Comparison between Impact Factor, SCImago Journal Rank Indicator and Eigenfactor Score of Nuclear Medicine Journals," Nuclear Medicine Review 15, 2 (2012): 132–36.
37. Henry F. Kaiser, "An Index of Factorial Simplicity," Psychometrika 39, 1 (1974): 31–36; Maurice S. Bartlett, "Tests of Significance in Factor Analysis," British Journal of Mathematical and Statistical Psychology 3, 2 (1950): 77–85.
38. Kaiser, "An Index of Factorial Simplicity."
39. Henry F. Kaiser, "The Varimax Criterion for Analytic Rotation in Factor Analysis," Psychometrika 23, 3 (1958): 187–200.
40. Bollen, Rodriquez, and Van de Sompel, "Journal Status."
