

EUGENE GARFIELD, INSTITUTE FOR SCIENTIFIC INFORMATION®, 3501 MARKET ST., PHILADELPHIA, PA 19104

Evaluating Research: Do Bibliometric Indicators Provide the Best Measures?

Number 14    April 3, 1989

In recent months we have reprinted several studies, originally published elsewhere, involving the use of biblio-scientometric measures to evaluate research. In one such study, Michael R. Halperin and Alok K. Chakrabarti, Drexel University, Philadelphia, examined the relationship between the volume of scientific and technical papers produced by industrial scientists and the characteristics of the corporations employing them.1 Another study, by economist Arthur M. Diamond, Jr., University of Nebraska, Omaha, evaluated the role that citations play in determining salaries.2 And we recently reprinted an article from the Times Higher Education Supplement (London) in which I discussed the strengths and weaknesses of citation analysis as a measure of research performance in individual university departments.3

From those specific studies, we move to a more general discussion by Jean King, then at the Agricultural and Food Research Council, London, UK, now affiliated with the Cancer Research Campaign, London. Her review originally appeared in the Journal of Information Science, sponsored by the Institute of Information Scientists. The article discusses a variety of bibliometric measures and the implications of their use in research evaluation.4 King kindly abridged her paper for publication here.

Beginning with an examination of the peer review system, King goes on to assess various bibliometric indicators, including publication counts, citation and co-citation analysis, journal impact factors, and a newer method known as co-word analysis. (For reasons of space, we have omitted King's discussion of other indicators, such as patent analysis, measures of esteem [which include attraction of outside funding, membership in professional societies, winning of international prizes, and so on], and input-output studies.) She concludes by recommending continued development of new methods, including online techniques for publication and citation retrieval.

Discussing citation and co-citation analysis, King cites various critical assessments of these techniques. Since we have addressed many such criticisms on previous occasions,5-7 I will not do so here. I will simply present King's paper, which is a thorough and valuable review. As budgets and funding for research come under increasing scrutiny at all levels, it is essential that evaluative methods be as accurate, as balanced, and as appropriately applied as possible.

*****

My thanks to Chris King for his help in the preparation of this essay.

REFERENCES

1. Garfield E. Measuring R&D productivity through scientometrics. Current Contents (30):3-9, 25 July 1988.
2. ——. Can researchers bank on citation analysis? Current Contents (44):3-12, 31 October 1988.
3. ——. The impact of citation counts: a UK perspective. Current Contents (37):3-5, 12 September 1988.
4. King J. A review of bibliometric and other science indicators and their role in research evaluation. J. Inform. Sci. 13:261-76, 1987.
5. Garfield E. Can criticism and documentation of research papers be automated? Essays of an information scientist. Philadelphia: ISI Press, 1977. Vol. 1. p. 83-90.
6. ——. How to use citation analysis for faculty evaluations, and when is it relevant? Parts 1 & 2. Ibid., 1984. Vol. 6. p. 354-72.
7. ——. Uses and misuses of citation frequency. Ibid., 1986. Vol. 8. p. 403-9.

A review of bibliometric and other science indicators and their role in research evaluation

Jean King

Recent reductions in research budgets have led to the need for greater selectivity in resource allocation. Measures of past performance are still among the most promising means of deciding between competing interests. Bibliometry, the measurement of scientific publications and of their impact on the scientific community, assessed by the citations they attract, provides a portfolio of indicators that can be combined to give a useful picture of recent research activity. In this state-of-the-art review the various methodologies that have been developed are outlined in terms of their strengths, weaknesses and particular applications. The present limitations of science indicators in research evaluation are considered and some future directions for development in techniques are suggested.

1. Introduction

In recent years policy makers and research managers have become increasingly interested in the use of indicators of scientific output. The background to this development is considered, and in Sections 2 and 3, the need for accountability and the current pressures on the peer review process are outlined. The types of bibliometric indicators that may have value in assessment studies are reviewed in Section 4, ranging from well-documented and widely applied approaches such as publication counting, citation frequency analysis and co-citation mapping, to less-well-known methods such as journal-weighted citation analysis and co-word mapping. Directions for future research are suggested that may lead to more widely applicable and more routinely available indicators of scientific performance.

2. Accountability in science

Prior to the 1960's, funding for scientific research in the UK was relatively unrestricted. Resources were allocated largely on the basis of internal scientific criteria (criteria generated from within the scientific field concerned), which were evaluated by the peer review process. Various factors have resulted in a need for greater selectivity in the allocation of funds: an increase in the capital intensity of research (e.g. the 'sophistication' factor; the growth of 'Big Science'); expanding objectives and opportunities, with many new fields emerging; an increase in collaborative, often multidisciplinary, research projects which require coordination; a coalescence of basic and applied research, with much research now of a strategic nature (i.e. long-term basic research undertaken with fairly specific applications in view); finally, economic constraints require choices to be made between different disciplines, fields and research proposals.1 The process has thus come under increasing pressure.

3. The peer review system

The peer review system has been criticized on several counts:
(1) The partiality of peers is an increasing problem as the concentration of research facilities in fewer, larger centres makes peers with no vested interest in the review increasingly difficult to find.
(2) The 'old boy' network often results in older, entrenched fields receiving greater recognition than new, emerging areas of research, while declining fields may be protected out of a sense of loyalty to colleagues. Thus the peer review process may often be ineffective as a mechanism for restructuring scientific activity.
(3) The 'halo' effect may result in a greater likelihood of funding for more 'visible' scientists and for higher status departments or institutes.
(4) Reviewers often have quite different ideas about what aspects of the research they are assessing, what criteria they should use and how these should be interpreted. The review itself may vary from a brief assessment by post to site visits by panels of reviewers.
(5) The peer review process assumes that a high level of agreement exists among scientists about what constitutes good quality work, who is doing it and where promising lines of enquiry lie, an assumption that may not hold in newer specialities.

Abridged version of an article that originally appeared in the Journal of Information Science 13(5), 1987, pp. 261-76. Reprinted with permission of Elsevier Publishers B.V.

(6) Resource inputs to the review process, both in terms of administrative costs and of scientists' time, are considerable but usually ignored.

Various studies have addressed these criticisms. For example, an analysis of National Science Foundation (NSF) review outcomes found little evidence of bias in favour of eminent researchers or institutions.2 However, a later study, in which independently selected panels re-evaluated funding decisions by NSF reviewers, found a high level of disagreement among peers over the same proposals. It concluded that the chances of receiving a grant were determined about 50% by the characteristics of the proposal and principal investigator and 50% by random elements, i.e. the 'luck of the reviewer draw'.3 The interpretation of this data has been questioned, however, and the fact that most disagreement was found in the mid-region between acceptance and rejection emphasized.4 In another study, a sample of NSF reviewers and applicants was asked which of two equally good proposals the individuals thought was more likely to be funded (a) from a well-known versus a little-known institution and (b) containing mainstream versus radical ideas. The prospects of the proposal from the better known institute and with mainstream views were favoured by the majority of respondents.5 The peer review process remains an essential element in assessing the quality of science, but improvements to the system are warranted. These might include:
(1) The right of reply by researchers to criticisms of their proposals, a system already operating for grant applicants to the NSF (USA) and the Nederlandse Organisatie voor Zuiver-Wetenschappelijk Onderzoek (Netherlands Organization for Pure Scientific Research [ZWO]).
(2) The use of external peers, from neighbouring fields and other countries.6
(3) Clear guidelines on the criteria to be employed.
(4) The use of objective scientific indicators to complement the peer review process.

4. Bibliometric indicators

The most widely applied indicators for research evaluation to date have been those based on bibliometric analysis, including publication counts and citation analysis.

4.1. Publication counts

The simplest bibliometric indicator is the number of published articles that a researcher or group has produced. For basic research the journal article with its accompanying list of citations has always been the accepted medium by which the scientific community reports the results of its investigations. However, while a publication count gives a measure of the total volume of research output, it provides no indication as to the quality of the work performed.

Other objections include:
(1) Informal and formal, non-journal methods of communication in science are ignored.7
(2) Publication practices vary across fields and between journals. Also, the social and political pressures on a group to publish vary according to country, to the publication practices of the employing institution and to the emphasis placed on the number of publications for obtaining tenure, promotion, grants etc.
(3) It is often very difficult to retrieve all the papers for a particular field, and to define the boundaries of that field in order to make a comparative judgement, i.e. the choice of a suitable database is problematical.8
(4) Over the past few decades the number of papers with multiple authorship has increased considerably.9,10 Although this is largely due to a greater prevalence of collaborative research, the gratuitous conferring of co-authorship is not uncommon. Another recent trend has been the shrinking length of papers, resulting in the emergence of the 'Least Publishable Unit'. This may be associated with various factors including fragmentation of data.11 Thus, an awareness of the use of publication counts for assessment may encourage undesirable practices.12

Despite these constraints, studies have shown a reasonable degree of correlation between publication counts and other measures of scientific merit, such as funding and peer ranking, and some of these have been reviewed by Jones.13 More recently, a correlation of 0.95 was found between the amount of National Institutes of Health (NIH) funds received and the number of biomedical publications produced 2 years later by 120 US medical schools.14 Similarly, a peer ranking of US faculties and graduate programmes (the Roose-Andersen ranking) was found to correlate most highly with the research 'size' of the institution, as measured by the number of papers it produced.15

4.2. Citation analysis

This involves counting the number of citations, derived from the Science Citation Index® (SCI®), to a particular paper for a period of years after its publication. The exact period will vary from field to field, since the time lag between publication and maximum number of citations received in a year differs between specialities. Citation analysis presents a number of serious technical and other problems beyond those already discussed for publication counts.
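
As a minimal illustration of the counting step just described (a sketch of my own, not part of King's review), the following Python fragment tallies the citations each paper receives within a fixed window of years after publication. The paper identifiers, citation records and five-year window are invented for illustration; in practice the window would be chosen field by field.

```python
from collections import Counter

# Hypothetical inputs: publication year of each paper, and (citing_year, cited_paper)
# records as they might be extracted from a citation index.
papers = {"P1": 1980, "P2": 1982}
citations = [(1981, "P1"), (1983, "P1"), (1984, "P1"), (1983, "P2"), (1990, "P2")]

def citation_counts(papers, citations, window_years=5):
    """Count the citations each paper receives within `window_years` of publication.

    The window is the analyst's choice; as the review notes, the time lag to the
    citation peak differs between specialities, so it should be set per field."""
    counts = Counter()
    for citing_year, cited in citations:
        pub_year = papers.get(cited)
        if pub_year is not None and 0 <= citing_year - pub_year <= window_years:
            counts[cited] += 1
    return counts

print(citation_counts(papers, citations))  # Counter({'P1': 3, 'P2': 1})
```
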

Reasons for citing

(a) Citation analysis makes the assumption that an intellectual link exists between the citing source and referenced article. Cronin16 reviews 10 different classifications of reasons for citing, including 'hat-tipping', over-elaborate reporting and citing the most popular research trends to curry favour with editors, grant-awarding bodies etc. Murugesan and Moravcsik17 have produced a four-fold classification which is fairly comprehensive:
(1) Conceptual/Operational (theory)/(method)
(2) Organic/Perfunctory (essential)/(non-essential)
(3) Evolutionary/Juxtapositional (development of idea)/(contrasting idea)
(4) Confirmative/Negative (supports findings)/(opposes findings)
They found that 41% of citations to 30 articles in the Physical Review were in the 'perfunctory' category.
(b) Work that is incorrect may be highly cited. However, it has been argued that even incorrect work, if cited, has made an impact or has stimulated further research efforts, while work that is simply of bad quality is usually ignored by the scientific community.18
(c) Methodological papers are among the most highly cited, which, it is argued, reflects their considerable 'usefulness' to science. Conversely, many techniques and theories become assimilated into the common body of scientific knowledge and their originators cease to be acknowledged (the 'obliteration phenomenon').
(d) Self-citation may artificially inflate citation rates; in one study of psychology papers, 10% of authors self-cited for more than 30% of their citations.19

Database limitations

While all publications databases are subject to various weaknesses, such as typographical errors, inconsistent or incomplete coverage of the literature etc., such problems are particularly serious in the case of the SCI since it is the primary source of all citation data. The following factors have been stressed:
(a) A considerable number of citations may be lost in an automated search of the SCI, due to such problems as homographs (authors with the same name), inconsistency in the use of initials and in the spelling of non-English names, and citing errors.
(b) There have been substantial changes in the journal set covered by the SCI, with some journals dropped and a larger number of new ones added, so that no consistent set exists. Also, since 1977 non-journal material, e.g. conference proceedings etc., has been included in the database.20
(c) The SCI journal set shows a considerable bias in favour of US and other English-language journals and against journals from the USSR and other countries with non-Roman alphabets.21

Field-dependent factors

(a) Citation (and publication) practices vary between fields and over time.20 For example, biochemistry papers now generally contain about 30 references whereas mathematics papers usually have less than ten.18 A study of one Dutch university's publications found that 40% of mathematics articles, while written in English, were not published in journals but in research reports, and therefore were not covered by the SCI. Coverage of physics and astronomy publications, however, was over 80%.22
(b) The probability of being cited is also field-dependent; a relatively small, isolated field will attract far fewer citations than either a more general field or research within a narrower field that has a wider focus of interest.
(c) The decay rate of citation frequency will vary with each field, and with the relative impact of the paper, i.e. a little cited paper will become obsolete more rapidly than a more highly cited one.23

Some of these problems, and ways of minimizing their effects, have been summarized by Martin and Irvine.24 For example, the citation rate of a paper may be considered a partial measure of its 'impact' (rather than of its 'quality' or 'importance'), where impact is defined as "its actual influence on surrounding research activity at a given time". In this context, citation analysis may be used as one of a number of imperfect indicators (including publication counts, peer review and highly cited papers) in Irvine and Martin's 'method of converging partial indicators'. This method is based on the application of a range of performance indicators to matched research groups using similar research facilities and publishing in the same body of international journals subject to comparable refereeing procedures. When the indicators all point in the same direction, the results of the evaluation are regarded as being more reliable than those based on a single indicator like peer review.25

Table 1. Studies using bibliometric indicators to assess specialities in basic science

  Speciality                                                     Reference
  Radio astronomy                                                24
  Optical astronomy                                              43
  High energy physics                                            25
  Insecticides (industrial development)                          44
  Weak interactions in physics (experimental vs. theoretical)    45
  Ocean currents (theoretical vs. observational)                 46
  Protein crystallography                                        46
  Integrated optics                                              47
  Genetics (genetic instability; drosophila genetics)            8
  Solid state physics, theoretical vs. experimental
    (spin glass; extended x-ray absorption fine structure;
    quantum fluids)                                              8

Table 2. Comparison of direct citation counting and journal influence as research productivity measures

  Direct citation counting
    Advantages: available quickly, as soon as bibliographies are completed.
    Disadvantages: must wait 3-5 years after publication; labour intensive and computationally complex; must always be carefully normalized.

  Journal influence measure
    Advantages: not as labour intensive or computationally complex; relatively easily normalized (various techniques).
    Disadvantages: only valid for relatively large sets of papers; low identification of specific highly cited individuals and papers; journal influence may change over time.

  Source: Rothman.31
The limitations of citation counts, their use as one of a number of indicators, and the continuing importance of peer review have also been stressed.26

High citation counts have been shown to correlate closely with recognized quality indicators such as honorific awards, including the Nobel Prize.27 Similarly, significant positive correlations between the aggregate peer ratings of departments or institutes and the citation frequencies of their members have been reported.15

Citation frequency analysis has been increasingly used in the evaluation of science. National performance in eight major fields was assessed by Irvine et al.28 using aggregate, computer-generated data, while manual citation analysis on several specialities was used by the Royal Society to assess UK performance in genetics and solid state physics.8 Other studies are summarized in Table 1.

In the Netherlands22 and in Hungary,29 research groups within departments or faculties have been compared using publication and citation analysis. Where such small groups are concerned, extreme care must be taken to recover all relevant data, since a missing highly cited paper would greatly distort the results. It has been suggested that rates of 99% and 95% coverage for publications and citations respectively are required. A further difficulty at the disaggregated level is the selection of groups for comparison with the group being assessed. Another problem is that, to date, most smaller-scale studies have been made manually and although this method, which involves tedious, repetitive tasks, cannot be claimed to be error-free, it is undoubtedly the most accurate one available at present. If bibliometry in general, and citation analysis in particular, are to become more useful tools in science evaluation and policy, then means must be developed of generating databases and of retrieving citations on a semi-automated basis with minimal omissions and errors. In a pioneering study of online retrieval of publications related to chemical oceanography, a French group emphasizes the need to establish a set of standards for writing and indexing papers, which would facilitate the identification of relevant publications using Boolean searches.30

4.3. Journal 'Impact' or 'Influence'

Manual citation analysis is labour intensive and suitable only for small-scale studies. For larger, aggregated data, weighted citation counts, based on the average number of citations each journal receives, provide an easier if less accurate estimate of citations gained. Garfield's 'impact factor' is the ratio of the number of citations a journal receives to the number of papers published over a period of time. The Journal Citation Reports® (JCR®) gives yearly impact factors for the journals covered by the SCI, calculated as citations received that year to the publications of the previous 2 years. However, this approach ignores the relatively high citation rates generated by review articles, the greater weight that arguably should be accorded citations from higher prestige journals, and inter-field variations in citation practices. The advantages and disadvantages of using such weighted citation counts have been summarized by Rothman31 (see Table 2).
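
To make the arithmetic of the two-year impact factor concrete, here is a small sketch of my own (not taken from King's paper or from ISI's procedures); the journal figures are invented. The same kind of per-journal average underlies the 'Journal Citation Score' approach described below.

```python
def two_year_impact_factor(citations_to_items, items_published, year):
    """JCR-style impact factor for `year`: citations received in `year` to items
    the journal published in the two preceding years, divided by the number of
    items it published in those two years."""
    cited = sum(citations_to_items.get((year, y), 0) for y in (year - 1, year - 2))
    published = sum(items_published.get(y, 0) for y in (year - 1, year - 2))
    return cited / published if published else 0.0

# Invented figures: 210 citations in 1988 to the journal's 1986-87 papers,
# 150 papers published in 1986-87, giving an impact factor of 1.4.
citations_to_items = {(1988, 1987): 120, (1988, 1986): 90}
items_published = {1986: 70, 1987: 80}
print(two_year_impact_factor(citations_to_items, items_published, 1988))  # 1.4
```
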

Various studies have used journal influence: for example, in the USA a comparison of peer rating and the citation influence rating of journals in 10 different fields found a strong positive relationship.32 On a smaller scale, the 'packet' of journals in which departments published was constructed for university research groups in the Netherlands. Using a 'Journal Citation Score' (JCS) analogous to SCI's 'impact factor', the weighted average JCS value was calculated for each group. The value obtained represented the 'expected' number of citations per article for that journal set, which could then be compared with the actual number of citations received by the research group.22 However, the use of the journal impact methodology for small-scale, disaggregated evaluations remains controversial, since only approximate measures are obtained.

4.4. Co-citation analysis

Bibliographic data have also been used to construct maps of the structure of science using co-citation (cluster) analysis. This is based on 2 assumptions: (1) that when 2 papers are cited together by a third paper, then a cognitive relationship exists between them and (2) that the strength of this relationship is proportional to the frequency of the co-citation linkage (i.e. the number of papers that co-cite them). Clusters of related papers may be constructed for a specified threshold of co-citation and the relationships between clusters displayed spatially using multidimensional scaling. The clusters represent specialities or fields, while links between them reveal interdisciplinary relationships.

Thus, co-citation analysis may be used to map various features including: the structure of research fields or specialities; communications between fields or specialities and, using time series, the development of active research fronts or the historical development of a particular area of knowledge.33 In the ISI Atlas of Science®, multiple levels of clusters have been used to produce 'nested maps' which provide hierarchical or regionalized structures of large fields such as biochemistry or biotechnology.

Recent improvements in the methodology make allowances for inter-field variations. 'Fractional citation counting' overcomes the problem of the over-representation of high referencing areas such as biomedicine and the under-representation of low referencing ones such as mathematics. The citing paper now carries a total strength of 1 which is divided equally between all its references. Also, the new 'variable level clustering' sets a limit to cluster size. These techniques improve the disciplinary balance of the maps in the SCI file.34,35

Used in conjunction with conventional historical methods, co-citation analysis was found to be a valuable tool for identifying important foci of intellectual activity (research fronts) in the speciality of weak interactions in physics.36 However some authors criticised the methodology on several grounds, including the time lag between the actual inception of a new line of research and the formation of a cluster; the over-representation of theoretical as compared to experimental papers, due to their greater ease of production; and the loss of many relevant papers due to such factors as the circulation of pre-prints, publication in Russian journals (not covered by the SCI), and the 'obliteration phenomenon'. Similar objections have been raised for the speciality of spin-glass, where co-citation data were compared with a manually generated bibliography; additional problems included errors in citations, the inclusion within clusters of papers from related but distinct fields, and the subjectivity inherent in the setting of threshold levels, although these levels strongly affected the size and content of clusters.37

Thus while co-citation analysis has evident usefulness for social scientists who wish to study the structure of science, its role in science policy remains controversial. However it has been applied by the Raad van Advies voor het Wetenschapsbeleid (Science Policy Advisory Council [RAWB]) in the Netherlands to identify those fields in which Dutch scientists have a considerable international impact, and both Germany and Australia are taking an interest in the RAWB study.38
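
As a rough sketch of the counting that underlies such maps (my own illustration, assuming made-up reference lists and an arbitrary threshold, not ISI's implementation), the following tallies how often pairs of papers are cited together and keeps pairs that reach the threshold as candidate cluster links. Refinements such as fractional citation counting, variable level clustering and multidimensional scaling are omitted.

```python
from collections import Counter
from itertools import combinations

# Hypothetical reference lists: each citing paper mapped to the papers it cites.
reference_lists = {
    "A": ["P1", "P2", "P3"],
    "B": ["P1", "P2"],
    "C": ["P2", "P3"],
}

def co_citation_links(reference_lists, threshold=2):
    """Count how often each pair of papers is cited together and keep the pairs
    whose co-citation frequency reaches `threshold` as candidate cluster links."""
    pair_counts = Counter()
    for refs in reference_lists.values():
        for pair in combinations(sorted(set(refs)), 2):
            pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= threshold}

print(co_citation_links(reference_lists))  # {('P1', 'P2'): 2, ('P2', 'P3'): 2}
```
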
4.5. Co-word analysis

This methodology has been developed by the Centre de Sociologie de l'Innovation (Center for Sociology of Innovation [CSI]) in Paris. It involves analysing papers to identify keywords that describe their research content and linking papers by the degree of co-occurrence of these keywords to produce a 'map index' of a speciality. Many journals and abstracting services already provide such keywords. One of the main advantages of co-word analysis would seem to lie in its independence from the SCI database, which has a considerable US and English-language bias, and may therefore cover only partially the research output of many smaller, non-English-speaking countries.

Also the considerable time lag associated with citation-based analyses is avoided, and the method is not limited to scientific publications but may be applied to all forms of scientific literary output such as technical reports and patent applications, and may therefore be used to investigate applied as well as basic research.

In order to minimize the weaknesses inherent in manual indexing, the CSI has developed a computer-aided indexing technique for full-text databases called LEXINET, in which the controlled vocabulary is constructed and updated interactively each time a new document is analysed.39 Together with the LEXIMAPPE programme for co-word mapping, this technique is being developed as a science policy tool and has already been used to analyse publications from research on dietary fibre and aquaculture; various documentary sources, including project proposals, for macromolecular chemistry research; and patents in the field of biotechnology. The authors emphasize that the method is in a developmental stage and raise various technical issues to be resolved.40,41

An Anglo-French study in progress on acid rain research is using co-word analysis to identify:
(a) research 'themes' based on small clusters of frequently-occurring keywords in papers (maximum 10 keywords);
(b) the strength of the internal links within themes (when strongly linked, these are said to represent highly cohesive areas of research which may in fact be peripheral to the speciality under study), and
(c) the number of external links between themes, which may indicate centrality to the speciality (where there are many links the theme is deemed to be central to the speciality and vice-versa).

Thus a strategic map may be drawn up in which the centrality and internal cohesiveness of themes are related in order to identify either potentially neglected or peripheral themes of research. This methodology is termed 'research network management'.42 There may therefore be a policy role in the future for co-word analysis at the level of determining priority areas of research.
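
As a rough sketch of the keyword co-occurrence counting and the internal/external link distinction just described (an illustration under assumed data, not the CSI's LEXIMAPPE implementation), the following groups keyword pairs by theme and separates links inside a theme (a sign of cohesion) from links that cross themes (a sign of centrality).

```python
from collections import Counter
from itertools import combinations

# Hypothetical data: keywords indexed on each paper, plus an assumed assignment of
# keywords to themes (in practice the themes emerge from the clustering itself).
paper_keywords = {
    "D1": ["acid rain", "soil", "forest decline"],
    "D2": ["acid rain", "soil"],
    "D3": ["forest decline", "policy"],
}
theme_of = {"acid rain": "deposition", "soil": "deposition",
            "forest decline": "effects", "policy": "effects"}

def co_word_links(paper_keywords, theme_of):
    """Count keyword co-occurrences, splitting them into internal links (both
    keywords in the same theme, indicating cohesion) and external links (keywords
    in different themes, indicating centrality to the wider speciality)."""
    internal, external = Counter(), Counter()
    for words in paper_keywords.values():
        for a, b in combinations(sorted(set(words)), 2):
            bucket = internal if theme_of[a] == theme_of[b] else external
            bucket[(a, b)] += 1
    return internal, external

internal, external = co_word_links(paper_keywords, theme_of)
print(internal)  # ('acid rain', 'soil') co-occurs twice within the 'deposition' theme
print(external)  # cross-theme links, e.g. ('acid rain', 'forest decline')
```
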
Conclusion

My purpose has been to outline the different types of science indicator and the various methodologies that have been used in the assessment of scientific performance. Most bibliometric studies have operated at the level of the scientific speciality or field, and the reliability of results obtained by applying bibliometric techniques to small numbers of publications has raised serious doubts, due to the conceptual and technical problems outlined in section 4.2. However, many managers are today confronted with the need for objective indicators, to complement the peer review process, for small or medium-sized, often multi-disciplinary, research groups. There is therefore a need to develop reliable, preferably field-independent, indicators for use at a disaggregated level; whether journal 'influence' or bibliographic coupling can provide such indicators has yet to be assessed. In addition, there is a need for these indicators to be generated on a routine basis; a major constraint to such a system is the labour-intensive nature of the currently accepted manual methodology. The development of online techniques, with minimal and acceptable rates of error, for both publication and citation retrieval, is urgently required; more consistent interdatabase formatting and indexing for publications would greatly facilitate this process. It would also be useful to assess whether sampling significantly alters the results obtained from complete sets of publications and citations (e.g. taking peak citation years only), since this would substantially reduce the volume of data to be handled. In terms of research inputs, there is a need for consistent and comparable data at a relatively disaggregated level, to facilitate meaningful input-output assessments. Finally, there is an urgent need for debate among research managers and policy makers about the role that science indicators should play, both in terms of the weight they should carry relative to, for example, peer review, and the level at which they should be incorporated into the decision-making process.

REFERENCES

1. Ziman J. Criteria for rational priorities in research. Rise and fall of a priority field. Proceedings of the ESRC/NSF Symposium, 22-24 September 1985, Paris, France. Strasbourg, France: European Science Foundation, 1985. p. 95-125.
2. Cole S, Rubin L & Cole J R. Peer review and the support of science. Sci. Amer. 237(4):34-41, 1977.
3. Cole S, Cole J R & Simon G A. Chance and consensus in peer review. Science 214:881-6, 1981.
4. Letters. Science 214:1292-4, 1981; 215:344-8, 1982.
5. Mitroff I I & Chubin D E. Peer review at the NSF: a dialectical policy analysis. Soc. Stud. Sci. 9:199-232, 1979.
6. Irvine J & Martin B R. What directions for basic research? (Gibbons M, Gummett P & Udgaonkar B M, eds.) Science and technology policy in the 1980's and beyond. London: Longman, 1984. p. 67-98.
7. Edge D. Quantitative measures of communication in science: a critical review. Hist. Sci. 17:102-34, 1979.
8. Advisory Board for the Research Councils. Evaluation of national performance in basic research. ABRC science policy studies no. 1. London: ABRC, 1986.
9. Price D J D. Little science, big science. New York: Columbia University Press, 1963. 118 p.
10. Lindsey D. Production and citation measures in the sociology of science: the problem of multiple authorship. Soc. Stud. Sci. 10:145-62, 1980.
11. Broad W. The publishing game: getting more for less. Science 211:1137-9, 1981.
12. Greenberg D. Fraud and the scientific method. New Sci. 112(1533):64, 1986.
13. Jones L V. The assessment of scholarship. New Direct. Program Eval. 6:1-20, 1980.
14. McAllister P R & Narin F. Characterization of the research papers of US medical schools. J. Amer. Soc. Inform. Sci. 34:123-31, 1983.
15. Anderson R C, Narin F & McAllister P. Publication ratings versus peer ratings of universities. J. Amer. Soc. Inform. Sci. 29:91-103, 1978.
16. Cronin B. The citation process. London: Taylor Graham, 1984. 103 p.
17. Murugesan P & Moravcsik M J. Variation of the nature of citation measures with journals and scientific specialties. J. Amer. Soc. Inform. Sci. 29:141-7, 1978.
18. Garfield E. Is citation analysis a legitimate evaluation tool? Scientometrics 1:359-75, 1979.
19. Porter A L. Citation analysis: queries and caveats. Soc. Stud. Sci. 7:257-67, 1977.
20. Moed H F, Burger W J M, Frankfort J G & Van Raan A F J. The application of bibliometric indicators: important field- and time-dependent factors to be considered. Scientometrics 8:177-203, 1985.
21. Carpenter M P & Narin F. The adequacy of the Science Citation Index as an indicator of international scientific activity. J. Amer. Soc. Inform. Sci. 32:430-9, 1981.
22. Moed H F, Burger W J M, Frankfort J G & Van Raan A F J. The use of bibliometric data for the measurement of university research performance. Res. Policy 14:131-49, 1985.
23. Folly G, Hajtman B, Nagy J & Ruff I. Methodological problems in ranking scientists by citation analysis. Scientometrics 3:135-47, 1981.
24. Martin B R & Irvine J. Assessing basic research: some partial indicators of scientific progress in radio astronomy. Res. Policy 12:61-90, 1983.
25. Irvine J & Martin B R. Evaluating big science: CERN's past performance and future prospects. Scientometrics 7:281-308, 1985.
26. Garfield E. Uses and misuses of citation frequency. Essays of an information scientist: ghostwriting and other essays. Philadelphia: ISI Press, 1986. Vol. 8. p. 403-9.
27. Aaronson S. The footnotes of science. Mosaic 6(2):22-7, 1975.
28. Irvine J, Martin B R, Peacock T & Turner R. Charting the decline in British science. Nature 316:587-90, 1985.
29. Vinkler P. Management system for a scientific research institute based on the assessment of scientific publications. Res. Policy 15:77-87, 1986.
30. Hassanaly P & Dou H. Information systems and scientometric study in chemical oceanography. NATO ASI Series 29:9-31, 1986.
31. Rothman H. ABRC science policy study: further studies on the evaluation and measurement of scientific research. (Report for the Economic and Social Research Council.) Bristol, UK: Bristol Polytechnic, 1985.
32. McAllister P R, Anderson R C & Narin F. Comparison of peer and citation assessment of the influence of scientific journals. J. Amer. Soc. Inform. Sci. 31:147-52, 1980.
33. Garfield E. Mapping the structure of science. Citation indexing: its theory and application in science, technology and the humanities. New York: Wiley, 1979. p. 96-147.
34. Small H & Sweeney E. Clustering the Science Citation Index using co-citations. I. A comparison of methods. Scientometrics 7:391-409, 1985.
35. Small H, Sweeney E & Greenlee E. Clustering the Science Citation Index using co-citations. II. Mapping science. Scientometrics 8:321-40, 1985.
36. Sullivan D, White D H & Barboni E J. Co-citation analyses of science: an evaluation. Soc. Stud. Sci. 7:223-40, 1977.
37. Hicks D. Limitations of co-citation analysis as a tool for science policy. Soc. Stud. Sci. 17:295-316, 1987.
38. Hooghiemstra R. An atlas of science: sidestepping discipline limitations a great advantage of cluster analysis. Sci. Policy Neth. 7(2):124, 1985.
39. Courtial J P, Callon M, Bauin S & Turner W A. Co-word analysis. Paper presented at the CNRS/SPSG Seminar on Science Indicators and Policy, Science Policy Support Group, 17-18 September 1986, London.
40. Callon M, Courtial J-P, Turner W A & Bauin S. From translations to problematic networks: an introduction to co-word analysis. Soc. Sci. Inform. 22:191-235, 1983.
41. Callon M, Law J & Rip A, eds. Mapping the dynamics of science and technology. London: Macmillan, 1986.
42. Bauin S, Courtial J-P & Law J. Policy and the interpretation of co-word data: notes on the acid rain study. Paper presented at the CNRS/SPSG Seminar on Science Indicators and Policy, 17-18 September 1986, London.
43. Irvine J & Martin B R. Assessing basic research: the case of the Isaac Newton telescope. Soc. Stud. Sci. 13:49-86, 1983.
44. Rothman H & Lester G. The use of bibliometric indicators in the study of insecticide research. Scientometrics 8:247-62, 1985.
45. Sullivan D, White D H & Barboni E J. The state of a science: indicators in the speciality of weak interactions. Soc. Stud. Sci. 7:167-200, 1977.
46. Crouch D, Irvine J & Martin B R. Bibliometric analysis for science policy: an evaluation of the United Kingdom's research performance in ocean currents and protein crystallography. Scientometrics 9:239-67, 1986.
47. Hicks D, Martin B R & Irvine J. Bibliometric techniques for monitoring performance in strategic research: the case of integrated optics. R&D Manage. 16:211-24, 1986.
