The Impact Factor Fallacy


bioRxiv preprint doi: https://doi.org/10.1101/108027; this version posted February 20, 2017. The copyright holder for this preprint (which was not certified by peer review) is the author/funder. All rights reserved. No reuse allowed without permission.

Title: The impact factor fallacy

Authors: Frieder Michel Paulus 1; Nicole Cruz 2,3; Sören Krach 1

Affiliations:
1 Department of Psychiatry and Psychotherapy, Social Neuroscience Lab, University of Lübeck, Ratzeburger Allee 160, D-23538 Lübeck, Germany
2 Department of Psychological Sciences, Birkbeck, University of London, London, UK
3 Laboratoire CHArt, École Pratique des Hautes Études (EPHE), Paris, France

Corresponding Authors:
Frieder Paulus, phone: ++49-(0)451-31017527, email: [email protected]
Sören Krach, phone: ++49-(0)451-5001717, email: [email protected]
Department of Psychiatry and Psychotherapy, Social Neuroscience Lab, University of Lübeck, Ratzeburger Allee 160, D-23538 Lübeck, Germany

Author contributions: FMP, NC, and SK wrote the manuscript.

Running title: The impact factor fallacy
Word count abstract: 135
Word count manuscript (including footnotes): 4013
Number of tables: 1
Number of figures: 0

Abstract

The use of the journal impact factor (JIF) as a measure of the quality of individual manuscripts and the merits of scientists has faced significant criticism in recent years. We add to this criticism by arguing that such an application of the JIF in policy and decision making in academia rests on false beliefs and unwarranted inferences. 
To approach the problem, we use principles of deductive and inductive reasoning to illustrate the fallacies inherent in using journal-based metrics to evaluate the work of scientists. In doing so, we elaborate that if we judge scientific quality based on the JIF or other journal-based metrics, we are either guided by invalid or weak arguments, or we in fact consider our uncertainty about the quality of the work rather than the quality itself.

Introduction

The journal impact factor (JIF) was initially used to help librarians make decisions about journals (Garfield, 2006). During the last decades, however, the usage of the JIF has changed significantly. Deviating from its original purpose, it is now widely used to evaluate the quality of individual publications and the work of scientists (Amin & Mabe, 2003; Arnold & Fowler, 2010). Since then, the measure itself has been criticized for various reasons. For example, it is well known that the JIF is an inaccurate estimate of the expected number of citations of an article within a specific journal (Callaway, 2016; Larivière et al., 2016) and that it is relatively easy to manipulate (McVeigh & Mann, 2009; Tort, Targino, & Amaral, 2012). Nonetheless, the JIF has deeply affected the work of scientists and decision making in academia. 
Scientists get jobs, tenure, grants, and bonuses based on the impact of the journals in which they publish their manuscripts, an outgrowth that has been critically discussed in many previous reviews, comments, and editorials (Brembs, Button, & Munafò, 2013; Casadevall & Fang, 2014; Della Sala & Crawford, 2007; DePellegrin & Johnston, 2015; Lehmann, Jackson, & Lautrup, 2006; Reich, 2013; Seglen, 1997; Simons, 2008; Werner, 2015). Notably, the JIF has also been explicitly referred to as a tool to decide how to distribute funds across institutions, for example in Germany (DFG, 2004), and thereby affects policy making on a much larger scale.

"For the calculation of the performance-based bonus of the unit providing the service (department or clinic), the original publications may be used with the unweighted impact factor of the publication organ, in the sense of a step-wise introduction of quality criteria. Thereby, a first and last authorship may be considered with one third each, and the remaining third can be distributed across all remaining authors […]."1

Besides such explicit usage of the JIF for evaluating scientific excellence, the JIF also implicitly affects other measures which have been suggested to better approximate the quality of a scientist's work or of a specific study (e.g. the h-index, Hirsch, 2005, and the Relative Citation Ratio (RCR), Hutchins, Yuan, Anderson, & Santangelo, 2015). For example, there is some evidence that the number of citations of an article is influenced by the JIF of the journal in which the article was published, regardless of the quality of the article itself (Callaham, Wears, & Weber, 2002; Cantrill, 2016). 
This implies that measures based on the citations of individual articles are still influenced by the JIF of the publication organ. Given the many ways in which the JIF can influence decision making in academia, it is not surprising that empirical data now show the JIF to be one of the most powerful predictors of academic success (Van Dijk, Manor, & Carey, 2014). We recently showed that some scientists may have adapted to these reward principles in their environment, exhibiting a greater signal in the brain's reward structures at the prospect of a high-impact publication of their own (Paulus, Rademacher, Schäfer, Müller-Pinzler, & Krach, 2015).

In line with the rising initiatives to prevent the use of the JIF for evaluating the quality of science (see e.g. the DORA initiative, Alberts, 2013, Cagan, 2013, or the report of the German Science Council, 2015), we have considerable doubts that the arguments in support of using the JIF for measuring scientific excellence are justified. In this comment we look at the problem of using the JIF from a different perspective and carefully (re)evaluate the arguments for its use as an estimate of scientific quality. Thereby, we hope to better understand the beliefs about the JIF that influence decisions in academia, and the implications of policies that use the JIF to assess and remunerate scientific quality. Beyond the specific case of the JIF, this exercise might also help to specify more general misconceptions in using journal-based properties to evaluate science, in order to overcome incentive structures based on journal-based metrics altogether.

1 German original of the quotation above: "Für die Berechnung der LOM [leistungsorientierte Mittel; remark of authors] der jeweiligen leistungserbringenden Einheit (Abteilung bzw. Klinik) kann im Sinne einer stufenweisen Einführung von Qualitätskriterien die Bewertung erfolgter Original-Publikationen unter Verwendung des ungewichteten Impact Faktor der jeweiligen Publikationsorgane (JIF) erfolgen. Dabei können Erst- und Letztautorschaft mit je einem Drittel berücksichtigt werden; das verbleibende Drittel kann auf alle übrigen Autoren verteilt werden […]." (German Research Foundation, 2004, p. 15).

Deductive fallacy when using the JIF

A basic belief when using the JIF for evaluating the quality of a specific manuscript seems to be that (1) if a paper is published in a high impact factor journal (p), then the paper is of high quality (q).2 Why would scientists believe this? A straightforward reason is the idea that it is more difficult to publish in a high impact factor journal, because higher standards of research quality and novelty have to be met in order to be accepted. The average number of citations of a journal's articles within a specific time period signals the average breadth of interest in these articles during that period, which can of course be affected by many factors other than research quality. But as a first approximation, let us suppose that belief (1) is the case. What can we conclude from it?

2 When we speak of "high" and "low" impact in this paper, the arguments we make are independent of whether "high" and "low" refer to the absolute JIF of a journal, or to the JIF relative to a specific research domain.

If we see a paper published in a high impact factor journal, we could then draw the deductively valid inference of modus ponens (MP: if p then q; p; therefore q)3 and conclude that the paper is of high quality.

3 Here p and q stand for arbitrary propositions. For example, p might stand for "This paper is published in a high impact factor journal" and q for "This paper is of high quality".
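Written out as an inference schema, the modus ponens step just described takes the following form (a standard textbook rendering, with p and q read as in the footnote above):

```latex
% Modus ponens (MP):
%   p -> q   (if the paper is in a high impact factor journal, it is of high quality)
%   p        (the paper is in a high impact factor journal)
% therefore
%   q        (the paper is of high quality)
\[
  \frac{p \rightarrow q \qquad p}{q}\;\text{(MP)}
\]
% MP is deductively valid: there is no assignment of truth values to p and q
% under which both premises are true and the conclusion false.
```

Validity here is conditional on the premises: the conclusion q is only as secure as belief (1) itself, which is exactly where the argument below takes aim.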