COMPUTER GENERATED PAPERS AS A NEW CHALLENGE TO PEER REVIEW by Malea Dunne

Total Pages: 16

File Type: PDF, Size: 1020 KB

Montana Tech Library, Digital Commons @ Montana Tech
Graduate Theses & Non-Theses, Student Scholarship, Spring 2019
Follow this and additional works at: https://digitalcommons.mtech.edu/grad_rsch
Part of the Communication Commons

COMPUTER GENERATED PAPERS AS A NEW CHALLENGE TO PEER REVIEW

by Malea Dunne

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Technical Communication, Montana Tech, 2019

Abstract

Computer generated papers (CGP) pose a serious problem to academic integrity and publishing. The problem began with SCIgen, a software program created in 2005 by MIT students that generates papers with simulated content. In 2014, we learned that more than 120 CGP had passed through the peer review process, been published in well-known academic journals, and had to be retracted. I conducted research into the journal editing and peer review process to discover more about this problem and how it might be remedied. I interviewed five journal editors from across the world, coded the information, and performed a thematic analysis. My thesis concludes with recommendations to control the CGP problem, including: increased awareness on the part of journal editors, CGP detection software, improved due diligence on the part of reviewers, and addressing the publish-or-perish paradigm that drives desperate faculty to compromise academic integrity by submitting CGP to journals.

Keywords: Computer Generated Papers (CGP), Peer Review, Publish or Perish, Academic Integrity, Academic Dishonesty, Journal Article Publishing, Journal Editors, Data Sharing

Dedication

For my Mom and Dad. Thank you for pushing me and giving me all your love and support.

Acknowledgements

I would like to thank the many people who contributed to the work on my master's thesis for their constant advice and support:

Graduate Committee:
Dr. Pat Munday, Professor of Science & Technology Studies, Montana Tech, Graduate Committee Chair
Dr. Glen Southergill, Associate Professor of Technical Communication, Montana Tech, Graduate Committee member
Dr. Dawn Atkinson, Assistant Professor and Writing Director, Montana Tech, Graduate Committee member
Dr. Charie Faught, HCI Associate Professor, Montana Tech, Graduate Committee member

To the entire faculty of Professional and Technical Communication, and to my family and friends: I would not have successfully completed my Master's Thesis without you.

Table of Contents

Abstract
Dedication
Acknowledgements
Glossary of Terms
1. Introduction
2. Literature Review
  2.1. Historical Background of Computer Generated Papers
    2.1.1. Academic Integrity and Computer Generated Papers
    2.1.2. Ethical Concerns with Computer Generated Papers
    2.1.3. Plagiarism within Computer Generated Papers
    2.1.4. Academic Publishing
  2.2. Peer Review Process
    2.2.1. History of Peer Review
    2.2.2. Contemporary Problems with Peer Review
    2.2.3. Peer Review as a Method to Detect Computer Generated Papers
    2.2.4. Editor's Perspective on Computer Generated Papers
    2.2.5. Open-Access Journals and New Approaches to Peer Review
3. Methodology
  3.1. Qualitative Methodology
  3.2. Interview Methodology
  3.3. Participant Observation
  3.4. Thematic Analysis
4. Thematic Analysis
  4.1. Introduction
  4.2. Awareness
  4.3. Becoming Aware
  4.4. Issue
  4.5. Opportunities for Improvement
  4.6. Line of Defense
  4.7. Rejection Rate
  4.8. Retraction of Papers
  4.9. Pressure to Publish
  4.10. Publication Ethics
  4.11. COPE
  4.12. Scanning Tool
  4.13. Conclusion
5. Conclusion & Recommendations
6. References Cited
  6.1. Interview References
  6.2. Published Articles
7. Appendices
  7.1. Appendix A: IRB Approval Form
  7.2. Appendix B: Interview Information: Participants, Dates, Responses
  7.3. Appendix C: Interview Questions
  7.4. Appendix D: Interview Request
  7.5. Appendix E.1: Interview Transcription (Email) Joaquim Jorge
  7.6. Appendix E.2: Anonymous Interview Transcription (Email) E1
  7.7. Appendix E.3: Anonymous Interview Transcription (Email) E2
  7.8. Appendix E.4: Interview Transcription David Grier
  7.9. Appendix E.5: Interview Transcription Tamara Welschot
  7.10. Appendix F: Thematic Analysis Coding

Glossary of Terms

Computer Generated Papers: Papers produced by an online software program
SciDetect: Detection software that identifies generated papers
SCIgen: Automated paper generator
Plagiarism: Stealing another's words or work
Academic Dishonesty (AD): Any form of cheating occurring in academia
Open-access Journal: Research that is published and can be retrieved free of cost
Gibberish: Nonsense
Gobbledygook: Nonsense
Integrity: The quality of being honest
Hoax: A deception
SOFL: Structured Object-Oriented Formal Language
Journal Impact Factor: A measurement of the importance of a journal
Recommended publications
  • Click Here to View Or Download the Proceedings
    Computational Creativity 2007. Foreword: The International Joint Workshop on Computational Creativity began life as two independent workshop series: the Creative Systems Workshops and the AISB Symposia on AI and Creativity in the Arts and Sciences. The two series merged in 2004, when the 1st IJWCC was held in Madrid as a satellite workshop of the European Conference on Case Based Reasoning. Since then, two further satellite workshops have been held: at the International Joint Conference on Artificial Intelligence in Edinburgh in 2005, and at the European Conference on Artificial Intelligence in Riva del Garda in 2006. This workshop constitutes the series' first attempt at an independent existence, and the quality of the papers submitted suggests that the time is now ripe. The workshop received 27 submissions, all of which were subjected to rigorous peer review (at least 3 reviewers per paper); 17 full papers and 3 posters were accepted (one poster was subsequently withdrawn). We believe this volume represents a coming of age of the field of computational creativity. It contains evidence not only of improvements in the state of the art in creative systems, but also of deep thinking about methodology and philosophy. An exciting new development is the inclusion, for the first time, of a session on applied creative systems, demonstrating that the field is now ready and able to impinge on broader artificial intelligence and cognitive science research. As co-chairs, we would like to thank the programme committee and reviewers, our able local assistants, Ollie Bown and Marcus Pearce, and all those who submitted papers to make this a really exciting event.
  • What Is Peer Review?
    What is Peer Review? Scholarly peer review is the process of evaluation of scientific, academic, or professional work by others working in the same field. The process is designed to ensure that academic publications make substantial contributions to the state of knowledge about a subject and meet minimum standards of reliability.

    Fundamentals. Is it worth it to do the extra work to find peer reviewed sources? When researching, it is important to look at articles recently published in top-tier peer-reviewed or refereed journals to understand the current issues and developments of a topic.

    Keep in Mind: Many scholarly journals only peer review original research. The standards of peer review for editorials, literature review/survey articles, and book reviews may be less stringent (or non-existent) even if they are published in refereed journals.

    How can I tell if an article is peer reviewed? Follow these steps: Check the UlrichsWeb database for the journal (is it refereed, is the content type listed as academic/scholarly). If the journal is not in UlrichsWeb, try googling it. On the publisher's website, look for the About page or submission guidelines. These pages will usually explicitly say whether a journal is peer reviewed or refereed and often will describe the process.

    The Process. What is "blind" peer review? Usually the refereeing process involves what is called "blind" peer review. This is when the reviewers do not know who wrote the article they are reviewing. This encourages reviewers to evaluate the sources according to the norms of their community rather than the reputation of the author.
  • How Frequently Are Articles in Predatory Open Access Journals Cited
    Publications (Article). How Frequently Are Articles in Predatory Open Access Journals Cited. Bo-Christer Björk 1,*, Sari Kanto-Karvonen 2 and J. Tuomas Harviainen 2. 1 Hanken School of Economics, P.O. Box 479, FI-00101 Helsinki, Finland; 2 Department of Information Studies and Interactive Media, Tampere University, FI-33014 Tampere, Finland; Sari.Kanto@ilmarinen.fi (S.K.-K.); tuomas.harviainen@tuni.fi (J.T.H.); * Correspondence: bo-christer.bjork@hanken.fi. Received: 19 February 2020; Accepted: 24 March 2020; Published: 26 March 2020. Abstract: Predatory journals are Open Access journals of highly questionable scientific quality. Such journals pretend to use peer review for quality assurance, and spam academics with requests for submissions in order to collect author payments. In recent years predatory journals have received a lot of negative media attention. While much has been said about the harm that such journals cause to academic publishing in general, an overlooked aspect is how much the articles in such journals are actually read and, in particular, cited; that is, whether they have any significant impact on research in their fields. Other studies have already demonstrated that only some of the articles in predatory journals contain faulty and directly harmful results, while many of the articles present mediocre and poorly reported studies. We studied citation statistics over a five-year period in Google Scholar for 250 random articles published in such journals in 2014 and found an average of 2.6 citations per article, with 56% of the articles receiving no citations at all. For comparison, a random sample of articles published in the approximately 25,000 peer-reviewed journals included in the Scopus index had an average of 18.1 citations in the same period, with only 9% receiving no citations.
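    The comparison above rests on two simple summary statistics: the mean citation count and the share of uncited articles. Here is a minimal sketch of that computation in Python; the citation counts are invented toy data standing in for the study's 250 sampled articles.

```python
# Summary statistics of the kind reported above, computed over
# hypothetical per-article citation counts (assumed toy data, not
# the study's actual sample).
from statistics import mean

predatory_citations = [0, 0, 3, 1, 0, 7, 2, 0, 5, 0]

avg = mean(predatory_citations)
uncited_share = sum(c == 0 for c in predatory_citations) / len(predatory_citations)

print(f"average citations per article: {avg:.1f}")   # 1.8
print(f"share with no citations: {uncited_share:.0%}")  # 50%
```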
  • Citation Performance of Indonesian Scholarly Journals Indexed in Scopus from Scopus and Google Scholar
    pISSN 2288-8063, eISSN 2288-7474. Sci Ed 2018;5(1):53-58. https://doi.org/10.6087/kcse.119. Case Study. Citation performance of Indonesian scholarly journals indexed in Scopus from Scopus and Google Scholar. Lukman Lukman1, Yan Rianto1, Shidiq Al Hakim1, Irene M Nadhiroh2, Deden Sumirat Hidayat3. 1Research Center for Informatics, Indonesian Institute of Science, Cibinong; 2Research Center for Science and Technology Development Studies, Indonesian Institute of Science, Jakarta; 3Research Center for Biology, Indonesian Institute of Science, Cibinong, Indonesia. Abstract: Citation frequency is an important factor for estimating the quality of a scientific journal, and the number of citations that an academic paper receives is often used as a measure of its scientific impact. This study aimed to characterize the citation performance of scientific journals published by Indonesian publishers that have been indexed in Scopus, by analyzing the number of citations available in the Scopus database and Google Scholar. The results of the study identified 30 Indonesian journals that have been Scopus-indexed, of which 22 were listed in SCImago Journal Rank up to October 2017. Journals in the engineering field were the most cited, with 2,427 citations, including 930 self-citations. A large proportion of the citations were of recently-founded journals. The mean proportional difference in the citation frequency between Scopus and Google Scholar was 14.71%. Keywords: Citation analysis; Google Scholar; Scholarly journal performance; Scopus. Received: November 14, 2017; Accepted: January 9, 2018. Correspondence to Lukman Lukman [email protected]. ORCID: Lukman Lukman http://orcid.org/0000-0001-9633-6964; Yan Rianto. Introduction: Scopus is a multidisciplinary database, with 67 million records (as of August 2017) and more than 22,794 peer-reviewed journal titles in the life sciences, social sciences, health sciences, and physical sciences.
  • How to Review Papers
    How to Review Papers. For academic meta-jokes: https://www.facebook.com/groups/reviewer2/ . See also "How not to be Reviewer #2" (https://amlbrown.com/2015/11/10/how-not-to-be-reviewer-2/) by Ashley Brown, U. of Utah.

    Who is Reviewer #2? ✘ "Literally, Reviewer 2 is the anonymised moniker given to the second peer to review a research paper. In common parlance, Reviewer 2 can be summarised as possessing the following qualities: grumpy, aggressive; vague, unhelpful, inflexible; overbearingly committed to a pet discipline; overly focused on a particular methodology; unwilling to give the benefit of the doubt; unwilling to view the authors of a submitted paper as peers."

    What is peer review? ✘ Evaluation of work by one or more people of similar competence to the producers of the work (= peers) - Wikipedia. ✘ Scholarly peer review helps editors and program chairs decide whether to publish/accept specific work. ✘ Reviewers work for free (as do editors and chairs) - perhaps the weirdest part of any academic's job to laymen's eyes. ✘ Peer review appeared 300 years ago (Philosophical Transactions of the Royal Society, editor Henry Oldenburg, 1618-1677).

    Why should we do peer review? ✘ It ensures and even improves the quality, value, and integrity of scientific communication. ✘ Only experts in the same domain can provide sufficiently detailed yet impartial opinions. ✘ If you do not care about the quality of others' work, eventually everyone will stop caring about your work as well. ✘ Scientific ideals aside, certain events just
  • Location, Event&Q
    9/11 Truth Interactive Spreadsheet. Click on down arrow to sort / filter, click again to undo. Top 100 / compilations; right-click on link to open in new tab. [The remainder of this entry is a flattened interactive spreadsheet: rows of claims about the 9/11 attacks (controlled demolition, false flag, mini neutron bombs), each with a named source, location/event, physical or political detail, and implicated entities. The tabular layout does not survive text extraction.]
  • Tortured Phrases: A Dubious Writing Style Emerging in Science - Evidence of Critical Issues Affecting Established Journals
    Open Call for Investigation. Tortured phrases: A dubious writing style emerging in science. Evidence of critical issues affecting established journals. Guillaume Cabanac, Cyril Labbé, Alexander Magazinov. Version: July 12, 2021. Abstract: Probabilistic text generators have been used to produce fake scientific papers for more than a decade. Such nonsensical papers are easily detected by both human and machine. Now more complex AI-powered generation techniques produce texts indistinguishable from those of humans, and the generation of scientific texts from a few keywords has been documented. Our study introduces the concept of tortured phrases: unexpected weird phrases used in lieu of established ones, such as 'counterfeit consciousness' instead of 'artificial intelligence.' We combed the literature for tortured phrases and studied one reputable journal where these concentrated en masse. Hypothesising the use of advanced language models, we ran a detector on the abstracts of recent articles of this journal and on several control sets. The pairwise comparisons reveal a concentration of abstracts flagged as 'synthetic' in the journal. We also highlight irregularities in its operation, such as abrupt changes in editorial timelines. We substantiate our call for investigation by analysing several individual dubious articles, stressing questionable features: tortured writing style, citation of non-existent literature, and unacknowledged image reuse. Surprisingly, some websites offer to rewrite texts for free, generating gobbledegook full of tortured phrases. We believe some authors used rewritten texts to pad their manuscripts. We wish to raise awareness of publications containing such questionable AI-generated or rewritten texts that passed (poor) peer review. Deception with synthetic texts threatens the integrity of the scientific literature.
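    To make the screening idea concrete, here is a minimal sketch of a tortured-phrase detector: match a manuscript against a lookup table of known substitutions. Only the 'counterfeit consciousness' / 'artificial intelligence' pair is attested in the abstract above; the other table entries and the helper name are assumptions for the example, and the authors' actual detector is more sophisticated.

```python
import re

# Known tortured phrase -> established term. Only the first pair is
# attested in the abstract above; the others are assumed examples.
TORTURED = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "colossal information": "big data",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely intended term) pairs found in text."""
    lowered = text.lower()
    return [
        (phrase, standard)
        for phrase, standard in TORTURED.items()
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered)
    ]

abstract = "We apply counterfeit consciousness and profound learning to images."
for phrase, standard in flag_tortured_phrases(abstract):
    print(f"'{phrase}' looks like a tortured form of '{standard}'")
```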
  • A History and Development of Peer-Review Process
    Annals of Library and Information Studies, Vol. 66, December 2019, pp. 152-162. A history and development of peer-review process. Siladitya Jana, Assistant Librarian, Indian Institute of Science Education and Research Kolkata, Mohanpur 741 246, India. Email: [email protected]. Received: 24 July 2019; accepted 28 November 2019. The paper shows the importance of the peer review process in the scholarly communication system and discusses both the closed and the newly emerging open peer review models. It also examines the peer review system at scholarly academies or societies in their nomination systems for prizes, rewards, etc. It also discusses the various facets of the newly developed open peer review models now prevalent in various journals. The paper may help to understand and appreciate the role played by peer review in the scholarly communication system and the efforts being made to make it more transparent. Keywords: Double-blind peer review; Open access; Open peer review; Peer review; Scholarly communication; Single-blind peer review. Introduction: Peer review is the quality control mechanism in the entire ecology of the scholarly communication system. The International Committee of Medical Journal Editors (ICMJE) defines peer review as the critical assessment of manuscripts submitted to journals by experts who are usually not part of the editorial staff [1]. ... learned organisations to publish any material submitted by its members, etc. These factors resulted in the uneven growth of peer review as a means of controlling the quality of the publishable materials [3]. Recent reports of peer review fraud also raise concerns about its efficacy and future [4,5,6].
  • Gender Bias in Research Teams and the Underrepresentation of Women in Science
    bioRxiv preprint doi: https://doi.org/10.1101/741694; this version posted August 24, 2019. The copyright holder for this preprint (which was not certified by peer review) is the author/funder. All rights reserved. No reuse allowed without permission. Gender bias in research teams and the underrepresentation of women in science. Roslyn Dakin1* and T. Brandt Ryder2,3. 1 Department of Biology, Carleton University, Ottawa, ON, K1S 5B6, Canada; 2 Bird Conservancy of the Rockies, Fort Collins, Colorado, 80525, USA; 3 Migratory Bird Center, Smithsonian Conservation Biology Institute, National Zoological Park, Washington, DC 20013, USA. * Correspondence to: [email protected]. Abstract: Why are females still underrepresented in science? The social factors that affect career choices and trajectories are thought to be important but are poorly understood. We analyzed author gender in a sample of >61,000 scientific articles in the biological sciences to evaluate the factors that shape the formation of research teams. We find that authorship teams are more gender-assorted than expected by chance, with excess homotypic assortment accounting for up to 7% of published articles. One possible mechanism that could explain gender assortment and broader patterns of female representation is that women may focus on different research topics than men (i.e., the "topic preference" hypothesis). An alternative hypothesis is that researchers may consciously or unconsciously prefer to work within same-gender teams (the "gender homophily" hypothesis). Using network analysis, we find no evidence to support the topic preference hypothesis, because the topics of female-authored articles are no more similar to each other than expected within the broader research landscape.
  • Hundreds of Gibberish Papers Still Lurk in the Scientific Literature
    News in Focus (excerpt; the scanned page interleaves two stories). ... of Toulouse, France, came up with a new idea: searching for key grammatical phrases characteristic of SCIgen's output. Last May, he and Cabanac searched for such phrases in millions of papers indexed in the Dimensions database. After manually inspecting every hit, the researchers identified 243 nonsense articles created entirely or partly by SCIgen, they report in a study published on 26 May (G. Cabanac and C. Labbé, J. Assoc. Inf. Sci. Technol., https://doi.org/gj7b8h; 2021). These articles, published between 2008 and 2020, appeared in various journals, conference proceedings and preprint sites, and were mostly in ...

    From the adjacent story: ... says Coudert. He had previously posted about the Nature story on Twitter; McNutt replied, urging him to take action. "The NAS has chosen a policy that is very weak and that protects them in a way," says Coudert. The academy has said in the past that it does not have the resources for formal investigations, apart from for internal NAS business. The group relies on publicly documented investigations carried out by other organizations to begin inquiries into its members. The NAS informed Coudert of Marcy's termination last month; the chemist says it is a preliminary step in the right direction. ... says she was surprised and angry to learn that scientists continued to collaborate with the astronomer, pointing out that manuscripts posted on the arXiv preprint server in the past six months still listed Marcy as a co-author. "Where is the justice for women pushed out of the field if people continue to work with him?" Some of those papers point to Berkeley as Marcy's affiliation. A Berkeley spokesperson ...
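    The workflow described here (search a large corpus for fixed fingerprint phrases, then manually inspect every hit) can be illustrated with a short sketch. The fingerprint phrases and the two-paper corpus below are invented stand-ins; the actual phrases Cabanac and Labbé searched for are not given in the excerpt.

```python
# Corpus screening in the spirit described above: search many papers
# for fixed template phrases characteristic of a generator, then queue
# matches for manual inspection. Phrases are illustrative stand-ins,
# not the study's actual fingerprints.
FINGERPRINTS = [
    "in this position paper, we",   # assumed SCIgen-style template
    "we disprove that",             # assumed
]

corpus = {
    "paper-001": "In this position paper, we disprove that lambda calculus ...",
    "paper-002": "We measure battery life in a field study of 40 participants ...",
}

def screen(corpus: dict[str, str]) -> list[str]:
    """Return IDs of papers containing any fingerprint phrase."""
    suspects = []
    for paper_id, text in corpus.items():
        lowered = text.lower()
        if any(p in lowered for p in FINGERPRINTS):
            suspects.append(paper_id)  # queue for human review
    return suspects

print(screen(corpus))  # ['paper-001']
```

    A real screen needs rarer phrases than these and, as the study stresses, manual inspection of every hit, since common boilerplate also appears in genuine papers.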
  • A New Method and Metric to Evaluate the Peer Review Process of Scholarly Journals
    Pub Res Q (2014) 30:23-38. DOI 10.1007/s12109-013-9339-y. A New Method and Metric to Evaluate the Peer Review Process of Scholarly Journals. Adam Etkin. Published online: 6 December 2013. © The Author(s) 2013. This article is published with open access at Springerlink.com. Abstract: This paper proposes a new system that produces a metric indicating the level of peer review conducted prior to the publication of scholarly material. A mathematical algorithm is created that incorporates weighted values for the roles within the peer review process. The h-index, a value that "measures the productivity and impact of a scientist or scholar," is also incorporated. Each round of review is weighted using the square root, as later rounds of peer review are often less rigorous than earlier rounds. Once a paper is accepted, the calculation is conducted and a "peer review evaluation" metric, or "Peer Review Evaluation Score" (pre-SCORE), is available. Such a metric would benefit all engaged parties (authors, publishers, readers, libraries). Authors will know that their work is being evaluated by a trustworthy publication and by experts in their field. Legitimate, ethical publishers will be recognized as such. Readers would have the ability to filter out material which was not properly vetted, and libraries/consortia would have further assurance that their limited funds are spent wisely. Future studies to see whether there is a correlation between the pre-SCORE and the Impact Factor or highly cited material are possible.
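    The abstract names the ingredients of the metric (role weights, the h-index, square-root damping of later review rounds) but not the formula itself. One way those pieces could combine, written out purely as an assumed illustration rather than Etkin's actual algorithm:

```latex
% Illustrative reconstruction only, NOT Etkin's published formula:
% w_i is the weight of participant i's role in the review, h_i their
% h-index, and rounds r = 1..R are damped by the square root of the
% round number.
\[
  \text{pre-SCORE} \;=\; \sum_{r=1}^{R} \frac{1}{\sqrt{r}}
  \sum_{i \in \mathrm{round}\, r} w_i \, h_i
\]
```

    The square-root damping mirrors the abstract's note that later rounds are often less rigorous; everything else here is an assumption about how the weighted roles and the h-index might enter.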
  • Ad Hominem Arguments in the Service of Boundary Work Among Climate Scientists
    SISSA – International School for Advanced Studies. Journal of Science Communication, ISSN 1824-2049, http://jcom.sissa.it/. Received: October 12, 2011; Published: February 15, 2012. Article: Ad hominem arguments in the service of boundary work among climate scientists. Lawrence Souder, Furrah Qureshi. Abstract: Most accounts of an ideal scientific discourse proscribe ad hominem appeals as one way to distinguish it from public discourse. Because of their frequent use of ad hominem attacks, the Climategate email messages provoked strong criticisms of climate scientists and climate science. This study asks whether the distinction between public and scientific discourse holds in this case and thus whether the exclusion of ad hominem arguments from scientific discourse is valid. The method of analysis comes from the field of informal logic, in which argument fallacies like the ad hominem are classified and assessed. The approach in this study focuses on a functional analysis of ad hominem arguments: their uses rather than their classification. The analysis suggests three distinct functional uses of ad hominem remarks among the Climategate emails: (1) indirect, (2) tactical, and (3) meta-. Consistent with previous research on ad hominem arguments in both public and scientific discourse, these results reinforce the common opinion of their fallacious character. Only the remarks of the last type, the meta-ad hominem, seemed to be non-fallacious in that they might help to preempt the very use of ad hominem attacks in scientific discourse. "In this case, de Freitas is such a poor scientist that he may simply disappear." (Tom Wigley, Climategate email, 1051230500.txt) "Pielke wouldn't understand independence if it hit him in the face." (Phil Jones, Climategate email, 1233249393.txt) Introduction: As scientists-in-training, Chris de Freitas and Roger Pielke, Sr., may have suffered appropriately the sting of such remarks from an overbearing advisor on their dissertation committees in a moment of impatience.