
Cites & Insights: Crawford at Large
Libraries • Policy • Technology • Media
Volume 18, Number 8: November 2018
ISSN 1534-0937

Policy
Ethics

Originally a full-issue mega-essay, then intended as a medium-length “essay” in five or six parts—and, eventually, back to a full-issue essay. Maybe “Policy” is the wrong flag, but it’s been years since I’ve used it, while “Intersections” has been overused. Much of this relates to journals in general and OA in particular, including some aspects of the whole “predatory” thing.

You might or might not note the absence of two “list” terms in this roundup. That’s deliberate. Whether or not I fully buy into the warning raised by the Houghtons, I respect the point and will strive to use “Beall-listed” (decidedly pejorative) and “DOAJ” rather than color-related terms.

The “Grievance” Hoax

All fairly recent and centered around one event, as publicized in the first item below. I’ve been involved in tweetthreads about this because I’m becoming more comfortable saying that hoax scholarly papers are unethical. Period. I believe this case may be more unethical than most, but they’re all unethical: deliberately attempting to pollute the stream of scholarship in order to Make A Point.

If I believe a bank has substandard theft-prevention mechanisms or that banks aren’t where you should keep your money, those are opinions. If I rob a bank, get caught, and say I was Making A Point, I suspect lawyers would suggest pleading insanity. It may not be a perfect analogy, but I think it’s an apt one. So don’t expect “objective” reportage here: I have an opinion. Remember, The Sting featured criminals as protagonists, deliberately trying to steal somebody’s money. Heroic or not, it was a crime.

Academic Grievance Studies and the Corruption of Scholarship

This “47 minute read” (everybody reads at the same rate?) by Helen Pluckrose, James A. Lindsay and Peter Boghossian appeared October 2, 2018 at Areo. What’s Areo?

Areo is an opinion and analysis digital magazine focused on current affairs — in particular: humanism, culture, politics, human rights, science, and free expression.

Areo publishes thoughtful essays from a variety of perspectives compatible with broadly liberal and humanist values. It places particular priority on evidence and reason-centered pieces. Our contributors are intellectually, professionally and ideologically diverse and include liberals, conservatives, socialists, libertarians, atheists, Christians, Muslims, Jews, and more. As much as possible, Areo aims to avoid polarizing tribalistic stances and prioritizes intellectual balance, charity, honesty and rigor. We believe in the unfettered freedom to explore, think, and challenge ideas and concepts, and we’re intent on taking part in the conversations that will shape our tomorrow.

Pluckrose is the editor. It’s a magazine, not a peer-reviewed journal, but since Cites & Insights is also a non-peer-reviewed thing, that isn’t a strike against it. I spent more time skimming through articles at Areo than I’d intended, and I’d say whether or not it’s “liberal” depends on your definition of the word and its current applications. There are a lot of items opposing what they call identity politics; that would almost seem to be an editorial passion. But that should be neither here nor there, right? Maybe, maybe not.

Back to the article—a long one (Word says 11,728 words not including author bios and several thousand words worth of comments). It begins:

Something has gone wrong in the university—especially in certain fields within the humanities. Scholarship based less upon finding truth and more upon attending to social grievances has become firmly established, if not fully dominant, within these fields, and their scholars increasingly bully students, administrators, and other departments into adhering to their worldview. This worldview is not scientific, and it is not rigorous. For many, this problem has been growing increasingly obvious, but strong evidence has been lacking. For this reason, the three of us just spent a year working inside the scholarship we see as an intrinsic part of this problem.

The second and third sentences there are fairly blunt. Universities should not be concerned with “social grievance”: that seems to be the message, and while I’d probably agree for the math department, I’m a lot less convinced that social studies and the humanities should ignore social issues.

We spent that time writing academic papers and publishing them in respected peer-reviewed journals associated with fields of scholarship loosely known as “cultural studies” or “identity studies” (for example, ) or “critical theory” because it is rooted in that postmodern brand of “theory” which arose in the late sixties. As a result of this work, we have come to call these fields “grievance studies” in shorthand because of their common goal of problematizing aspects of

culture in minute detail in order to attempt diagnoses of power imbalances and oppression rooted in identity.

Maybe that paragraph’s as much as I need to quote directly—given the liberal use of scare quotes and the magazine’s overall bent, it’s pretty clear that the aim is to discredit whole fields rather than specific journals. Not quite clear enough? How about this:

We undertook this project to study, understand, and expose the reality of grievance studies, which is corrupting academic research. Because open, good-faith conversation around topics of identity such as gender, race, and sexuality (and the scholarship that works with them) is nearly impossible, our aim has been to reboot these conversations. We hope this will give people—especially those who believe in liberalism, progress, modernity, open inquiry, and social justice—a clear reason to look at the identitarian madness coming out of the academic and activist left and say, “No, I will not go along with that. You do not speak for me.”

Maybe that’s all internally consistent to you; I’m less enlightened.

Their methodology was to submit “outlandish or intentionally broken” papers to “top journals”—defective papers that were close enough to norms for the field to be accepted while still including “some little bit of lunacy or depravity.”

This process is the one, single thread that ties all twenty of our papers together, even though we used a variety of methods to come up with the various ideas fed into their system to see how the editors and peer reviewers would respond. Sometimes we just thought a nutty or inhumane idea up and ran with it. What if we write a paper saying we should train men like we do dogs—to prevent rape culture? Hence came the “Dog Park” paper.
What if we write a paper claiming that when a guy privately masturbates while thinking about a woman (without her consent—in fact, without her ever finding out about it) that he’s committing sexual violence against her? That gave us the “Masturbation” paper. What if we argue that the reason superintelligent AI is potentially dangerous is because it is being programmed to be masculinist and imperialist using Mary Shelley’s Frankenstein and Lacanian psychoanalysis? That’s our “Feminist AI” paper. What if we argued that “a fat body is a legitimately built body” as a foundation for introducing a category for fat bodybuilding into the sport of professional bodybuilding? You can read how that went in Fat Studies.

In other cases, they “magnified” what they saw as problems in a field or used other techniques.

The team wrote 21 papers. Seven were accepted. “The papers themselves span at least fifteen subdomains of thought in grievance studies, including (feminist) gender studies, masculinities studies, queer studies, sexuality studies, psychoanalysis, critical race theory, critical whiteness theory, fat studies, sociology, and educational philosophy.”

While the authors view writing intentionally phony papers as “a defensible necessity of investigation,” when people started asking about their fictional author and institution this became “outright lying,” which bothered them. Calibration here: Making up statistics, deliberately falsifying situations—that’s “defensible necessity.” Supporting the lies of using a nonexistent author and institution when called on it—that’s “outright lying.” I’m reminded of Tom Lehrer’s “The Irish Ballad.”

There’s so much more here that I’d like to comment on that I need to restrain myself. Most of the article is summaries of the 21 articles and highly selective quotes from peer reviews. The conclusions go on for some length, with these three paragraphs being (perhaps) central:

Based on our data, there is a problem occurring with knowledge production within fields that have been corrupted by grievance studies arising from critical constructivism and radical skepticism. Among the problems are how topics like race, gender, sexuality, society, and culture are researched. Perhaps most concerning is how the current highly ideological disciplines undermine the value of more rigorous work being done on these topics and erodes confidence in the university system. Research into these areas is crucial, and it must be rigorously conducted and minimize ideological influences. The further results on these topics diverge from reality, the greater chance they will hurt those their scholarship is intended to help.

Worse, the problem of corrupt scholarship has already leaked heavily into other fields like education, social work, media, psychology, and sociology, among others—and it openly aims to continue spreading. This makes the problem a grave concern that’s rapidly undermining the legitimacy and reputations of universities, skewing politics, drowning out needed conversations, and pushing the culture war to ever more toxic and existential polarization.
Further, it is affecting activism on behalf of women and racial and sexual minorities in a way which is counterproductive to equality aims by feeding into right-wing reactionary opposition to those equality objectives.

What do we hope will happen? Our recommendation begins by calling upon all major universities to begin a thorough review of these areas of study (gender studies, critical race theory, postcolonial theory, and other “theory”-based fields in the humanities and reaching into the social sciences, especially including sociology and anthropology), in order to separate knowledge-producing disciplines and scholars from those generating constructivist sophistry. We hope the latter can be redeemed, not destroyed, as the topics they study—gender, race, sexuality, culture—are of enormous importance to society and thus demand considerable attention and the highest levels of academic rigor. Further, many of their insights are worthy and deserve more careful consideration than they currently receive. This will require them to adhere more honestly and rigorously to the production of knowledge and to place

scholarship ahead of any conflicting interest rather than following from it. This change is what we hope comes out of this project.

Maybe. And maybe the supposed goals of this project balanced out the ethics of submitting phony papers (and wasting tens or hundreds of hours of peer review and editor time). Gee, I remember that computer science journals published dozens of papers generated by a spoofing program—so computer science must be deeply corrupt. Or not.

This one’s not about OA. I checked the journals involved. They come from Taylor & Francis, SAGE, Springer, Wiley, Elsevier, two university presses and one small publisher. None of them are Gold OA. Indeed, if you can claim that various stings actually discredit certain OA publishers, then at least two or three of these major toll publishers have been discredited and should be avoided by serious scholars. That is, of course, an absurd statement—as is “discrediting” OA publishers by getting a bad paper into one journal.

Were the papers all so ludicrous that they should have been rejected out of hand? I’m not enough of a specialist to say, although the summaries of one or two don’t seem that outlandish. Was the whole project deeply unethical? Yes, I believe it was. But let’s see what others have to say…

What the ‘grievance studies’ hoax is really about

This piece, posted by Alison Phipps on October 4, 2018 at her site genders, bodies, politics, is a freely accessible, slightly longer version of a piece published the same day at Times Higher Education. After a paragraph establishing what this is about, we get the heart of Phipps’ commentary—and since she says it’s Open Access and it is both succinct and clear, I’ll quote the remainder:

Pluckrose et al claim to be ‘left-leaning’ scholars who position themselves against what they pejoratively call ‘grievance studies’ (a term which, whether they intend it to or not, evokes a canon of right-wing ‘anti-victimism’). ‘Grievance studies’ encompasses a variety of disciplines including sociology, anthropology, gender studies and critical race studies. Their key target is described as ‘social constructivism’, which seems to consist of any attempt to demystify categories usually defined as ‘natural’ (so they actually mean social constructionism). Some of the tenets they take issue with are: the idea that gender inequalities are not to do with biology; the idea that obesity is a ‘healthy and beautiful body choice’; specific theories such as standpoint epistemology; and specific methodologies such as autoethnography.

There’s nothing wrong with academics holding each other up to scrutiny – it’s healthy and necessary. But despite their claim to be engaging in ‘good-faith’ critique, it’s clear that Pluckrose, Lindsay and Boghossian actually aim to undermine fields they have political – not scholarly – objections to. First, there is plenty of scholarship within ‘grievance studies’ which does not take a social constructionist perspective, and

plenty outside it which does. Secondly, as they have targeted only journals in ‘grievance studies’ fields and not others, there is no way to know whether the problems they identify are specific or more general across the sector (a glance at Retraction Watch suggests the latter). Indeed, despite their professed mission to restore methodological rigour where they feel it’s lacking, their own study incorporates no control group (not to mention the complete lack of research ethics).

Most of the hoaxed journals are gender studies ones, and Boghossian and Lindsay have targeted gender studies before. This was with a hoax piece entitled ‘The Conceptual Penis as Social Construct’, submitted to a journal which turned out to be pay-to-publish. It seems, then, that these three may be harbouring some grievances themselves.

The current hoax features papers which are certainly outlandish. But Pluckrose et al admit they were not able to achieve a ‘conceptual penis’ style hoax with the journals they targeted this time, and had to produce much ‘less obvious’ papers which, in many cases, involved inventing datasets and citing relevant literature. Furthermore, some of the papers are simply based in premises (e.g. social constructionism) or political principles (e.g. trans inclusion) that the hoax authors find hard to accept. For instance, a paper entitled ‘An Ethnography of Breastaurant Masculinity’ argues that establishments such as Hooters help to construct problematic forms of masculinity (whereas Pluckrose et al seem to think that men are just biologically programmed to like looking at breasts). In their description of the aims of this particular hoax, they say, ‘to see if journals will publish papers that seek to problematize heterosexual men’s attraction to women’. Well, yes – problematising heterosexual attraction is a key premise on which gender studies scholarship is based.
Like the hoax itself, their reporting of it is also riddled with misrepresentation. Editors of one of the targeted journals tell me that the paper submitted to them was recorded as a desk reject and did not go out to reviewers and was not, as the authors claim, given a revise and resubmit. Michael Keenan notes that another paper was rejected by the journal three times, with very critical reviewer commentary, but Pluckrose et al describe the journal’s response as ‘warm’ and place this alongside details of a paper which was accepted, which is very misleading. They also report they received four invitations to peer review other papers ‘as a result of their exemplary scholarship’, but neglect to mention whether these were merely auto-generated from a list of previous submitters to the journals in question.

The exposure of the hoax ends with a demand that all major universities review various areas of study (gender studies, critical race theory, postcolonial theory and other disciplines such as sociology and anthropology) ‘in order to separate knowledge-producing disciplines and scholars from those generating constructivist sophistry.’ This is a

chilling statement which will certainly feed right-wing attacks on gender studies such as those which have recently happened in Hungary, as well as the targeting of feminist and critical race scholars by the ‘alt’-right. Pluckrose et al claim this is not their intention, but given their various misrepresentations, you’ll forgive me if I don’t believe them.

As a scholar in ‘grievance studies’ myself, I think the hoax says more about conditions in the sector than anything else. Pressure to publish has created an increasing volume of submissions (and arguably also a drop in standards). Unpaid peer review often has to be squeezed in between swelling workload demands. If we’re truly worried about academic rigour, we might want to start there. Alternatively, we could think less about the flaws of ‘grievance studies’ and more about how academic work has contributed to legitimate grievances by bolstering neoliberal economic reforms or neo-imperialist foreign policy. To me, that’s corruption of scholarship.

I’m not a scholar at all, much less one in “grievance studies” (and I think Phipps is right to scare-quote that term), but this all strikes me as reasonable. I’d also suggest reading the Michael Keenan post linked to here, which dissects portions of the article and begins with this summary:

tl;dr: this latest academic journal hoax is over-hyped and the reporting on it is terrible

I’m not linking to or commenting on all of what may have been a one-week sensation. But there are other notes worth pointing out, I believe.

ara wilson thread

I’m reluctant to link to threads and even more reluctant to quote significantly from them, but the ones linked to here are all open threads that, as far as I can tell, anyone can see and follow. This one, for example, began around 3 p.m.
on October 3, 2018, actually starting as a retweet-with-comment pointing to a thread that’s very laudatory of the hoax (although some later commenters are less so). Wilson points out some of the problems with the hoax article itself, while agreeing that the journals might review their review processes.

zeynep tufekci thread

Tufekci’s thread begins early on October 5, 2018 (citing the Keenan post noted above) and begins with a clear and concise opinion of the hoax itself:

That hoax is oversold, authors *very* deliberately misrepresenting what they did. If you fake data and get published, that’s fraud, not hoax. If you cherry-pick a nice sentence from review that rejects paper, that’s not a successful hoax.

In additional tweets, Tufekci calls it “a misleading stunt” but also praises Sokal’s old stunt, and generally seems to think hoaxing is just fine as long

as it’s reported honestly. Some later commenters get into interesting discussions, with one pretty much damning certain fields and one correctly noting that peer review begins by assuming honesty and good faith—and Tufekci offering a slightly harsher summary:

These people just have an axe to grind against some fields. Fine, make a conceptual argument. That’s okay. This is fraud passing as hoax through deliberate and dishonest misrepresentation of the write-up, and journalists/sympathizers not checking. Not a good look.

Akbar Rasulov thread

This thread begins at 2:58 a.m. on October 4, 2018, and is primarily a blog post in tweetthread form—one that doesn’t exactly praise the hoax. It begins:

THREAD. A few thoughts about this new Sokal-esque “hoax” episode. (1) So a group of professionally trained obsessives with a chip on their shoulder, after expending considerable time, energy, and other resources, have managed to hack the academic publishing “system”…

and ends:

Bottom line: if in [this scenario], you are thinking “good on the pranksters” or “those shite journals got what they deserved for having such lax standards”, then not only are you glorifying the bully, you are also essentially laughing at a victim of online fraud who got defrauded by a malicious liar because the latter just had that fratboy moment.

Read the rest. Other than noting that I believe the hoaxters were trying to discredit the fields more than the journals, I have little to add.

Michael Hendricks thread

This relatively brief but pungent thread begins at 9:04 a.m. on October 4, 2018 with a retweet from Virginia Hughes (which is noting a BuzzFeed article—see next item) and this comment (note that Hendricks is a neurobiologist at McGill):

I still don’t get the difference between submitting sincerely fraudulent work to aid your career and submitting performatively fraudulent work to aid your career. Both exploit trust and the assumption of good faith for professional gain.

He follows with a graph noting that the biggest journals in the hard sciences also tend to have the most retractions, and a note about a seemingly odd claim in a recent paper at one top hard-science journal.

Another scientist tries to defend the hoax, saying that this kind of fraud is beneficial because it exposes “systemic fraud,” but rejects the notion that fraudulent papers being published in, say, cell biology journals would mean cell biology was fraudulent. The response is a classic double standard.

Here’s What Critics Say About That Big New Hoax On Gender Studies

That’s the headline for a fairly good (and fairly long) piece by Virginia Hughes and Peter Aldhous published at 11:48 a.m. on October 4, 2018 at BuzzFeed News. The tease:

The hoaxers say that gender, race theory, and sexuality studies are corrupting academia. Critics say the experiment was itself a biased sham.

The article is not helped by this:

The article was praised by some. “Is there any idea so outlandish that it won’t be published in a Critical/PoMo/Identity/”Theory” journal?” a Harvard psychologist tweeted. Jeffrey Beall, a retired expert in academic publishing at the University of Colorado Denver, told BuzzFeed News that the hoax worked partly because academia is biased toward the political left. “This is what happens when you have one-party rule in higher education,” he said.

That Beall blames this on “one-party rule” is scarcely surprising; that the writers regard him as an “expert in academic publishing” is just sad. More useful:

Yes, these critics said, there are major problems with peer review in studies of gender, race, and other facets of identity politics — just like psychology, nutrition, and many other fields of scientific research. But the trio made no attempt to test whether the fields they call “grievance studies” have a particularly egregious problem. They set out from the get-go to prank these disciplines but not others.

“For all of the hoaxers’ emphasis on scientific rigor, their experiment doesn’t have a control,” said Sarah Richardson, a professor of the history of science and of studies of women, gender, and sexuality at Harvard University. “By their own standards, we can’t scientifically conclude anything from it.”

The rest of the article includes some interesting comments about “leading” journals and how fields are defined—and some odd but interesting feedback from one of the hoaxers.
Because the trio fabricated data (including fake assessments of the genitals of nearly 10,000 dogs), purposely mimicked the language of genuine papers, and responded positively to reviewers’ comments to get papers published, some critics suggested the hoax had more in common with notorious examples of scientific fraud than prior examples of spoof papers.

But then, I wonder about the ethics of most prior examples as well…

What an Audacious Hoax Reveals About Academia

Did I mention Yascha Mounk earlier? Here’s his October 5, 2018 piece at The Atlantic—and it’s fair to say he’s positive about the hoax. He praises

the Sokal hoax and bemoans the fact that the “discredited” field still influences academia, and leaves little doubt as to his own stance:

Generally speaking, the journals that fell for Sokal Squared publish respected scholars from respected programs. For example, Gender, Place and Culture, which accepted one of the hoax papers, has in the past months published work from professors at UCLA, Temple, Penn State, Trinity College Dublin, the University of Manchester, and Berlin’s Humboldt University, among many others. The sheer craziness of the papers the authors concocted makes this fact all the more shocking…

He tosses in “dreck” and is absolutely certain that a “canine rape culture” paper is invidious nonsense—oh, but the real problem with these fields is that they can lead to doubt about real scholarship:

There are many fields of academia that have absolutely no patience for nonsense. While the hoaxers did manage to place articles in some of the most influential academic journals in the cluster of fields that focus on dealing with issues of race, gender, and identity, they have not penetrated the leading journals of more traditional disciplines. As a number of academics pointed out on Twitter, for example, all of the papers submitted to sociology journals were rejected. For now, it remains unlikely that the American Sociological Review or the American Political Science Review would have fallen for anything resembling “Our Struggle Is My Struggle,” a paper modeled on the infamous book with a similar title.

As you know, no respectable medical journal would have published a paper linking autism to vaccinations, just as no respectable “glamour” journal would have published a paper claiming arsenic-based life. Because, you know, those fields have no patience for nonsense. Good to know.
Mounk dismisses suggestions that support of the hoax is “a coordinated attack from the right” by saying that the hoaxers “describe themselves as left-leaning liberals.” I think of some of #45’s self-descriptions and wonder about the assumed veracity of self-description. Mounk’s a lecturer on government; he should know better. Mounk’s conclusion:

It would, then, be all too easy to draw the wrong inferences from Sokal Squared. The lesson is neither that all fields of academia should be mistrusted nor that the study of race, gender, or sexuality is unimportant. As Lindsay, Pluckrose, and Boghossian point out, their experiment would be far less worrisome if these fields of study didn’t have such great relevance. But if we are to be serious about remedying discrimination, racism, and sexism, we can’t ignore the uncomfortable truth these hoaxers have revealed: Some academic emperors—the ones who supposedly have the most to say about these crucial topics—have no clothes.

If that’s another way of saying that some academics are unsound or not as good as they should be, true, but we didn’t need unethical stunts to know that.

What the “Grievance Studies” Hoax Actually Reveals

Daniel Engber, writing on October 5, 2018 at Slate, has a somewhat different take in this careful examination of the hoax:

The headline-grabbing prank has more to do with gender than with academia.

The link-heavy article covers a lot of ground, with this summary:

But if you really take the time to read through Pluckrose, Lindsay, and Boghossian’s 11,650-word essay, along with the bogus papers they produced, you’ll find the project fails to match its headline presentation. The hoaxers’ sting on academia is supposed to have exposed the “sophistry” and “corruption” that exist across a broad array of research fields—those built around the “goal of problematizing aspects of culture in minute detail in order to attempt diagnoses of power imbalances and oppression rooted in identity.” The authors call these “grievance studies” and say their disregard for objective truth has yielded to a widespread “forgery of knowledge.” Yet these grandiose conclusions overstate the project’s scope and the extent of its success. They also serve as cover, in a way, for what appears to be the authors’ lurking inspiration: not their problems with the scholarship of grievance, but with that of gender.

Engber analyzes the papers and the discussion of them in the hoax essay, noting that the essay seems to dwell on articles that were rejected—because helpful reviewers sometimes said respectful things. (That reviewer’s tweetthread is itself worth reading, including this response to someone’s claim that the reviewer should have recognized that the paper was “totally insane”:

Good luck employing your gut-check standard in any hard science. That’s not how actual scholarship works. Almost all of it is counterintuitive, so something sounding silly is a bad criterion for real researchers to reject. The system isn’t built around malicious hoaxes.

Ya think?)
Engber notes that one of the seven accepted “papers” was a collection of bad poetry accepted by a journal called Poetry Therapy:

Let’s be clear: This was bad poetry. (“Love is my name/ And yours a sweet death.”) But I’m not sure its acceptance sustains the claim that entire fields of academic inquiry have been infiltrated by social constructivism and a lack of scientific rigor.

Another accepted essay, about the nature of satire, strikes Engber as plausible—and two others are scholarly essays, not research papers.

That leaves three, the ones that have gotten the most attention, and each of the three was presented as the result of entirely faked empirical

research. Engber believes that, if the data had been real, at least one and maybe all of the papers could have had legitimate value.

It’s true that Pluckrose, Lindsay, and Boghossian tricked some journals into putting out made-up data, but this says nothing whatsoever about the fields they chose to target. One could have run this sting on almost any empirical discipline and returned the same result. We know from long experience that expert peer review offers close to no protection against outright data fraud, whether in the field of gender studies or cancer research, psychology or plant biology, crystallography or condensed matter physics. Even shoddy paste-up jobs with duplicated images and other slacker fakes have made their way to print and helped establish researchers’ careers. So what if these hoaxers did the same for fun? These examples haven’t hoodwinked anyone with sophistry or satire but with a simple fabrication of results.

That peer review can’t catch fraudulent results should surprise nobody: how can it without redoing the study? Then there’s the usual hoax problem: papers are only submitted to would-be targets, without any controls:

Even if we push the made-up-data papers to the side, those results are still quite grave: Twenty-five percent of bullshit papers made their way through peer review. But what, exactly, does it prove? It would be nice to know how often counterfeit research makes its way into the journals of adjacent fields; e.g., ones that touch on race, gender, and sexuality, but are uncorrupted by radical constructivism and political agendas. Sadly, we may never know, because the field of humanities hoaxing appears to suffer from several of the flaws it aims to expose. For one thing, it’s politically motivated, in the sense that its practitioners target only those politicized research areas that happen to annoy them. For another, it’s largely lacking in scientific rigor.
Most (but not all) hoax projects lack meaningful controls, and they’re clearly subject to the most extreme variety of publication bias. That is to say, we only hear about the pranks that work, even though it’s altogether possible that skeptic-bros are writing bogus papers all the time, submitting them to academic journals, and ending up with nothing to show for their hard work. How many botched Sokal-style hoaxes have been tucked away in file drawers and forgotten because they fail to “prove” their point?

I like Engber’s concluding paragraphs, but I’ll leave them for you.

What the ‘Grievance Studies’ Hoax Means

I’ll close with an October 9, 2018 composite article from a source I normally omit because it’s paywalled except for a few articles per month: the Chronicle of Higher Education.

If you’re able to access the composite article (I call it that because it’s composed of several brief signed essays), it’s worth doing. Briefly:

• Yascha Mounk doubles down on his support for the hoaxers and disdain for the fields being attacked.

• Carl T. Bergstrom heads his essay “A Hollow Exercise in Mean-Spirited Mockery”; calls the project ethically indefensible and straight-up academic misconduct; and says it’s uninformative. I like his analogy: “Publishing a bad-faith paper based on fraudulent data proves nothing more about the state of a research field than passing a bad check proves about the health of the financial system.” As he notes, peer review is simply not designed to detect fraud: he says the purpose is “first and foremost to improve manuscripts.”

• The first-one-side-then-the-other approach follows with “In Defense of Hoaxes” by Justin E.H. Smith. I admit I find his examples and argument irrelevant and unconvincing, and as for his conclusion, well…what can I say? Smith closes: “Any academic who thinks hoaxing as such is unethical or nugatory is a dull and petty functionary, and evidently has no interest in participating, or reveling, in the ongoing life of ideas.”

• Next up: “A Limited Intellectual Vision” by Natalia Mehlman Petrzela. She seems to say that the hoaxers performed a real service and that peer review needs to improve—but also that they’re arrogant and have limited intellectual vision. She feels that the hoax is likely to lead to more defunding and devaluing of the humanities in general.

• “A Strange Start to Peer Reviewing” is David Schieber’s comment (more formal than one noted earlier) about his very first peer review (as a grad student), a review that recommended rejection but also tried to offer constructive comments. That review was subjected to selective quotation in the hoax essay: “This selective use of my comments seemed disingenuous. They were turning my attempt to help the authors of a rejected paper into an indictment of my field and the journal I reviewed for, even though we rejected the paper.” Worth noting: in retrospect, Schieber still believes in constructive reviews.

• Heather E. Heying is having none of this in “Exposing the Madness of Grievance Studies,” and, well, I don’t know what to say.

• Finally, Laurie Essig and Sujata Moorti say “Only a Rube Would Believe Gender Studies Has Produced Nothing of Value.” It’s clearly stated and worth reading.

Stings and Peer Review

The hoax discussed above appears to have been an effort to discredit disciplines rather than specific journals. Herewith a few items from the last couple of years about other scholarly article stings and cases of problematic peer review.

Peer review post-mortem: how a flawed aging study was published in Nature

Hester van Santen published this on December 9, 2016 at nrc.nl. It begins by recounting the start of a lecture by demographer Jean-Marie Robine on the increasing life span of humans:

But after about five minutes, he could not avoid talking about a recent article in Nature by three complete unknowns in the field. It was published on 5 October, under the title Evidence for a limit to human lifespan. Following a demographic analysis, three geneticists from New York concluded that people will never grow older than approximately 115.

“We have colleagues”, began Robine with palpable irony, “who suggest that, contrarily to what we’d expect, we are facing a strong limit [to human lifespan].”

Only during the Q&A did a PhD student say it out loud: “Frankly, this is the weakest paper I have read in a top journal.” Robine did not comment. Perhaps it was because his own name was at the bottom of the publication. Nature thanks J.-M. Robine for his peer-review.

Then there are comments from some leading demographers, all essentially saying the Nature article was bad science. One, Jim Vaupel, had this to say: “They just shovelled the data into their computer like you’d shovel food into a cow.”

How could this article appear in one of the world’s leading scientific journals? It’s a question which is regularly asked in the academic world, but never answered. Not, for example, in response to the widely ridiculed article about the ‘arsenic bacteria’ in Science (2011), which purportedly use arsenic as a building block. Not after the big scandal about STAP stem cells in Japan (2014), which began with an article in Nature that was later withdrawn. And certainly not with regard to the many uncontroversial publications in top journals.
As with most (but no longer all) journals, Nature keeps its peer review process “in strict confidence,” but van Santen discussed the situation with two of the three peer reviewers and the authors.

According to Nature, the peer-review process is rigorous and independent, with the claims of each paper being weighed against the many others. But my conversations yielded a very different, disconcerting

picture. The authors said their reviewers had a better grasp of the material than they did themselves. The reviewers knew that. They had criticisms, yet they did not scrutinise a large part of the analysis at all. And Nature let it slide.

The corresponding author is a geneticist. The other two biologists “specialize in trawling through large volumes of DNA data”—and when a casual discussion turned to the question of “whether the oldest people are actually still getting older,” they decided to look into it. Which involved looking at supercentenarians, people aged 110 or older.

There’s a lot more here, and it’s interesting reading. For example, Nature’s list of 11 primary areas for reviewers to consider does not include statistics and methodology (those are in the “if time is available” category) but does include five about novelty and importance, such as “Is the paper likely to be one of the five most significant papers published in the discipline this year?” (It matters less whether it’s good science than whether it’s BIG science?)

It gets more interesting: after receiving extensive review comments, Nature rejected the paper…and then changed its mind. The authors revised the paper, adding scores of additional graphs and doing additional analysis—but not changing the conclusion. Apparently, the reviewers didn’t spend much time reviewing the new analysis, and the article was published, accompanied by a positive editorial commentary written by the one peer reviewer who’d always agreed with the conclusion.

This one wasn’t a sting, but it does say something about peer review and the editorial processes at “glamour journals.” Worth reading.

The open access “sting” by Science, three years on

This brief piece, by Zen Faulkes on December 23, 2016 at NeuroDojo, looks back at the Bohannon sting, looking at four of the papers that made it through production. I will note this comment before the case studies:

He was not the first, nor has he been the last, to set out to punk crappy journals with obviously bad papers. It’s practically a scientific genre in its own right now.

Indeed. In any case, here’s what happened:

• In one journal, there’s no retraction notice—but also no article. Instead, there’s a gap in page numbers.

• In another case, the whole journal has disappeared.

• Same in the third case.

• In the fourth case, there is an explicit retraction notice, with this statement:

JBPR has been a victim of bogus submissions; and this paper is one of those and is hereby retracted. The editor in chief takes full responsibility for accepting this bogus manuscript for publication in JBPR. We sincerely assure readers that something like this will not occur again.

I have to agree with Faulkes on one point:

The last line makes me raise my eyebrows a bit. No journal can assure readers that they won’t make this mistake again. It’s just not possible to have a 100% failsafe fraud detection system.

(Note: the ethics issues here are with Bohannon, not Faulkes. I see no ethical issues in tracing what’s happened to published articles and journals—what would those issues be?)

One weird trick that would kill predatory journals

Same author, different date: Zen Faulkes, April 10, 2017, NeuroDojo. Faulkes notes another sting success, says “People continue to be (in my mind) disproportionately upset about junk journals,” and goes on to offer a solution:

The main reason that junk journals can fool people (even some in relatively sophisticated academic environments in an industrialized nation) is that they can claim to be peer-reviewed. There is no simple way to know if a journal is peer reviewed, because those critical pre-publication reviews are normally confidential.

My “not at all novel” solution for how we could kill off junk journals is: Publish the reviews. Just the content of the review, not necessarily the identity of the reviewers…

Real journals have the reviews to publish. Junk journals will have no reviews they can publish. The effort spent generating plausible fake reviews seems to be far too high for a junk journal to keep up the charade for long. With that one change, whether a journal is truly peer reviewed (or not) is easily verifiable.

[Excerpted slightly.] As you may have guessed, I believe that a lot of supposedly “predatory” journals aren’t, but I also agree that a journal that claims peer review, but doesn’t actually do peer review, is a junk journal. (“Junk” is also a more appropriate word than “predatory.”) Seems like a good idea. It should, of course, apply to all journals, whether OA or not.

Hoax With Multiple Targets

This piece, by Scott Jaschik on May 22, 2017 at Inside Higher Ed, is about the sting or hoax that preceded the ‘grievance’ hoax discussed at length in

Cites & Insights November 2018 16 the first section of this roundup: “The Conceptual Penis as a Social Con- struct” by Peter Boghossian and James A. Lindsay. They’re pretty clear about their intent: “We intended to test the hypothesis that flattery of the academic left’s moral architecture in general, and of the moral orthodoxy in gender studies in particular, is the overwhelming determiner of publication in an academic journal in the field. That is, we sought to demonstrate that a desire for a certain moral view of the world to be validated could overcome the critical assessment required for legitimate scholarship. Particularly, we suspected that gender studies is crippled academically by an overriding almost-religious belief that maleness is the root of all evil. On the evidence, our suspicion was justified.” The report notes some early pro-hoaxer comments as well as some less enthusiastic follow-up, e.g.: Ketan Joshi, an Australian scientist and consultant, wrote on his blog that it is important to remember that many scientists have published hoax articles in science journals -- and that humanities disciplines are not the only ones vulnerable to such attacks. Further, he wrote that “a single instance isn’t sufficient evidence to conclude that an entire field of research is crippled by religious man-hating fervor, and that anyone pushing that line is probably weirdly compromised.” Then there’s the actual path to publication. It was submitted to one journal published by Taylor & Francis (on behalf of a scholarly society). That jour- nal rejected it…and suggested that they try another T&F journal, which accepted it. The sociology blog Orgtheory.net wrote of the connections between Taylor & Francis, NORMA and Cogent Social Science. 
“So get this: If your article gets rejected from one of our regular journals, we’ll automatically forward it to one of our crappy interdisciplinary pay-to-play journals, where we’ll gladly take your (or your funder’s or institution’s) money to publish it after a cursory ‘peer review.’ That is a new one to me. There’s a hoax going on here, all right. But I don’t think it’s gender studies that’s being fooled.”

I originally wrote “this is a long article” based on the scroll bar—but that’s wrong. The article’s succinct, but it’s followed by 147 comments, and those do go on. And on…

Peering Into Peer Review

This May 24, 2017 “Library Babel Fish” column by Barbara Fister at Inside Higher Ed is a commentary stemming from the sting just described. Fister recounts the basic story and the slight complication of cascading journals.

It’s another Sokal moment proving that gender studies and much of social science is foolish, postmodernism is balderdash, and academic publishing too full of jargon, theory, and lax standards. Man-hating feminists and climate change got thrown into the parody, too, to demonstrate the dominance of “morally fashionable nonsense,” which I guess is a way of saying “political correctness” without all the political baggage.

Fister notes some points in the long history of “pointing and laughing” (academic hoaxes and stings).

What can we conclude from these kinds of hoaxes? There are plenty of people out there who are happy to take your money. The only folks who were chastened by these hoaxes are the two legitimate publishers – Springer and IEEE – who got sloppy as they included fake conference abstracts in their subscription databases. The rest don’t have reputations at stake. They’re scammers. Of course they will accept nonsense papers, so long as you pay them. They might have a bridge to sell you as well, or a fortune awaiting you in a Nigerian bank. The lesson isn’t that peer review is broken or that open access is a fraud. It’s simply that you need to think before you send your research off to a publisher you’ve never heard of – or before you assume all lines in every CV represent genuine peer-reviewed research.

Fister notes a post by Ketan Joshi, who also noted that the notorious vaccine/autism Wakefield paper was peer-reviewed. “Wakefield’s discredited theory has led to the deaths of children. So far as I know, nobody has died because of an article published by a scam faux-journal website.” Indeed. (Fister also notes, which I had forgotten, that the Sokal hoax was not a peer review sting: it wasn’t peer reviewed.)
Then there’s the other side: the two scholars who took a dozen published psychology articles, changed the author names and institutions, and resubmitted them to the same journals that published them: “in most cases they were rejected, citing methodological flaws.” Fister believes the demand for academics to be “insanely productive” as measured by publications is part of the problem, and concludes:

I’m not sure how to put a stop to this unhelpful acceleration of publishing demands, but the idea that publications are the measure of the worth of academics is a hoax that has serious consequences.

A handful of comments; nothing to add about them.

Journal retracts 16-year-old paper based on debunked autism-vaccine study

This Retraction Watch item from October 16, 2018 is somewhat related to the Wakefield paper discussed (briefly) above—or, more specifically, to a paper that relied on the Wakefield paper. And it only took sixteen years to retract this paper, which begins:

Vaccinations may be one of the triggers for autism. Substantial data demonstrate immune abnormality in many autistic children consistent with impaired resistance to infection, activation of inflammatory response, and autoimmunity. Impaired resistance may predispose to vaccine injury in autism.

Naturally, the second article gets cited by antivaxxers, all the more so after the Wakefield paper was finally withdrawn.

“Predatory” Never Sleeps

“Predatory” journals involve all sorts of ethical issues: the ethics of badly-supported bad-journal lists (and damning a journal for one article or a publisher for one journal), the ethics of stings to “expose” bad journals—and, of course, the ethics of falsely claiming to be peer-reviewed or hiding article charges or claiming to have editorial board members who never agreed to be on the editorial board. Here are a few more items in this broad category—two overlooked in previous roundups, the rest fairly recent.

Even top economists publish in predatory journals, study finds

I’m not sure how I missed this October 27, 2016 story in Retraction Watch, especially since I appear in the comment stream. It reports on another silly “study” that (a) assumes that appearing on Beall’s Lists actually means a journal or publisher is predatory, (b) looks at authors of articles in those journals, (c) draws conclusions. In this case, the conclusions are that—well, see the title. Specifically, 27 of the “most eminent economists (within the top 5% of their field)” published nearly 5% of their articles in Beall-listed journals. Or, rather, those are the findings based on evaluating 39 Beall-listed journals that appear in the Research Papers in Economics database (RePEc). I’m not sure how you qualify to be among the most eminent 5% of economists; perhaps it’s as objective as qualifying to be a Beall-listed journal?

There is, of course, more in the article (including the apparently mandatory quote from Beall), and 22 comments—which I find interesting because a string of comments on why the lists are questionable or useless, sometimes with specifics, is followed by a disagreement that boils down to “Beall says they’re bad, and he must know.” My own take on the findings of this study may best be summed up by repeating a paragraph from the October 2018 issue:

Thousands of scholars in many specific fields, most with doctorates and many faculty in prestigious institutions, have used their own professional judgment to submit articles to journals that one (now-retired) anti-OA librarian considers unworthy, usually without providing evidence.

Which, to me, says a lot more about the lists than it does about the scholars.

Fake Medical Journals Are Spreading, And They Are Filled With Bad Science

This article, by Steven Salzberg on January 3, 2017 at Forbes, may be about “predatory” journals of a different sort, but not about OA. Instead, it’s about journals that cover forms of treatment that Salzberg (a professor at Johns Hopkins) regards as pseudoscience—specifically acupuncture. (In a way, it’s odd that he didn’t choose homeopathy, which seems to me to be much more soundly established as worthless than acupuncture—and, notably, just as Elsevier publishes one of the three acupuncture journals discussed, it publishes at least one homeopathy journal.)

It’s an interesting article, beginning with the Journal of Magical Medicine as a mythical example:

Imagine for a moment that a publisher created a Journal of Magical Medicine under the rubric of a large, respectable publisher–for example, Elsevier. After assembling an editorial board of academics from legitimate universities who believed in magical medicine, the journal started soliciting and reviewing papers. The editors would send any submissions for review to other academics who believed in magic, and over time a steady stream of peer-reviewed papers would emerge. A large network of magical medicine “experts” would develop, reviewing and citing each others’ papers, and occasionally writing review articles about the state of the art in magical medicine.

Dr. Harriet Hall has accurately called this “tooth fairy science.” I prefer to call it “cargo cult science,” because its practitioners mimic the behavior of real scientists without actually understanding science itself. Of course, no self-respecting scientist would want to waste his or her time reading garbage, so most scientists would simply ignore the Journal of Magical Medicine. But it would persist anyway.

He turns to three actual examples, one each from Elsevier, BMJ and Springer.
He quotes a professor who asserts that these journals—or alternative medicine journals in general—invite authors to suggest reviewers, “who subsequently are almost invariably appointed to do the job… As a result, most (I estimate around 80%) of the articles that currently get published on alternative medicine are useless rubbish.”

Salzberg has little patience with acupuncture claims:

I’ve been advised that I should treat acupuncturists and acupuncture believers with respect, and that I should accept that they have some legitimate claims because some patients like them. I’ve tried this approach, and it doesn’t work: acupuncture continues to spread, in part because of very badly done studies that often misrepresent their findings. I’ve read

enough acupuncture studies for one lifetime, and acupuncture doesn’t work for anything. It’s nothing more than an elaborate, theatrical placebo, performed by quacks to convince unwitting patients that the acupuncturists have something real to offer. They don’t.

Trying to refute the claims of acupuncture (see here and here, and again here, for example) is like playing whack-a-mole. Proponents claim it cures dozens (or hundreds) of different conditions, and each time a claim falls flat, they simply make up a new one. Acupuncture also carries small but real risks to patients, who can suffer infections and sometimes much worse, such as punctured lungs.

I guess the ethical issue here is this: If Elsevier and BMJ and Springer believe that acupuncture is fake medicine, is it ethical for them to publish these journals?

Predatory publishers threaten to consume public research funds and undermine national academic systems—the case of Brazil

This report on a study, by Marcelo S. Perlin, Takeyoshi Imasato and Denis Borenstein, appeared on September 6, 2018 at the LSE Impact Blog. Here’s the abstract:

An unintended consequence of the open access movement, predatory publishers have appeared in many countries, offering authors a quick and easy route to publication in exchange for a fee and usually without any apparent peer review or quality control. Using a large database of publications, Marcelo S. Perlin, Takeyoshi Imasato and Denis Borenstein analyse the extent of this problem throughout the entire Brazilian academic system. While predatory publications remain a small proportion of the overall literature, this proportion has grown exponentially in recent years, with both early-career and established scholars found to have authored papers published in predatory venues. The inclusion of predatory publications in national journal quality rankings has been a key factor in this increase.

Difficulties appear right in the first paragraph:

The maxim “publish or perish” is more relevant than ever, now in evidence all over the world. As a consequence, academic publishing is booming, with demand to publish in scientific journals having increased exponentially in recent years. This prompted the launch of a succession of new journals, a large number of which operate according to an open access (OA) model whereby the cost of publication is transferred from the reader to the author.

Um. This is about Brazil, right? Which had 1,059 active DOAJ-listed OA journals in 2017 (that is, journals that published articles in 2017), publishing 51,227 articles: Brazil is one of the most active OA countries. But consider the last clause above: “a large number of which operate according

to an open access (OA) model whereby the cost of publication is transferred from the reader to the author.”

Only 8% of the active Brazilian OA journals in 2017 charged APCs—the percentage is actually going down over time. Those journals were more active, but still only published 18% of the articles (the paid-article percentage is also declining over time). As for “recently,” what few APC-charging OA journals there are in Brazil mostly started in 2011 or before—and there are so few that trends don’t mean much. So the last sentence of the lead paragraph is not only erroneous in general (OA does not imply author-pays) but almost completely invalid for Brazil in particular.

The full study is, of course, in a subscription journal; I did not accept the kind offer to read it for a mere $39.95. The authors are quite proud of the “reliable, replicable, statistical-based methods” used to look at some 2.3 million articles in a Brazilian information system. How did they arrive at the list of predatory journals? Need you ask? They did offer three levels of “predatory identification,” with Level 2 for Beall-listed journals that aren’t in DOAJ and Level 3 for the subset of those journals that lack impact factors. And here’s the graph showing how huge the problem is: for 2014, the last year studied, Level 3 journals accounted for something like 1.05% of all 2014 articles. But it’s a scary, scary graph, because the percentage in 2011 was down around 0.3%.

When looking at the profiles of the researchers publishing in these venues, the results were striking. Contrary to our initial expectations, those to have published significantly in predatory venues are experienced scholars, many years into their careers, and with many previous publications. The idea that young researchers, vulnerable due to their inexperience, are the victims of predatory publishers is simply not corroborated by the data.
We cannot, however, attest to whether or not the researchers were fully aware of the practices of these journals at the time of submitting their work. Most concerning about these results is that funding to pay the publishing fees of predatory journals may come from research grants awarded by governmental agencies; part of a vicious circle in which experienced researchers increase their number of publications in order to become more competitive when applying for grants, and subsequently use the funds obtained to publish more papers in predatory journals. [Emphasis added.]

As always seems to be the case, they fail to even consider the possibility that these experienced scholars in many disciplines might know more about the journals than one now-retired librarian. See the boxed paragraph earlier in this roundup.

Brazil has a journal ranking database, Qualis—and, not surprisingly, when Beall-listed journals are in the database they attract more articles.

Since the authors never scare-quote “predatory” and only once acknowledge the possibility that the lists may be less than perfect, it’s not surprising that their conclusion is dire:

The message from our research is clear: predatory journals are not yet undermining the academic system of Brazil, but may do so in the future. As we can see in Figure 1, the proportion of the research literature made up of predatory journals is increasing at an alarming rate. We provide strong evidence to suggest Qualis is a key factor in why we see such an increase. If not identified and combatted, predatory publishers may consume important research funds at the expense of the scientific endeavour.

Sad as I find the article, I’m a bit encouraged by one of the two comments, from Williams Nwagwu, who says in part:

Doesn’t your finding that experienced scholars, many years into their careers, and with many previous publications publish in the so called predatory journals caution you about using the language? Are you implying that these class of authors are ignorant about meaning of journal quality? This submission simply does not address any concrete concern except probably entrenched anger against the growth of science in the developing areas.

Are Creative Commons Licenses Overly Permissive? The Case of a Predatory Publisher

This brief article by Phil Clapham appeared August 31, 2018 in BioScience; it’s public domain (and freely available) because Clapham is a US Government employee. That means I could legally use the piece in whole or in part, including commercial republication, without even citing Clapham—but to do so would be unethical. Here’s the lead:

Recently, a publisher named Syrawood put out a book entitled Integrated Study of Marine Mammals (Bullock 2017). The book is not cheap: It is priced at $150 on the publisher’s website (although it is available at lower cost from Amazon.com and other online sources). This volume is supposedly edited by a woman named Suzy Bullock, whose suspiciously vague biography states that she “serves as a senior professor … in a public research university in New Zealand.” The recipient of “several prestigious awards,” Bullock is supposedly on the editorial boards of “leading international scientific journals” and has published “numerous articles, papers, and books.” Despite this impressive résumé, there is no trace of this person on the Internet.

The book consists of 20 articles taken from PLOS One and brief preface and “permissions” sections by the supposed editor. However, the original citations have been stripped from the first page and footers of every article.

PLOS One uses CC-BY, so—were it not for that last sentence—there would be nothing illegal or unethical (in my opinion) about the book: CC-BY means others can use your work for profit. But failure to attribute properly means this book is in violation of the CC-BY license and, thus, the copyright in the articles; it also makes the publication unethical. None of which has anything to do with the CC-BY license itself: the book violates the license.

Clapham tells us about the person who seems to be Syrawood and several other publishing houses engaged in the same process. He sees two issues:

The two central issues here are whether the publishers are actually in breach of the Creative Commons license, and whether these licenses are too permissive to begin with.

I’m honestly not sure where the second issue comes from, except that Clapham doesn’t like CC BY as it stands. The publishers are clearly in violation of the license. Somehow Clapham manages to work up from this to an attack on “.” The closing paragraphs:

Predatory publishing is undoubtedly on the rise. We all receive endless emails from spurious journals attempting to persuade us to submit articles on a pay-to-publish basis. Tests of the integrity of these institutions have revealed often comically low standards that have resulted in dogs being accepted onto editorial boards (my own dog is on one such board, and a friend’s is currently on seven) or pet-authored papers on ridiculous topics being published with no scrutiny (e.g., Wünderlandt et al. 2018). Repackaging original material without proper attribution is just the latest tactic within this unethical and parasitic field.

The great majority of scientists likely support open access and the wide distribution of information; indeed, it is very much in our interest to have our work disseminated as broadly as possible. However, a license that permits the abuse of that work for commercial or other purposes serves no one well.
I suggest it is time for open access journals to reassess and perhaps tighten the legal framework under which they publish.

I won’t get into the ethics of adding your dog to an editorial board; I guess that’s just a harmless amusement. I’m not sure how republishing with attribution constitutes “abuse” of the work—but what do I know?

The illicit and illegitimate continued use of Jeffrey Beall's "predatory" open access black lists This article by Jaime A. Teixeira da Silva appeared March 27, 2018 in the Journal of Radical Librarianship. Here's the abstract: For several years, a US librarian, Jeffrey Beall, blogged about problems he perceived in open access (OA) journals and publishers. During that time,

many academics also felt that there were serious and legitimate issues with the scholarly nature of several OA journals and publishers. Beall rapidly gained popularity by recording his impressions on a personal blog, and created two controversial black lists of OA journals and publishers that he felt were unscholarly. Beall's black lists were well received by some, but also angered many who felt that they had been listed unfairly, or who were not entitled to a fair challenge to become delisted. Beall seemed determined to show that the numbers of "predatory" OA journals and publishers were increasing annually, and even began to advocate for the formal use of his black lists as policy, encouraging academics not to publish in those journals or publishers. Institutes were also encouraged to use Beall's black lists to prevent their academics from engaging in a free choice of publishing venue. That posture, antithetic to freedom of choice, may have harmed many academics and budding publishers. In mid-January of 2017, Beall shut down his blog, without warning. This was followed by considerable commotion among publishers, academics and their institutes that had relied on Beall's black lists for guidance. A post-publication peer review of Beall's black lists, Beall's advocacy, and the potential damage that they have caused, has only now begun. Reasons why these black lists are academically illegitimate, and arguments why their continued use is illicit, are provided. I won't go through the article itself. There's a section about DOAJ that relies on an article that I don't believe says what's implied (that is, that DOAJ actually used Beall's lists to purge journals—as I read the article referred to, it says nothing of the sort). I certainly agree that continued use of a discredited and abandoned set of lists as the basis for research (on anything other than what's on the list) is at least ethically questionable.

The "problem" of predatory publishing remains a relatively small one and should not be allowed to defame open access So say Tom Olijhoek and Jon Tennant in this September 25, 2018 piece at the LSE Impact Blog. And maybe that's almost enough to say, but here are some excerpts: Imagine you want to investigate the quality of restaurants. You know beforehand there are bad restaurants. So you set up your investigation by going to a number of bad restaurants of bad reputation. What do you find? You find that a number of restaurants are really bad, an inevitable conclusion. You even find that people of standing and reputation have visited these restaurants on occasion. Would the conclusion here be that all restaurants are bad? Several investigations of this kind have looked into the problem of "predatory" or "questionable" publishers, the most famous being the heavily criticised and deeply flawed "sting operation" by John Bohannon in Science magazine. In science speak, this is called doing an experiment without

an appropriate control group, usually sufficient for research to be desk rejected for being fundamentally flawed. The latest such investigation, led by an international group of journalists, revealed something already widely known: in a number of countries, a relatively small number of "fake" papers have been submitted to, and published by, relatively few known-to-be-questionable journals that engage in deceptive publishing practices. The investigation built on this existing knowledge, and found that many of the journals to have accepted these articles had also published authors of name and fame, something which had often been overlooked before. It was said that in Germany, the main example used in the investigation, more than 5,000 researchers had published in such predatory or questionable journals, and the investigation in the UK also yielded the names of 5,000 researchers. A report of the investigation (unfortunately only available to view if you sign up for a two-week trial) showed a figure of geographical distribution of predatory publishers, without any attribution. The figure was taken from a highly-cited article by Cenyu Shen and Bo-Christer Björk that was published in 2015, but without appropriate reference. The article quotes my "meticulous research" coming to a figure of 135,000 articles (in 2014) rather than the 420,000-article Shen/Björk estimate—but that quote links to my blog post offering a preliminary report. In Gray Open Access 2012-2016, the January 2017 Cites & Insights, I refined that to show cases where Beall had actually made a case for a journal being questionable, not just added it to the list without justification. The results? 29,947 articles: considerably less than one-tenth as many as in the Shen/Björk "study," and perhaps 1.4% of all scholarly articles for 2014.
These authors are also saying that some recent Beall-list-based studies may have other problems: Publications about parts of this investigation are still appearing, and the popular press, including TV and radio, has paid a lot of attention to this international collaboration. In many cases, however, it does not appear that the source data or methods were widely shared with these media outlets, and at present they are not public. Indeed, one journalist involved, when asked for the data supporting this media campaign to be shared, responded that the data could not be shared for legal reasons, despite also stating that the information is otherwise widely available online through web-scraping techniques. It seems strange that journalists appear not to want any form of independent verification of their work, given this is exactly one of the issues they are challenging within the scientific enterprise. It is utterly incomprehensible that scientists accept this kind of investigation as sound. The methods appear opaque and flawed, at least partly plagiarised, the data are inaccessible and unverifiable, and often reported on without independent journalistic scrutiny; all things we

expect of any rigorous, research-based investigation, and especially one to gain international media attention of this scale. There's more, but that's all I'll comment on.

Format Aside: Applying Beall's Criteria to Assess the Predatory Nature of Both OA and Non-OA Library and Information Science Journals This interesting study by Joseph D. Olivarez, Stephen Bales, Laura Sare and Wyoma vanDuinkerken appears in College & Research Libraries 79:1 (2018). C&RL is a no-fee gold OA journal. The abstract: Jeffrey Beall's blog listing of potential predatory journals and publishers, as well as his Criteria for Determining Predatory Open-Access (OA) Publishers are often looked at as tools to help researchers avoid publishing in predatory journals. While these Criteria has brought a greater awareness of OA predatory journals, these tools alone should not be used as the only source in determining the quality of a scholarly journal. Employing a three-person independent judgment making panel, this study demonstrates the subjective nature of Beall's Criteria by applying his Criteria to both OA and non-OA Library and Information Science journals (LIS), to demonstrate that traditional peer-reviewed journals could be considered predatory. Many of these LIS journals are considered as top-tier publications in the field and used when evaluating researcher's publication history for promotion and tenure. This is not a study of Beall's list: it's a consideration of the criteria he claimed to use in adding to the lists: The authors of this interpretive analysis (hereafter referred to as the "analysts") believe that users need to be cautioned about the subjective nature of the Criteria. They recommend discretion because the criteria used are so general that an evaluator might label a scholarly journal as "predatory" when it is not or may disregard the article as being without merit solely because it is published in a journal found on Beall's List. Promotion and tenure committees may even look to this list when reviewing tenure candidate publications, thereby influencing future research.
Although Beall noted that such committees need to "decide for themselves how importantly or not to rate articles published in these journals in the context of their own institutional standards and/or geocultural locus," there is the potential to misconstrue and misapply the Criteria in terms of its objectivity. Indiscriminate application could affect promotion and tenure decisions, especially those decisions concerning faculty members in emerging fields of scholarship where needs for publishing outlets are met with OA journals. In this study, the analysts argue that the individual elements of Beall's Criteria are so general in design that evaluators might broadly apply

them as measures of illegitimacy to any refereed academic journal, regardless of publishing model. By applying Beall's Criteria to a list of both OA and non-OA well-regarded refereed academic LIS journals, this study demonstrates the subjective nature of his Criteria. While the Criteria may serve as an aid to researchers' selecting publication outlets, this Criteria is presently not adequate for making such determinations because its elements focus more on the publisher and/or journal attributes rather than the quality of articles published by the outlet. This is a carefully-done, thoroughly-documented study, looking at 87 journals in library and information science listed in Journal Citation Reports—seven of them being OA. Six journals were removed for various legitimate reasons. Of the remaining 81—notably including 74 subscription journals—45 failed at least one of Beall's criteria and 18 failed more than one. Maybe that's all that needs to be said (unless you believe that most subscription LIS journals with impact factors are "predatory")—except that, even if the criteria were objective, Beall rarely bothered to note which criteria any given journal or publisher failed. Here's the last paragraph of the conclusions: This study shows the subjective nature of Beall's Criteria, as well as the subjective nature of the lists created from the application of the Criteria. Such criteria provide a starting point for a discussion on predatory aspects of academic publishing. Nevertheless, as librarians, our duty is to refrain from offering up these lists as the final word on predatory journals. Rather, it is our responsibility to (1) use such lists and criteria as tools for teaching faculty to be proactive about evaluating what journals to publish in and (2) to ensure that newer journals, which are often OA, are not disqualified unfairly from consideration as part of quality scholarly output.
This latter point is especially important if promotion and tenure committees use Beall's Criteria without considering the subjective nature of their application, or without also including supplemental evaluative measures of journal quality. The authors of this paper concur with Berger and Cirasella that librarians "are key stakeholders in scholarly and professional conversations reimagining various aspects of scholarly communication," and the present study emphasizes this role in the area of predatory publishing.

'Journalologists' use scientific methods to study academic publishing. Is their work improving science? I'm including this piece by Jennifer Couzin-Frankel, which appeared September 19, 2018 at Science, because I've previously been bemused that "journalology" seems to be housed in medical centers, not as part of journalism or publishing. If you're wondering the same thing, you may find this article enlightening. For sure, the original "journalologist," Drummond Rennie, doesn't

mince words about one of the top medical journals, at least as it was in the late 1970s: He wasn't impressed by what he found. JAMA "was in an utter shambles. It was a laughing stock," he recalls. "Papers were ridiculous. They were anecdotes following anecdotes." And he felt it wasn't just JAMA—it was much of medical publishing. Some trials read like drug company ads. Patients who dropped out or suffered side effects from a drug were not always counted. Other papers contained blatant errors. Yet, all of these studies had passed peer review, Rennie says. "Every one had been blessed by the journal." Whew. Read the article. Despite its casual use of "predatory" without scare quotes, it's an interesting read—and I now see why "journalology" could be considered an odd part of medicine. Open Access A miscellany of ethics-related and OA-related items that aren't about stings, hoaxes and "predatory" nonsense.

This Rant is for Social Scientists Where better to begin than with one of my favorite librarians and writers, Barbara Fister, here in her "Library Babel Fish" blog on September 29, 2016 at Inside Higher Ed. Fister begins with praise for Evicted: Poverty and Profit in the American City by Matthew Desmond. I won't quote the praise, but part of it is for Desmond's ability to be both scholarly and journalistic, to present findings in an accessible and interesting manner. As Fister says, it's not easy, and… Not only is it not easy, but we're schooled to write in an inaccessible style, as if our ideas are somehow better if written in a hard-to-decipher script that only the elite can decode because if people who haven't been schooled that way can understand it, it's somehow base and common, not valuable enough. If you're able to read this message, welcome! You're one of us. The rest of you are not among the elite, so go away. Even worse, we think our hazing rituals around publication and validation are more important than the subjects of our research, who couldn't afford to read it even if we chose to write in a manner that didn't require an expensive decoder ring with a university seal on it. We say "it's for tenure" or "that's the best journal" and think that's reason enough to make it impossible for people without money or connections to read it. Here comes the ethics part: I don't know how else to put this: it's immoral to study poor people and publish the results of that study in a journal run by a for-profit company that charges more for your article than what the household you studied

has to buy food this week. I cannot think of any valid excuse for publishing social research this way. So as not to quote the whole thing, I'll note that she suggests ways to make your research accessible to everybody, admits that using them may be a little extra work, and then says: But if you actually think your research matters, if you think research could make people's lives better, if you use the phrase "social justice" when you describe your work, you should take that time. It's unethical not to. Yep.

Hindawi's Decision to Leave the STM Association This is by Paul Peters, posted February 3, 2017 on Hindawi's blog. In case you aren't aware of Hindawi, it's one of the largest OA publishers: in 2017, it had 515 DOAJ-listed journals with 16,766 articles and a weighted average fee of $1,550—notably, that 2017 figure represents a considerable decline from previous years: 18,457 in 2016; 21,709 in 2015; 28,591 in 2013 and 24,672 in 2012. Hindawi is headquartered in Egypt. In any case, it's a big operation and it's been around for a while. And in early 2017 it made a decision: Just as the benefits of Open Access are apparent, so are the challenges posed by the transition to this new publishing paradigm. Re-channeling funds currently spent on subscriptions into open access, preserving the financial sustainability of learned and professional societies, and ensuring the transparency and rigor of the scholarly publishing process are but a few of the major obstacles. These barriers cannot be overcome by any individual publisher on its own. They require publishers to coordinate with each other, as well as with the broader ecosystem in which they operate, which includes researchers, funders, universities, and policymakers. It is in coordinating these sorts of efforts that trade associations can have the greatest impact, addressing the systemic challenges standing in the way of the industry's progress. Unfortunately, trade associations do not always embrace this role as facilitators of change, as they get trapped defending legacy models on which their members have long depended. While this behavior is not malicious, it is nevertheless detrimental to the health of the industry. Efforts to resist change result in confrontational rather than cooperative relationships between the players in the system.
After ten years of membership in the International Association of STM Publishers, Hindawi has made the difficult decision to terminate its membership in the association. This decision has come as a result of STM's overwhelming focus on protecting business models of the past, rather than

facilitating new models that Hindawi believes are both inevitable and necessary in order for scholarly publishers to continue contributing towards the dissemination of scholarly research in the years to come. Hindawi will continue engaging with the STM Association in the hopes that it can embrace the challenge of tackling the obstacles that stand in the way of a transition to Open Access. If and when that happens, Hindawi will happily seek to reestablish its membership in the association. Until then we no longer believe that we can continue our membership in good conscience. I'm not sure additional comments are required, unless there were hidden agendas at play. Paul Peters, CEO of Hindawi, was formerly on the STM Association's Board of Directors; when this item was posted, he was president of the Open Access Scholarly Publishers Association, OASPA. I may have—I do have—thoughts about the ethics of charging an average of over $1,500 per article, but that's a whole 'nother essay.

Paywall Watch Here, I'm pointing out the blog itself rather than any specific post. The mission is "reporting wrongly paywalled articles"—that is, articles for which an APC has been paid (or which should otherwise be OA, e.g. articles by US government employees) but which are paywalled. When this happens, it's always an "error" and eventually gets corrected—but it keeps happening. As I write this, the most recent item is an August 23, 2018 case of an article in an Elsevier journal. An August 20, 2018 item—an article in a different Elsevier "hybrid" journal—says the issue's been brought to Elsevier's attention three times with no resolution. While the article mentioned on August 23 shows up as properly OA on October 17, 2018, the August 20 paper is still paywalled as of October 17. It's not just Elsevier, of course. Another post references Wiley-published articles—and in the first case I checked, it seemed to show an unlocked OA symbol but required login/registration for access. That's not OA. An earlier Wolters Kluwer article seems to show up not with true OA but with a "read on publisher's site" link—but that may just be clumsy implementation, since it does appear that a PDF is available. There were several cases at Oxford University Press. Sure, most (but not all) articles I checked are now properly OA—but these are not small-time under-funded publishers who don't really understand how to post articles online. They're big, successful publishers charging big fees to make articles OA; I see no ethical excuse for even one day's delay in carrying out the agreement the author paid for. The blog is worth checking from time to time—and, of course, let them know if you make the mistake of publishing in a "hybrid" journal, paying the fee, then find that readers will be asked to pay again.

Bwahahah… There's an article entitled "The moral economy of open access." To quote from the abstract: This article argues that the movement towards OA rests on a relatively stable moral episteme that positions different actors involved in the economy of OA (authors, publishers, the general public), and most importantly, knowledge itself. The analysis disentangles the ontological and moral side of these claims, showing how OA changes the meaning of knowledge from a good in the economic, to good in the moral sense. This means OA can be theorized as the moral economy of digital knowledge production. Guess why I'm not naming the authors or linking to it or using the article title as a subheading. Go ahead, guess. You got it: "One day online access to download article for $36.00." Bwahahah…

A new journal in combinatorics In this June 4, 2018 post at Gowers's Weblog, Timothy Gowers announces a new journal and specifically calls it an "ethical journal": that is, one that charges neither readers nor authors. The new journal is an overlay journal on arXiv; the minimal costs are being covered by the library at Queen's University in Ontario. The post is worth reading, not only if you're interested in the journal but also for the surrounding discussion. For example: The reason for setting it up is that there is a gap in the market for an "ethical" combinatorics journal at that level — that is, one that is not published by one of the major commercial publishers, with all the well known problems that result. We are not trying to destroy the commercial combinatorial journals, but merely to give people the option of avoiding them if they would prefer to submit to a journal that is not complicit in a system that uses its monopoly power to ruthlessly squeeze library budgets. There's a lot more and it's both well-stated and useful. Pay particular attention to his discussion of APCs. Various Issues That's a nicer way to put it than "leftovers" or "miscellaneous garbage"—and what's here isn't garbage, but didn't seem to cluster neatly elsewhere. Maybe that's why some of these are very old.

Wikileaks: Where the Hole is Big Enough to Drive a Truck Through Ryan Deschamps posted this on November 30, 2010 at Punctuated Equilibrium—and it's worth citing because Deschamps saw Wikileaks for what it

was, back when too many folks were touting it as a wonderful thing. Well, maybe he didn't spot it as a Russian attempt to sow chaos in the U.S. and help elect incredibly unqualified and dangerous candidates, but he did use "ethics" as the first keyword to describe the post and recognize that Wikileaks was unethical from the start. The first two paragraphs: When I first heard about Wikileaks, I felt that possibly they were providing a much needed 'heads up' to the public on important International concerns such as the wars in Iraq and Afghanistan. When I heard about the recent cable releases, I thought they caught the United States in some particularly heinous territory with their International Policy — something that represented a serious shift from the norms of behavior that the country's citizens would expect from the people who represent them abroad. Instead, it's just a leak of cables. Stories of Omar Khadaffi oogling voluptuous Ukranian blonds. CSIS members complaining about lawyers. Frank opinions about Russian dignitaries. All great stuff to sell newspapers and boost the ego of the 'leakers' but nothing representing an international emergency. Given this lack of urgency, it is my opinion that Wikileaks did the wrong thing when they leaked this information. There is no ethical standard that I can apply that justifies their actions here. Let's go over some of the tests. The remainder of the post offers a good discussion of when otherwise-unethical actions might be justified, and concludes that Wikileaks doesn't meet the criteria. He concludes: In short, there is no real ethical justification in my mind for leaking these documents to the public, only a half-baked and obnoxious internet ideology. It was a wrong-minded action and it should be punished in my view.
Fortunately for the people involved — people who are by no means the vulnerable people John Rawls wanted us to consider — they will be punished in a country that believes in ethical treatment of their citizens and fair trials. For shame. When you link to this essay—which you should—you may note that the scroll bar suggests you've only read about one-sixth of what's there. That's right: there are 18 comments, and the comments and responses are considerably longer than the post itself. I won't attempt to summarize the comments, but can't avoid quoting the start of the final one: Assange is a journalist. Someone gave him information. He published it. That's what they do.

Assange was a computer programmer turned provocateur. If the deliberate pacing of the probably-Russian-supplied HRC leaks during the 2016 campaign isn't convincing enough, I don't know what would be.

You Say It's Your Birthday This article by Paul Collins appeared July 11, 2011 in Slate. The tease: "Does the infamous 'Happy Birthday to You' copyright hold up to scrutiny?" Collins begins with a lovely anecdote about musical copyright and ethics, unless you prefer "copyfraud" as a term: Take pity on Florida musician Bobby Kent: He's a man trying to make a buck in the wrong era. In April, Kent filed a lawsuit over sports teams using that immortal fanfare: "Da-da-da-DA-da-DAA … Charge!" As old as it sounds, Kent claimed he wrote it in 1978 while serving as the musical director for the San Diego Chargers—and he had a 1980 copyright filing to back it up. Kent's ploy almost worked: But after his claim got picked up by the media, it suffered a withering assault from every corner of the Internet. NPR listeners recalled hearing the fanfare in 1960s episodes of The Flintstones—a memory quickly confirmed through YouTube clips. Wikipedia entries since 2007 attributed the song to 1940s University of Southern California composer Tommy Walker—a contention bolstered by a link to Sports Illustrated's online archive, which featured Walker's own story of the tune's composition with co-writer Dick Winslow. Rooting through the Los Angeles Times on Google News Archive also reveals account after account from the 1950s of the song, not to mention a 1960 complaint by Walker that people were ripping him off. Finally, from USC itself, and posted on Scribd for all to see, came the final crushing blow: copyrighted 1955 sheet music for "Trojan Warriors, Charge!" The only cavalry fanfare Bobby Kent should play, it seems, is "Retreat." Then we get to the heart of it: the fact that online resources are making it easier to attack copyfraud, and the status of "Happy Birthday to You," for which Warner Music apparently collected more than $2 million a year in fees.
The basic melody was written in 1893 and is in the public domain (the song was "Good Morning to All")—but, if you believe Warner, splitting the first note so that "good" could become "happy" made the "new" melody copyrightable. There follows some interesting historical research, much of which suggests that the newer copyright shouldn't be valid. For example, sheet music for the "Happy Birthday" version dating back to 1915—which would put the song in the public domain. (Both early PDFs show the lyrics for "Good morning…" but the tune's identification explicitly also says

"Happy Birthday to You.") Collins manages to trace the song in its "Birthday" form back to at least 1900. Postscript: The story continued after 2011—and, after some lawsuits, Warner/Chappell, the new name, accepted a final court judgment that the song was in the public domain, and paid $14 million to cover invalid license payments over several years.

Some things never change, and still don't work Moving from music copyfraud to libraries and publisher ethics, here's an excellent post by Jenica Rogers on May 8, 2013 at Attempting Elegance. On Monday I got a call from a publisher asking me to check on the renewal status of several periodicals. This is an old tactic; we don't work directly with publishers, we work with a subscription agent, and when we cancel, the publisher often calls the library asking if we'll please go check to see if we really truly meant to cancel that because surely we meant to renew? We never meant to renew. But it's a shaming tactic, and one that relies on librarians to be the kind of eager-to-please business "partner" who says, "Oh, dear, that must have been an accident." I'm not that librarian. Also, I'm the Director, not Collection Development Coordinator, at least update your records before you call… So then Rogers gets email referring to the call and sending her the list of cancelled titles—and implying that Rogers asked for such a list. Her response is clear and perhaps more gracious than the email deserved, including this close: Generally speaking, I have always viewed calls from publisher sales staff asking about the status of a subscription as cold calls in which you are attempting to "encourage" me to renew a subscription we have cancelled. I see no reason to view this call differently, and would appreciate it if you never call me without details of a legitimate financial concern again. The person sending the email apologized and said she worked for an outsourced call center, and could only promise that the call center wouldn't bug her again, not that the publisher wouldn't. If you still think that by and large the publishers are our partners, and that they have anything but their own best financial interests in mind, please think again. They are not. They are not our partners, and they are not acting in the best interests of library users.
They are vendors with whom we have a business relationship based on money. In this case, just one more example of that, a publisher is paying an external company to make guilt- and confusion-based sales calls to libraries in

an attempt to overturn our collections decisions. If this was about internal bookkeeping of subscriptions and sales, the call to "clean up" the records would come from in-house. That's not what's happening: this is not an internal control or customer-relations exercise. This is sales, and it's dirty sales, too, based in an assumption that we will question our cancellation decision when asked about it directly. No. I won't. You shouldn't either. Don't honor these calls. Don't listen to them. Don't spend your time following up on a sales pitch you didn't ask for, and which directly contravenes your reasoned and rational decisions about your subscriptions and collections. Don't play their game. Don't let them set the terms. No further comment needed.

Why I Started Paying for Music, Movies, Newspapers and Magazines Again This one's a little strange, especially since I'm not sure the ethical lesson I would apply is the one Paul Cantor intends in his September 20, 2013 piece at Medium. It seems to be a Damascene situation, with Cantor bragging about essentially stealing hundreds of CDs while he was in high school and college, and calling himself a "sensible person" for moving to BitTorrent later for his entertainment needs. Oh, and of course he found a way to break the paywall. His conversion to being an ethical person who pays for his entertainment seems to have deep spiritual undertones: Earlier this year, the New York Times tech team—who must have been asleep at the wheel this whole time, or been hired away by Buzzfeed—sewed up the hole in the paywall. This great newspaper that invests hundreds of thousands of dollars in original reporting and creating great content that I consume while sitting on my ass and doing nothing productive, I could no longer read for free. Fuck! It took me all of 10 minutes to sign up for a subscription. And he's gotten accustomed to spending very small sums of money for other forms of entertainment and enlightenment as well, and thinks others should. He also assumes everybody else's morals are the same as his are or were: "When you know you can get access to something for free or at a discount, chances are you'll go for that option, even if you know that on a very basic moral level it's wrong." As far as I can tell, he's saying that we're all basically thieves, certainly including himself, but with legal entertainment so cheap, you really should do the right thing. The close, after noting that a month's worth of Netflix or Spotify or whatever probably costs less than a fancy drink:

You probably won’t remember that drink. It’s one of many you’ll most likely drink during your lifetime. But a great piece of music or a movie or some breaking news—those things can be life-altering. The people who produce these products deserve to be paid. I’m finally down with forking over my money to support them. Because it’s cheap and convenient and really, why not?

So maybe it’s not quite Saul-become-Paul, but these are different times.

The “phantom reference:” How a made-up article got almost 400 citations

Here’s an odd one, posted on Retraction Watch on November 14, 2017.

Here’s a mystery: How did a nonexistent paper rack up hundreds of citations? The paper: “The art of writing a scientific article.” The journal: Journal of Science Communications. It’s a bit hard to locate, despite being cited almost 400 times—because it doesn’t exist.

It was created to illustrate Elsevier’s citation format (which, like most citation formats at the time, was appalling: heavy abbreviation made sense when every character was at more of a premium than comprehension):

Van der Geer, J., Hanraads, J.A.J., Lupton, R.A., 2010. The art of writing a scientific article. J Sci. Commun. 163 (2) 51-59.

The citation appeared in Elsevier’s Guide for Authors—and it’s still there, but the citation style has improved a lot:

Van der Geer, J., Hanraads, J. A. J., & Lupton, R. A. (2010). The art of writing a scientific article. Journal of Scientific Communications, 163, 51–59. https://doi.org/10.1016/j.Sc.2010.00372

As a grumpy old reader, I’d still prefer to see full author names, but at least this is an improvement. (Don’t bother following the DOI: it yields a “DOI Not Found” page.)

All of which is a bit irrelevant. How could a nonexistent paper be cited 400 times? Anne-Wil Harzing investigated:

Harzing found that nearly 90% of the citations were for conference proceedings papers, and nearly two-thirds of these appeared in Procedia conference volumes, which are published by Elsevier. When examining some of the papers more closely, Harzing found “most citations to the phantom reference occurred in fairly low-quality conference papers,” and were written by authors with poor English. She said she suspects that some authors may not have understood that they were supposed to replace the template text with their own or may have mistakenly left in the Van der Geer reference while using the template to write their paper. There may be minimal quality control for these conference papers, says Harzing; still, she found that the phantom reference did appear in about 40 papers from established journals.

So there it is. Oddly, Harzing doesn’t find it troubling:

Although 400 citations sounds significant, Harzing put the number in context: Out of nearly 85,000 Procedia conference papers, the phantom reference appeared in less than 0.5% of articles: “Whilst unfortunate, one might consider this to be an acceptable ‘margin of error’.”

One of the 29 comments states what I’d consider the ethical issue succinctly: “The best way to avoid phantom references is to cite only papers you actually read.” But when another calls it “dereliction of duty,” there’s a response that essentially says that, with team scholarship, nobody can expect that any “author” is actually responsible for the words in the article. Another says, in essence, that it’s a waste of time to get “every tiny detail right” in a paper.

Another comment, from a nursing journal editor, is fairly damning in an entirely different way:

As an editor for a major nursing journal, I edited dozens of research articles and reviews and checked every source. I have found countless instances of erroneous or blatantly wrong citations. Researchers often get their own math wrong. Since most medical and nursing journals don’t edit, I can only imagine the extent of error—whether purposeful deceit or more innocent errors—in the literature. [Emphasis added.]

So the high subscription prices and APCs don’t pay for copy-editing?

Lessons from the Facebook Fiasco

Back to library ethics to close this section, with Barbara Fister’s April 15, 2018 “Library Babel Fish” post at Inside Higher Ed. The tease: Apparently libraries may be learning all the wrong ones.

If you’re a librarian, read this essay. If you care about libraries, read this essay. If you’ve already read it, what the heck, read it again. Maybe that’s all I need to say. The hook is a new subsystem for integrated library systems that can “enhance” library services by keeping track of who’s read what. Stop right there. If you think that’s a great idea, you’re wrong. Go read Fister’s essay.

Librarians like to think privacy is an essential feature of freedom of inquiry, and keeping track of what you’ve read in the past once you’ve returned a book is none of our business, even if it makes us seem less convenient or user-friendly than all of the other systems that remember things for us. It’s too bad that we can’t tell you whether you’ve read John Sandford’s Extreme Prey already. (It’s understandably hard to keep track when all 26 books in the series have the word “prey” in the title.) That inconvenience is outweighed by the possibility that your reading history could be used to hurt you or another of our library users. Reading should never be used as evidence against you – and the freedom to read widely shouldn’t be chilled by concerns that your reading records may be stored and used against your will. Think that can’t happen? Think again. Sure, we were scorned by officials after 9/11 who said “the government doesn’t care about what James Patterson novel you read” and “if you haven’t done anything wrong, you don’t have anything to hide.” Well, I’ve written articles about sexual violence and serial murder in crime fiction. Some of my reading records could look mighty suspicious if someone suspected I was up to no good. What if what you’ve been reading was obtained and published by someone who had a grudge and wanted to publicly provoke a reaction? Could get messy. Luckily, that can’t happen if those records aren’t kept.

That second quote, “If you…,” is one that instantly puts me on edge: it’s an excuse for the most intrusive forms of surveillance—and what’s “wrong” depends on who’s looking.

That said, a lot of librarians think library privacy is an outdated, slightly ridiculous obsession, given literally billions of people have signed up for social media platforms and are used to getting algorithmic recommendations. Aren’t libraries letting our patrons down if we don’t personalize our information offerings algorithmically? Won’t we seem useless and outdated?

Those librarians are wrong, and they’re violating ALA’s Code of Ethics. I’ve been there (when the FBI came around, in the early 1970s) and I’ve done that (seen to it that there were no past records of who read what), even though I’m not a librarian. (Old story: at the time, California didn’t yet have a library records privacy protection law, but UC Berkeley’s counsel said we weren’t obligated to keep past records…so we got rid of them. Immediately.
They would have been very difficult to plow through, but impossible is better than difficult.)

Libraries should not be storing patron reading records other than what’s currently out. Period. If patrons want the convenience of having their own records, they can easily store them themselves.

You might also read the comments. I think one commenter may be partly right in suggesting “lots of librarians” may be an overstatement—but I’ve encountered high-profile librarians who certainly act as though this was their belief.

Society Publishers and For-Profit Publishers

This is a new version of a little essay that was dropped from the October 2018 Cites & Insights for space reasons (I try very hard to have issues come out at pretty much exactly an even number of pages, the last page reasonably full). After not including it, I encountered online chatter that made me wish I had. So here goes, such as it is.

The question: Are society publishers inherently more ethical or “better” than for-profit publishers? Let’s put it another way: Should libraries and funding agencies be more willing to pay very high subscription prices or APCs (or both) for society journals than for commercial journals? I think the proper answer is No. Especially when the prices are substantially higher than the costs.

For a very long time, I’ve said—repeatedly—that it is unreasonable for societies to expect libraries to underwrite society activities, other than journal publishing, through journal-publishing revenues. Unless, of course, the society is in the library field—and since most peer-reviewed journals from the world’s largest library association are gold OA with no APCs, the exception may not hold. (These are mostly ALA divisional journals; before they were gold OA, they had absurdly inexpensive subscription prices at either personal or institutional rates.)

The counter-argument is that those library subsidies—and that’s what they are—pay for scholarships and other presumably worthwhile society activities. Of course, they also pay for staff salaries and help lower membership prices, or at least they can. If a scholarly society requires outside subsidies to stay in existence, maybe it’s outlived its usefulness. If it can make the case that universities should be subsidizing its activities, those subsidies should come directly from the departments involved—not from library budgets.

As to the comparative ethics of not-for-profit (“nonprofit” is such a tricky term…) and for-profit publishers charging similar prices? A different discussion, but my initial response would be the same.
Masthead Cites & Insights: Crawford at Large, Volume 18, Number 8, Whole # 217, ISSN 1534-0937, a periodical of libraries, policy, technology and media, is written and produced by Walt Crawford. Comments should be sent to [email protected]. Cites & Insights: Crawford at Large is copyright ©2018 by Walt Crawford: Some rights reserved. All original material in this work is licensed under a Creative Commons At- tribution 4.0 International License. URL: citesandinsights.info/civ18i8.pdf
