University of Nevada, Reno

Great is our Sin: Neuroprivilege in Modern Discourse

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in English

By

Lauren Jeanne DeGraffenreid

Dr. Lynda Olman, Dissertation Chair

December, 2019

Copyright by Lauren Jeanne DeGraffenreid 2019 All Rights Reserved

THE GRADUATE SCHOOL

We recommend that the dissertation prepared under our supervision by

Entitled

be accepted in partial fulfillment of the requirements for the degree of

, Advisor

, Committee Member

, Committee Member

, Committee Member

, Graduate School Representative

David W. Zeh, Ph.D., Dean, Graduate School


ABSTRACT

Disability rhetoric and neurorhetorics have taken significant strides in unpacking disabling language and discourse within modern autism studies. The rhetoric of science has developed multiple useful strategies for isolating the mechanisms by which these discursive paradigms operate. However, less attention has been devoted to the origin of theories suggesting that observed differences in autistic behavior necessarily indicate deficiencies in cognitive ability. Rigorous work demonstrating the propagation and dissemination of these concepts through time within disciplinary publishing is needed to expose how flawed ontological perspectives on neurotype can integrate themselves into neuroscientific practice. This work employs lexical and visual enthymeme analysis to explore how the driving, value-laden premises behind disablist language can become accepted as legitimate scientific 'facts' considered foundational within a discipline.


TABLE OF CONTENTS

I. Introduction
II. Methods
   a. Tools
      i. Lexical
      ii. Visual
   b. Literature
III. Analysis
   a. Kanner vs. Asperger
   b. …
   c. Executive Function Theory
   d. Discussion


LIST OF TABLES

I. Table 1: Sample Enthymemes
II. Table 2: Stasis Enquiry
III. Table 3: Stasis and Enthymeme Analysis for Morton
IV. Table 4: Stasis and Enthymeme Analysis for Asperger
V. Table 5: Stasis and Enthymeme Analysis for Baron-Cohen, Frith, and Leslie 1985
VI. Table 6: Visual Enthymeme Analysis for Baron-Cohen, Frith, and Leslie 1985
VII. Table 7: Stasis, enthymeme, and nomotic analysis for Baron-Cohen et al. 1995
VIII. Table 8: Visual enthymeme analysis for Baron-Cohen et al. 1995
IX. Table 9: Stasis, enthymeme, and nomotic analysis for Baron-Cohen and Lombardo 2011
X. Table 10: Stasis, enthymeme, and nomotic analysis for Stravopoulos and Carver 2014
XI. Table 11: Visual enthymeme analysis for Stravopoulos and Carver 2014
XII. Table 12: Stasis, enthymeme, and nomotic analysis for Fitzpatrick et al. 2017
XIII. Table 13: Visual enthymeme analysis for Fitzpatrick et al. 2017
XIV. Table 14: Stasis, enthymeme, and nomotic analysis for Just et al. 2007
XV. Table 15: Stasis, enthymeme, and nomotic analysis for Kleinhans et al. 2010
XVI. Table 16: Visual enthymeme analysis for Kleinhans et al. 2010
XVII. Table 17: Visual enthymeme analysis for Williams et al. 2010
XVIII. Table 18: Stasis, enthymeme, and nomotic analysis for Kiep and Spek 2017
XIX. Table 19: Visual enthymeme analysis for Kiep and Spek 2017


LIST OF FIGURES

I. Figure 1: Modified Toulmin Method
II. Figure 2: Procedure for determining nomotic inheritance
III. Figure 3: Procedure for analyzing visual enthymemes and extracting their premises
IV. Figure 4: Autism Diagnostic Trends. "Autism counts." K. Weintraub. Nature. November 2011.
V. Figure 5: Autistic Testing Battery Scores. "Anxiety and social deficits have distinct relationships with amygdala function in autism spectrum disorder." Herrington, John. Social Cognitive and Affective Neuroscience. June 2016.
VI. Figure 6: fMRI Neural Activation Regions. "Autism, the superior temporal sulcus and social perception." Zilbovicius et al. Trends in Neuroscience. July 2006.
VII. Figure 7: Study Design. Source: "Does the autistic child have a 'theory of mind'?" Simon Baron-Cohen, Ruth Campbell, Alan M. Leslie, Uta Frith. Cognition. 1985.
VIII. Figure 8: Study Design. Source: "Are Children with Autism Blind to the Mentalistic Significance of the Eyes?" Simon Baron-Cohen, Ruth Campbell, Annette Karmiloff-Smith, Julia Grant, Jane Walker. British Journal of Developmental Psychology. November 1995.
IX. Figure 9: Handicap Table. Source: "Are Children with Autism Blind to the Mentalistic Significance of the Eyes?" Simon Baron-Cohen, Ruth Campbell, Annette Karmiloff-Smith, Julia Grant, Jane Walker. British Journal of Developmental Psychology. November 1995.
X. Figure 10: fMRI false-color data visualization and explanatory caption. Source: "The Role of the Self in Mindblindness in Autism." Simon Baron-Cohen and Michael V. Lombardo. Consciousness and Cognition. March 2011.
XI. Figure 11: Predictive game design. Source: "Reward anticipation and processing of social versus nonsocial stimuli in children with and without autism spectrum disorders." Journal of Child Psychiatry and Psychology. May 2014.
XII. Figure 12: EEG waveform analysis. Source: "Reward anticipation and processing of social versus nonsocial stimuli in children with and without autism spectrum disorders." Journal of Child Psychiatry and Psychology. May 2014.
XIII. Figure 13: False-color EEG topographic analysis maps. Source: "Reward anticipation and processing of social versus nonsocial stimuli in children with and without autism spectrum disorders." Journal of Child Psychiatry and Psychology. May 2014.
XIV. Figure 14: Statistical analysis of autistic social and motor battery responses. Source: "Social Motor Synchronization: Insights for Understanding Social Behavior in Autism." Paula Fitzpatrick, Veronica Romero, Joseph L. Amaral, Amie Duncan, Holly Barnard and Michael J. Richardson. Journal of Autism and Developmental Disorders. July 2017.
XV. Figure 15: Activation data and visualizations. "Functional and Anatomical Cortical Underconnectivity in Autism: Evidence from an fMRI Study of an Executive Function Task and Corpus Callosum Morphometry." Marcel Adam Just, Vladimir L. Cherkassky, Timothy A. Keller, Rajesh K. Kana and Nancy J. Minshew. Cerebral Cortex. April 2007.
XVI. Figure 16: Activation data and visualizations. "Functional and Anatomical Cortical Underconnectivity in Autism: Evidence from an fMRI Study of an Executive Function Task and Corpus Callosum Morphometry." Marcel Adam Just, Vladimir L. Cherkassky, Timothy A. Keller, Rajesh K. Kana and Nancy J. Minshew. Cerebral Cortex. April 2007.
XVII. Figure 17: Corrected test scores. "Executive Functions in Autism and Asperger's Disorder: Flexibility, Fluency, and Inhibition." Natalia Kleinhans, Natacha Akshoomoff & Dean C. Delis. Developmental Neuropsychology. June 2010.
XVIII. Figure 18: Contrasting profiles among study subjects. "Executive Functions in Autism and Asperger's Disorder: Flexibility, Fluency, and Inhibition." Natalia Kleinhans, Natacha Akshoomoff & Dean C. Delis. Developmental Neuropsychology. June 2010.
XIX. Figure 19: Performance on the Trail Making Test. "Executive Functions in Autism and Asperger's Disorder: Flexibility, Fluency, and Inhibition." Natalia Kleinhans, Natacha Akshoomoff & Dean C. Delis. Developmental Neuropsychology. June 2010.
XX. Figure 20: Performance on Color-Word Interference Test. "Executive Functions in Autism and Asperger's Disorder: Flexibility, Fluency, and Inhibition." Natalia Kleinhans, Natacha Akshoomoff & Dean C. Delis. Developmental Neuropsychology. June 2010.
XXI. Figure 21: Performance on Verbal Fluency Test. "Executive Functions in Autism and Asperger's Disorder: Flexibility, Fluency, and Inhibition." Natalia Kleinhans, Natacha Akshoomoff & Dean C. Delis. Developmental Neuropsychology. June 2010.
XXII. Figure 22: Performance on Design Fluency Test. "Executive Functions in Autism and Asperger's Disorder: Flexibility, Fluency, and Inhibition." Natalia Kleinhans, Natacha Akshoomoff & Dean C. Delis. Developmental Neuropsychology. June 2010.
XXIII. Figure 23: Performance on Design Fluency Test. "Executive Functions in Autism and Asperger's Disorder: Flexibility, Fluency, and Inhibition." Natalia Kleinhans, Natacha Akshoomoff & Dean C. Delis. Developmental Neuropsychology. June 2010.
XXIV. Figure 24: Performance on Design Fluency Test. "Executive Functions in Autism and Asperger's Disorder: Flexibility, Fluency, and Inhibition." Natalia Kleinhans, Natacha Akshoomoff & Dean C. Delis. Developmental Neuropsychology. June 2010.
XXV. Figure 25: Activation data and visualizations. "Executive functioning in men and women with an autism spectrum disorder." Michelle Kiep and Annelies A. Spek. International Society for Autism Research. April 2017.


“If the misery of our poor be caused not by the laws of nature, but by our institutions, great is our sin.” –Charles Darwin; The Voyage of the Beagle; 1839


“Few tragedies can be more extensive than the stunting of life, few injustices deeper than the denial of an opportunity to strive or even to hope, by a limit imposed from without, but falsely identified as lying within.”—Stephen Jay Gould; The Mismeasure of Man; 1980.

I. INTRODUCTION

One myth pervades popular public perception of autism: that observed behavioral differences between autistics and neurotypicals equate to deficits in cognitive and/or intellectual ability. From where do these social perceptions spring? Is it possible that modern neuroscience itself might bear some responsibility for casting cognitive differences in the autistic brain as intellectual and/or social deficits, as compared to the neurotypical brain?

Certainly, there is little doubt that modern fMRI, electroencephalography, and behavioral studies reveal significant differences in brain activation and cognitive patterns between autistic and neurotypical children when exposed to problem-solving batteries and simulated social stimuli. However, could the interpretation of those results be unilaterally casting these observed variations as pathological deficiencies in need of therapeutic correction? Social perceptions regarding autistic cognitive deficit typically fall into two categories: intellectual and social. Is it possible that fMRI, electroencephalogram mapping, and eye-movement responses to problem-solving batteries are being used to demonstrate autistic deficiency (a judgment based on value) more than autistic difference (a conclusion based on empirical observation)? Autism is currently conceived of as a spectrum, in which individuals can be high- or low-functioning. Nevertheless, like the one-drop rule of 20th-century racial classification, being anywhere ‘on the spectrum’ often imparts a sense of categorical lack, albeit by degrees.

If these perceptions do exist among the neuroscientific community (as opposed to the public at large), how might they be presented within the literature? How would such biased perceptions gain acceptance and be propagated among the scientific community, which usually adheres to stringent internal standards regarding experimental design and data analysis? Answering this question, to all outward appearances, cannot be easy. Significant rhetorical inroads have certainly been made in analyzing how science (including neuroscience) is rhetorical, how science makes use of rhetorical principles in order to illustrate and persuade, and how science can unintentionally create a totalizing mythology of disability for those who diverge mentally and physically from socionormative averages. However, most of these rhetorical studies draw conclusions from a wide variety of case studies, often involving discourse from many different scientific disciplines, rather than tending to numerous studies from a single subfield or in service of a single scientific theory. For example, Fahnestock’s Rhetorical Figures in Science draws from late 20th-century neuroscience, 19th-century ethology, and early 20th-century evolutionary theory. Alan Gross’s Starring the Text: The Place of Rhetoric in Science Studies considers Newtonian physics, 19th-century taxonomy, and 16th-century astronomy. Presumably this can be attributed to rhetoricians’ interest in crafting broad theoretical frameworks before exploring minutiae; however, practicing scientists often attribute this scattershot approach to a lack of scientific understanding. Most rhetoricians also use a variety of rhetorical, literary, and even philosophical tools, such as Gross’s discourse analysis or Latour’s anthropological methodologies, in order to reach their conclusions. Additionally, owing to a lack of scientific training, few rhetoricians or disability scholars approach scientific journal articles directly, opting instead to analyze longer theoretical treatises devoid of data or statistical analysis.

This study, however, will build on existing rhetorical theories and scholarship in order to develop a unified, straightforward, mechanistic approach for ascertaining whether popular (mis)perception and even personal bias can encroach upon and become enmeshed in scientific literature, propagating unconsciously between scientists over multiple generations. To test these tools, I will examine several strains of literature considered foundational in modern autism research, including work from its first known theorists, to determine whether problematic mid-20th-century attitudes toward autism may have persisted to the modern day, masquerading as legitimate empirical ‘truth’ and resulting in problematic theories regarding autistic cognition and neurological development. We will begin by mapping these theories, breaking them down into their constituent rhetorical components, and then exploring their significance and social consequences. This is very much a post-truth project; this, however, does not detract from the usefulness of rhetorical theory in exploring the discursive components that cause scientists to adopt values and premises as fact.

In order to approach the nuts and bolts of our methodology, we must first present background regarding disability rhetoric and neurorhetoric, two relatively new areas of humanistic enquiry seeking to examine how everyday language and cultural practices construct non-normative states of being (such as blindness and deafness) as inherently positive or negative. Many of these avenues involve medical nomenclature, which is key to the social construction of autism. Most notably, the use of an autism spectrum makes it even easier to suggest that a normative state of being is the most desirable state of being: a shifting series of boxes of various sizes makes it easier to contain an otherwise sovereign entity; partial excision of agency is easier to justify than total erasure. Thus, a ‘high-functioning’ autistic, a person with hearing loss, or an amputee is only partially non-functional. Phrases like ‘high-functioning autistic’ serve to denote a person who is close to normal, but still not normal. ‘High-functioning’ implies a hierarchy topped by ‘fully-functioning’ neurotypicals, who, quite conveniently, happen to be the same folks defining the boundaries of the non-normative condition. As Mitzi Waltz points out, the ‘positive’ visual rhetoric of charity and public-works literature affords a prime example: by portraying the disabled functioning ‘normally’ alongside normative peers, medicalized society is “strongly stating that activities associated with ‘normalcy’ are preferable to those associated with disability” (Waltz 221). Thus, the ultimate objective of anyone occupying a nonstandard category on the human taxonomic hierarchy must be to eventually occupy a locus as close to the normative box as possible. It also means that those occupying non-standard categories must intentionally seek to become invisible by hiding their disabilities; only by blending in can they hope to move up to the normative box. In “Becoming Visible: Lessons in Disability,” Brenda Jo Brueggemann et al. illustrate how ‘enablement’ becomes synonymous with ‘erasure’; requesting ‘accommodation,’ on the other hand, becomes synonymous with placing oneself into an altered, non-normative taxonomy far from the desired locus (373). Brueggemann also cautions that spectrums allow us to indulge “our culture’s long-standing obsessions with definitive causes and effects” (Deaf Subjects: Between Identities and Places 12). If a person is only mildly abnormal, it becomes easier to demonize the dastardly genetic or environmental factors that caused the deviation; covertly inflicting rhetorical violence by degrees is far more socially acceptable than overtly declaring that all neurodivergent qualities are inviable. Despite the undeniable presence of observed differences between those labeled as autistic and those who lack this designation, the way we talk about difference is shaded by a quest to identify flaws rather than neutral variation.

Knowledge of our respective loci, literally our medical history as well as our genetic predispositions, influences our entire state of being. Interestingly enough, a locus is another word for a topos, a stock commonplace formula popularized by Aristotle and used by rhetors in creating an argument (such as cause and effect, or comparison). And our genetic loci can, when medicalized, serve as the basis for arguments regarding our own social worth.

Bodily and psychologically, humans are either fully abled or disabled by degrees; our places within these shifting boxes are always subtly reinforced by the rhetorical encouragements of researchers and medical professionals touting behavioral “cures,” “therapies,” “mitigations,” and “aids.” Ultimately, however, you’re still either abled or disabled. Cynthia Lewiecki-Wilson, Jay Dolmage, and Paul Heilker summarize the ableist/disablist paradigm this way:

Disability studies holds that mainstream culture often behaves in an ableist way: assuming that disability is inherently bad, that a disability is a deficit justifying intolerance and stigma, that it should be cured or overcome; assuming that people with disabilities can be spoken and acted for; and allowing individuals to make these assumptions by claiming a position as ultimately not-disabled and therefore unmarked and entitled to diagnose and stigmatize others. Ableist positioning is thus normative. (314)

In short, able people have the privilege of defining disabled people; the worst position to occupy is the powerless disabled position. Neurorhetoricians Melanie Yergeau and Paul Heilker point out that rhetoric is the instrument which shapes these ableist/disablist identities and positions us within (or without) established power structures. They reiterate that medical knowledge about autism is still wildly speculative and largely untestable; thus, in “the continuing absence of stable scientific or medical knowledge about autism, we need to shine a bright and persistent light on how brazenly rhetorical any utterance, especially any highly visible utterance,” about autism really is, and, equally important, “on how rhetorical any silence about neurotypicality really is” (“Autism and Rhetoric” 265). We should ask why “accommodation [for the disabled] is thought of as something that always needs to be created, something that has a cost” (Dolmage, Disability Rhetoric, xi), as opposed to a natural human social inclusivity. We must ask, “Why, when one group asks to shift modes [learning styles and frameworks], or for more information to be given across more than one mode, are they deficient, asking for something special? Why then, when another group [neurotypicals] shifts or repeats modes, are they constructed as Super?” (45). The use of disability and neurorhetoric allows us to view this power structure sideways, to reverse-engineer its faulty foundations. Again, we must understand that this inherent fear of being labelled as other triggers many silences and failures to self-identify as disabled (or even to consider potential non-normative self-identifications).

For many, a diagnosis of physical or neurological disability creates a complete crisis of identity. For this reason, many autistic adults choose to conceal their diagnoses from their employers, romantic partners, family, and friends (Cage and Troxell-Whitman 1899). The instinct to avoid being labeled as other, and thereby to avoid social rejection and ostracism, stretches back to the dawn of human existence. This, too, works to reinforce neuronormative standards in scientific research.

But to explore why this crisis of personal etiology exists, we need to examine the origins of the ableist paradigm. Dolmage locates it squarely in an unlikely place: the modern university. In Academic Ableism: Disability and Higher Education (2017), he explores the rise of the American eugenics movement (which was only curtailed after the wholesale slaughters of the Holocaust came to light on an international stage). Even today, this “legacy of academic sorting works to strongly ground inferences about social worth in biological formulae, using science to suggest that differences between people are predetermined, genetic, immutable” (27). After all, if a disability is a foregone conclusion, we don’t have to worry so much about mediating it. If we can justify inferiority or even social unviability as deriving from unfortunate but immutable biological, social, and economic roadblocks to success, we can logically justify any sort of segregation, eradication, or even sterilization. To this day, the emphasis on average as normal, and therefore ideal, persists: the long-standing popularity of bell curves as a means of demonstrating what is average/healthy and what is divergent/unhealthy speaks to this very human tendency. Other technologies can be used in place of bell curves to demonstrate the same principles:

In simpler terms…we are using neuromyths and neurorhetorics whenever we are given colored brain maps, whenever connections are drawn between types of people, types of thinking, and parts of brains; this is all wrapped up in academic ableism: ideas about which kinds of brains are normal and the commitment to mark out brains as abnormal, in the desire to place people on steps above and below one another. (83)

The normate brain is still considered an ideal, reinforced by subtle but pervasive ableist rhetorics that work to delineate and define the borders of human potential. In response, disability and neurorhetoricians began to develop their own rhetorical paradigms within communities that generated their own ableist rhetoric in an effort to combat the pervasive disablist paradigm. In establishing their own alternative rhetorical etiology, they threw up a valid cognitive challenge to existing hegemonic orders. They wielded these alternative rhetorics like Sophists, stacking them against normative rhetorics and inviting the reader to choose which better reflected reality. The kicker, of course, is that both perspectives are contingently valid, depending on how one intends to use them. Thus, both are equally invalid: ableism (the common employment and propagation of an ableist perspective) is a purely rhetorical, purely circumstantial, purely social phenomenon, just like disablism (the employment and propagation of a disablist perspective). Deciding between them is a conscious human choice.

So how did rhetoricians expose these privileged perspectives? First, the radical deaf community abandoned the designator ‘person with deafness’ in favor of being, simply and unabashedly, ‘deaf’. The move’s simplicity belies its power: being deaf is a sovereign ontological state. Being a person ‘with deafness,’ on the other hand, qualifies that person’s essential personhood; after all, qualifiers usually suggest something bad. “We talk about left-handed people, not ‘people with left-handedness’, and about musical and athletic people, not ‘people with athleticism’, nor about ‘people with musicality’” (Sinclair, qtd. in Silberman 445). The autistic community quickly followed suit: instead of being ‘people with autism’, they brazenly began to identify simply as autistic. But they didn’t stop there. They turned the medical and diagnostic paradigm on its head, selecting a psychiatric designation for normative, non-autistic people. Silberman records that “the most enduring…neologism was the term neurotypical, used as a label for nonautistic people” (44). Giving non-autistics a medicalized label was a powerful rhetorical move: “With its distinctly clinical air, the term (sometimes shortened to NT) turned the diagnostic gaze back on the psychiatric establishment and registered the fact that people on the spectrum were fully capable of irony and sarcasm at a time when it was widely assumed that they didn’t ‘get’ humor” (444). Several autistic rhetoricians took things one step further:

Carrying the meme to its logical extreme, an autistic woman named Laura Tisoneik launched an official-looking website in 1998 credited to the Institute for the Study of the Neurologically Typical. “NT Syndrome is a neurobiological disorder characterized by preoccupation with social concerns, delusions of superiority, and obsession with conformity,” the site FAQ declared. “There is no known cure.” (qtd. in Silberman 441)

This construction reveals the determinist and disablist influence of diagnostic rhetoric on anyone labelled as divergent: immediately we can conceive of research paradigms, evaluative batteries, and even drug treatments designed to treat their superabundance of anxiety, recurring sexual obsessions, and disruptive delusions associated with this apparently debilitating condition.

These explanations also help us unpack how disablism might be operating in autism literature. We’ll start with Theory of Mind (ToM). Proposed by Simon Baron-Cohen in 2002, ToM holds that autistics cannot cooperate or communicate with others because they lack the ability to ontologically understand their own minds or the minds of others. The state of lacking a Theory of Mind even has a convenient (and consummately disablist) name: Mindblindness. Under this paradigm, an autistic isn’t really a person at all; they’re more of a hyper-intelligent animal. Like Clever Hans, a famous 20th-century performing horse who learned to observe microchanges in his owner’s expression in order to stamp out the correct numerical answer to a math problem, autistics are smart, just not human-smart. Melanie Yergeau covers the subject extensively, noting:

Rarely will a scholar directly claim that autistics are not human. Nevertheless, when leading autism researchers claim ToM as ‘one of the quintessential abilities that makes us human,’ I don’t find it a far stretch to infer that colleagues in philosophy, psychology, or narratology believe me and my kind less than human. (277)

The truly strange thing, as Yergeau points out, is that the theory persists despite empirical neuroscientific evidence to the contrary:

Time and again, autistic people have performed in ways that contradict the verdict that autistics are mindblind…many autistics have passed fake assessments. But even in the face of such evidence, scientists refuse to cede ground. We are claimed to pass false-belief tasks variously because we hack or reason through the rules of a test rather than organically understand conceptions of false-belief; because we are not impaired enough to properly call ourselves autistic, and thus ToM deficits are universal but simultaneously a matter of degree; because we have learned, through natural (albeit delayed) development, what the test aims to measure…(276)

Yergeau’s list goes on. It takes a rhetorician to highlight the fact that neuroscientists still adhering to ToM are using a sliding definition to justify the inhumanity of their subjects. If data doesn’t fit their theory, they simply construct another box. And then another. And then they split boxes, or propose that the data is flawed due to some form of prior preparation on the part of the autistic subjects. As we will see in our literature section, autistics are accused of faking results on psychological batteries in order to appear normal; in other words, they pass the autistic Turing Test (a series of questions designed to root out sentience in Artificial Intelligence) because they know what scientists want to hear.

Neurorhetorics, a term developed by Jordynn Jack (a rhetorician) and L. Gregory Appelbaum (a neuroscientist) to describe a useful critical approach for interdisciplinary investigation of neurological enquiry (“What are Neurorhetorics?” 405), therefore help us to keenly evaluate how culturally and socially derived aspects of scientific theory superimpose themselves like a lens over raw data, making it possible to perceive firsthand how “The ideology of deficit shapes the collection and evaluation of data, and the interpretation of the data confirm the accuracy of the ideology (thus providing ideological support for the ideology of deficit)” (279). A generation of ToM adherents is seeking proof of autistic inhumanity because a theory is guiding them to look for it. As Duffy points out in “The Pathos of ‘Mindblindness’: Autism, Science and Sadness in Theory of Mind Narratives,” the complete absence of data to support ToM “means that the degree to which ToM is accepted by researchers and the general public ultimately comes back to the rhetorical power of a story that asks us to accede to its natural, emotional, and moral views of the world” (45). In other words, well-meaning neurotypicals are primed to accept ToM because it supports their socially reinforced location at the top of the proverbial hierarchy of normalcy. By building on concepts drawn from disability rhetoric, we can use neurorhetoric to evaluate scientific consideration of autism as an ontological state, as well as to observe potentially harmful rhetoric in action.

However, as handy as neuro- and disability rhetorics are for explaining the broad strokes of how scientific enquiry can, if not reflexively conscious, work to disenfranchise populations, they rarely provide a mechanistic approach for tackling disciplinary issues and definitions in journal studies across multiple decades, or for subsequently unpacking and tracking the conceptual engineering inside the arguments themselves. Rhetoricians like S. Scott Graham have in fact begun to examine entire research paradigms and the technological valuation that gives rise to disciplinary definitions and accepted evidentiary standards; this work is intended as an extension of these methodologies. Following Graham, in order to develop a workable, systematic tool capable of producing discernible and standardized outputs suitable for large-scale comparative investigation geared toward isolating the sources of problematic neurorhetorics, we must turn to a surprising place: classical and neo-Aristotelian rhetorical analysis. Using the specific elements and techniques championed by Aristotle and his intellectual descendants, we can begin to build a straightforward framework for probing journal articles for bias, preconceived notions of worth or ability, and socially reinforced notions of privilege.

One especially pertinent example involves the use of analogy in the scientific literature.

In The New Rhetoric (1971), postmodern scholars Chaim Perelman and L. Olbrechts-Tyteca

demonstrate how many modern arguments are inherently analogy-driven as a means of

appealing to a particular audience (3:3:82). Science, because it must be taught to young students who will eventually espouse it themselves, also uses analogy as a pedagogical technique to both overtly instruct and unconsciously indoctrinate. For example, fMRI neural activation is described as ‘lighting up,’ implying that a lack of such activation indicates a deficit, a literal darkness. Describing our brain as ‘wired’ likens our neural processes to computing, which entails both a lack of humanity and a tendency to fail from overuse.

In “Analogy and Metaphor Running Amok: An Examination of the Use of Explanatory Devices in

Neuroscience,” Slaney and Maraun argue that the presence of many translative figures “leads to certain conceptual confusions and, thus, fails to aid in clarifying the nature of those phenomena they are intended to explain” (153). Yet these figures of speech are critical pedagogical aids; most learning cannot be undertaken without them. Because they come from the minds of human educators, such analogies are, by definition, subjective; our understanding of a knowledge system like science is inherently shaped by the “creative thoughts” of our teachers (396). After all, “Arguments from analogy set two things before an audience and in doing so they facilitate comparison” and the exploration “of what they have in common

[inherently] facilitates the selection of common features” (136). This is not to say that all analogy is useless or harmful: only that it is not objective—and that it always adds conceptual

weight and definition to an argument. The authors also point out that verbal and visual

parallelism has a similar effect: seeing words backed up by data tables can be convincing to a

human brain specifically evolved and hard-wired to perceive and respond to repeated patterns

(124). Analogies and metaphors are generated from personal perspective; they then extend into scientific rhetoric, carrying part of that personal perspective with them.

In Shaping Written Knowledge: The Genre and Activity of the Experimental Article in

Science (1988), Charles Bazerman explains that such personal perspectives are not merely circumstantial; they are necessary elements of scientific progress. He explores the world-making potential of rhetoric by investigating the choices made by scientific rhetors: “Sometimes scientists’ rhetorical choices are self-conscious responses to perceived rhetorical problems; sometimes they are unselfconscious impromptu inventions, sometimes they are slow and imperceptible shifts” (15). After all, all scientific work is as much a product of social forces as a foray into data collection: “To this day, a successful publication must satisfy gatekeepers to get published, must defend itself against critics to maintain credibility, and must appear useful enough to readers to be cited and incorporated in future work” (143). Given these constraints, is it any wonder the most successful scientists are the most skilled rhetors? To be accepted, a scientist must necessarily provide a trimmed, manicured, framed view of their work. They must be aware of how to successfully “substitute representation for presentation” to achieve “added selectivity and control in planning and executing the empirical events,” thereby “open[ing] the door for conscious and unconscious distortions” (144). Even the structure of the modern scientific article is rhetorically motivated: framing the article around an expectation that a theory will, in fact, be borne out by the data significantly alters how we evaluate it (247). Overall, “The

large-scale trends revealed here are consistent with the traditional view that science is a rational, cumulative, corporate enterprise, but point out that this enterprise is realized only through linguistic, rhetorical, and social choices, all with epistemological consequences” (183).

Many of these consequences can completely refigure how we perceive reality.

Though she didn’t deal exclusively with science, addressing these consequences was in

part Susan Jarratt’s goal in Rereading the Sophists: Classical Rhetoric Refigured (1991). Her most important point was that the Sophists wielded the power of nomos (codes, habits, customs,

conventions) to expose the cultural underpinnings supporting conventional schools of thought.

Fourth and fifth century Sophists made careful distinction between physis (nature) and nomos

(custom-laws). While physis embodies what may be considered immutable natural conditions—somewhat akin to ‘objective’ data or ‘empirical’ observation—nomos represents human interpretation, codification, and legislation. One can frequently masquerade as the other; one of the chief benefits of rhetoric is that it assists us in successfully distinguishing them. Jarratt wrote that the use of rhetoric, therefore, is “closely linked with nomoi as a process of articulating codes, consciously designed by groups of people” (42); Sophists like Gorgias were “more interested in exploring how probable arguments can cast doubt on conventional truths” (59) than divesting passers-by of their coinage. Their work aims to “call attention to the ways patterns of reasoning came to be accepted” as nomoi. The operant nomos of any discourse “determines the behavior and activities of things through convention” (53); thus, an examination of said conventions affords us the ability to “discover marginalized voices” and winnow out “the falsely naturalized logic of patriarchy” (75). With the ability to wield nomoi to reveal how human social codes and class/racial mores have actively shaped paradigms in what was long considered an

unassailable wall of scientific empiricism, we can dismantle the architecture of an ‘objective truth’.

Alan Gross, Joseph Harmon, and Michael Reidy continued to uncover how scientific

‘custom-laws’ operate in Communicating Science: The Scientific Article from the 17th Century to the Present (2002) by tracing the history of the scientific journal article from its feeble origins— as little more than anecdotal diary entries—to increasingly stylized and regimented essays featuring standard entry types and organization. Like Olbrechts-Tyteca and Perelman, Gross, in his continuing work, exposes how analogy adds both inference and evidence, in addition to providing clarity (Starring the Text, 2006); this added meaning is incredibly difficult to peel away from the basic argument. In The Rhetoric of Science (1990) he returns to rhetorical analysis as the Holy Grail of scientific rhetorical criticism: with its power, “science may be progressively revealed not as the privileged route to certain knowledge, but as another intellectual enterprise, an activity that takes its place beside, but not above, philosophy, literary criticism, history and rhetoric itself” (3). With varying degrees of success, he demonstrates how “the objectivity of scientific prose is a carefully crafted rhetorical invention, a nonrational appeal to the authority of reason” (45). He claims that the denial of personal interest in science is self-defeating, and generates a counterproductive ‘myth’ that science is free from human bias:

scientific reports are the product of verbal choices designed to capitalize on the

attractiveness of an enterprise that embodies a convenient myth, a myth in

which, apparently, reason has subjugated the passions. But the disciplined

denial of emotion in science is only a tribute to our passionate investment in its

methods and goals. (45)

Nomoi are inherently non-rational, as they are based on social custom and convention.

Admitting that culture and personal perspectives have a place in science is a sobering

proposition. Nevertheless, continuing to operate within a perceived sterile vacuum does a

disservice to the disciplines themselves; one that can prove dangerous if the right political and economic conditions are met. But still—where is the evidence? If we believe that these nomoi are operating like cryptids, roaming around the backwoods of scientific endeavor, where is the proof? More importantly, how do we go about finding compelling evidence to prove their existence? The trick now is to catch nomoi in the act of establishing themselves in scientific literature.

Rising to the challenge with Rhetorical Figures in Science (2002), Jeanne Fahnestock locates multiple rhetorical figures common to scientific discourse. The brilliance of this is that it introduces classical language plays (most with 25-cent, Latin names) and applies them to specific cases in the scientific literature, moving systematically from one figure to the next. She moves from metaphorical substitution (catachresis) to either/or constructs (antithesis), to parallelism, gradation (gradatio), physical enantiomorphs (mirror images), and even linguistic and visual polyptota (arguments through repetition). For example, Aristotelian antithesis is explored as “a powerful conceptual tool in the framing of arguments, particularly arguments employed in science” (58). An arguer is also capable of constructing their own rhetorical opponent: “the arguer constructs an argument to set terms in opposition, to make novel contraries out of terms and notions that were not necessarily opposed before in the audience’s thinking” (58). This exaggeration makes the arguer appear more reasonable than the unreasonable strawman he has created. By the same token, reconfiguring the nature of an existing opposition is also

possible, via “either pushing the terms apart into mutual exclusion or placing them as extremes

on a connecting spectrum” (58). As you might imagine, this has tremendous ramifications for

any scientific venture involving classification. Definitions can rhetorically situate everything from

species taxonomy to neurological pathology; because there is no way to objectively classify

something, the designation of a particular entity (such as autism) becomes subject to the

rhetorical skill of the scientists who first frame a particular discovery. If adopted by other

scientists, these ways of classifying and considering become their own set of facts; a foundation

that appears to be based in abstract truth, but is actually laden with consummately human

valuation. Autism can be a distinctive neurotype (thereby according agency and rights to those

who possess it) or a sliding spectrum of pathological disability (thereby according limited or no

rights or agency), all depending on who is doing the defining. This, of course, is also the purview

of gradatio (gradation): “it pushes apart by showing how many steps intervene between one and the other. Thus, rather than inevitably connecting, an intervening series can even construct terms into opposites” (97). To return to medical discourse: neurological classification designates individuals as occupying varying states of mental health. During diagnosis, individuals are split into more normative or more neurodivergent categories, based on observed behaviors. Instead of designating a sovereign ‘Autistic Neurotype’ and then figuring autistics as having different degrees of social disability, framing autism as ‘Autism Spectrum Disorder’ presents autistic humans as being categorically dysfunctional—the severity depends on how much they resemble neurotypicals at the other end of the spectrum. Framing the argument this way immediately maps a disorder onto behaviors that do not fall in line with normative, neurotypical patterning.

Predictive power also adds strength to this argument: if a theory or article preserves

parallelism—a stylistic arrangement in which elements repeat themselves for emphasis, thereby

engendering the appearance of validation— it invites the reader to assume characteristics or

classifications at the opposite ends of a spectrum; it all depends on “the ease with which [the

author’s] antimetabole [a ‘turnabout’ reversal of terms designed to stress a point] can sustain

an ellipsis [an intentional rhetorical omission]” (124). In other words, the author intentionally

leaves holes for the reader to fill. Thus, the reader becomes the argument-maker when they are

led directly toward a singular conclusion, without the author having to explicitly state anything.

The reader is not aware they have been blinkered, nor that they have themselves linked

elements together; their acceptance of the author’s argument seems like a conscious choice.

Like Aristotle, Fahnestock calls this the “fake window” (69); the clever scientist is careful to

construct his linguistic frame so that only the desired landscape/perspective can be seen by the viewer; he is then free to allow the viewer to examine any part of that window they like—provided they can’t see beyond its bounds. This is the essence of all argument (including this one); persuasive discourse cannot exist without it.

But how is the framing built? What persuades other scientists to accept its presence?

Are they aware of the window, or has rigorous training inured them to it? If so, how does this fake window become normalized over multiple scientific generations, occasionally resulting in faulty paradigms that can dehumanize or easily be dismantled (or even ridiculed as ‘fake news’) by outside political forces? This is the primary project of this dissertation. Rhetoricians have many topoi—formulae and themes derived from classical rhetoric which can be used to de/construct the style and purpose of arguments—but what we still lack is a direct, non-literary means of cutting to the heart of the beliefs, perspectives, unconscious biases, or even overt

agenda on which those arguments are built, as well as a means of tracing them back to their

sources. Scientists are necessarily highly social; their methodologies and goals are learned, trained, tested, approved, and then disseminated. Their use of symbols as tools is sophisticated and effective. Explaining in mechanistic ways how an ‘objective’ scientific argument in a

publication can stem from something as ‘subjective’ as a personal perspective has proved

elusive, as neo-classical methods usually appear too grounded in literary theory and reasoning

to be considered applicable to actual scientific journal articles, causing many rhetoricians to

back away from directly involving themselves in science-qua-science. Even so, rhetoricians like S.

Scott Graham and Lynda Olman have made significant inroads in this department. My objective is

to build upon their work by establishing a means by which any rhetorician with a solid grounding

in scientific procedure might study science where scientific framing of issues actually occurs: in

scientific journal articles.

An important component of this analysis will involve data visualization. Rhetorical

scholars are, in fact, beginning to answer the critical question of how graphical data influences

the viewer, and their work will prove essential for framing my hands-on approach to article evaluation. A critical partnership with technical writing scholars and scientists has resulted in enhanced insight into how graphical representation of data is influenced by analogous rhetorical constraints. After all, rhetoric is just the suasory use of symbols, whether textual or graphical.

Now, too, the definition of viewer has expanded to include the general public, as the popular science market has grown steadily since the late Nineties. Scholars are increasingly interested in how popular science influences public policy and understanding of science. The first studies came from an unlikely pairing: an evolutionary biologist and a rhetorician. Working as a team,

Stephen Jay Gould and Paul Dombrowski explore the exaggerated embryo development

sketches of Ernst Haeckel. Gould led the charge, publishing an article in Natural History

(“Abscheulich! (Atrocious!)”; 2000) exposing how Haeckel subtly fudged his drawings in order to better illustrate the persistent theory that ontogeny always recapitulates phylogeny. Despite

Haeckel’s ‘good’ intentions, these “Improved illustrations masquerading as accurate drawings spell much more trouble in popular books intended for general audiences lacking the expertise to separate a misleading idealization from a genuine signal from nature” (35). In 2003, Paul

Dombrowski revisited the subject, concluding that these falsifications, however well-intentioned, are fraught with very serious ethical and practical implications: treating the public like fools provides fodder for creationists; treating students as less worthy of full disclosure may prevent science from advancing to a full understanding of evolutionary theory. And, of course, it all inevitably comes back to National Socialism: as with Haeckel, more simplistic diagrams provide a better way of educating, unifying, and intellectually standardizing das Volk (305), providing greater social uniformity. Remaining on the evolutionary front (and again following

Gould’s lead), Jeremiah Dyehouse demonstrated how a progressive series of horse skeletons reflective of traditional diagrams of horse evolution leads to an inaccurate perception of evolution as ladder-like and hierarchical, as opposed to branching and nodal (“A Textbook Case

Revisited”; Technical Communication Quarterly; 2011). Other scholars are more interested in data representation (although Gould himself was no stranger to it): visual rhetoricians like Aner

Tal and Brian Wansink concern themselves with how trivial advertisement graphs afford drug companies greatly enhanced rhetorical persuasiveness, even when the graphs are (as anyone who has ever taken a statistics course knows) heavily manipulated and oversimplified (“Blinded

with Science”; 117). Stephen Jay Gould points out that eliminating tails, lowering axis points, and even truncating data lines are all allowed; while most journals do attempt to curtail these strategies to some extent, the strategy persists in publishing, succeeding in persuading not only casual magazine readers, but trained scientists as well (“Abscheulich!” 45). Tufte’s Visual

Explanations (1997), Kostelnick and Hassett’s Shaping Information (2003), Kress and van

Leeuwen’s The Grammar of Visual Design (1996), Prelli’s The Rhetorics of Display (1997) and

Dombrowski’s “Ethics and Technical Communication” (2000) all explore this topic. On the popular science front, scholars like Sarah Perrault are interested in how popular science communication filters through society, resulting in (sometimes very flawed) public understanding, which in turn engenders faulty public policy (Communicating Popular Science:

From Deficit to Democracy; 2013).

In the sciences, visual argument affords authors quick and apparently unassailable evidence of their experimentation; in the physical and biological sciences, conference ‘papers’ are primarily presented visually, as standalone posters on easels, to be inspected at will by passing colleagues. The development of graphical data presentation in published journal articles takes this ‘evidence’ one step further. Placed alongside the authors’ verbal verification of their hypothesis, these graphs, tables, and maps perfectly parallel the text, providing what is perceived to be instant verification of said theory. To make matters worse, the statistical regressions underpinning these data displays aren’t immediately calculable via mental math; in the absence of the complete data set and an Excel spreadsheet programmed to run ANOVAs and t-tests, even a colleague has to take the author’s word that both the math and the data sets being fed into their statistical batteries are accurate. In consideration of these difficulties, the

greatest inroads into scientific visual rhetorical study have been made via three basic

approaches: 1) a philological examination of how images work to indoctrinate user-viewers, 2)

probative examination of what visual arguments are not showing (the black box), and 3) the application of verbal rhetorical concepts to visual images.

There’s a reason keen visual rhetoricians like Kostelnick and Hassett also emphasize the importance of isolating visual conventions. Without significant scientific training, reverse-engineering scientific visuals can be a daunting project. But understanding conventions allows us to see how ‘valid’ data trends often stem from mere cultural paradigm. Data is not objective, nor is it self-interpreting: it relies on human customs and training to make sense of it.

In Shaping Information: The Rhetoric of Visual Conventions, Kostelnick and Hassett strive to reveal the human hand at the tiller during every process of data analysis:

Conventions operate in social contexts where users control them. Conventions

do not descend from a platonic domain of preexisting forms; they are inventions

that users learn [from other humans], imitate [in order to be judged ‘true’

scientists], and codify [in order to teach new scientists]. Sometimes their

ubiquitous presence may lead readers [and scientists] to forget that conventions

are, in fact, social constructs. (6)

Visual conventions are necessary, as they establish a baseline from which multiple people might define and discuss an issue. They therefore become an important part of the ritual cycle: users learn them, employ them, then pass them on to new users. In this respect, these data visualization protocols are no different from any cultural conventions employed by any human

social group, right down to the emergence of the species. Within culture, repetition engenders

its own justification. Our own frequent ritualized celebrations of logic, reason, and mathematics

permit these enculturated visual discourse communities “to ignore the artifice of conventions

that, to them at least, appear natural and absent of any mediation” (34). It isn’t always that scientists can’t see beyond their shared, codified methodologies: the real problem is that they must operate within sanctioned discourse communities to earn a living. Thus, they know they are preparing data for peer and public consumption and must prepare accordingly.

By “evaluating what readers are accustomed to, empirical research loads the deck in favor of

conventional designs at the expense of those that appear novel” (192). Remaining part of the

tribe is even more important than sharing new ideas; one must introduce innovation slowly or

risk being ostracized.

Visual convention can also be an important source of validation for new theories. S.

Scott Graham points out in "Agency and the Rhetoric of Medicine: Biomedical Brain Scans and the Ontology of Fibromyalgia" that PET scan visualization became the rhetorical agentive force

that afforded fibromyalgia (and fibromyalgia patients) agency as ‘real’ sufferers of a ‘genuine’

medical condition. Previously, fibromyalgia patients were believed to be suffering from mental illness or other form of psychosomatic disorder; accepting the diagnosis as ‘legitimate’ represented a radical transition in scientist thinking. According to Graham, this radical paradigm shift was generated by the position a technology (namely, PET scanning) occupied within a complex web of semiotics and actor networks. Graham argues that “change arises from a series of rhetorical events over time,” and that “change becomes the status quo when…authoritative structures operate to maintain the change”. The currency authoritative structures accept when 25

making these changes is often the customary (nomotic) use of a technology—in this case, a form of functional neuroimaging. This is not because PET scans provide indisputable proof; many scholars have commented on how PET scan data can be variably interpreted in much the same manner as statistics, and is essentially a ‘black box’ process (Dumit 2004; qtd. in Graham 2009). Rather, it stems from the agency bestowed upon the technology by the social consensus of the scientific community. Scientists ‘authorize’ a black-box technology when it fits an apparent custom-law and can provide visual ‘evidence’ of a nomos: its ‘results’ are approved even though scientists never directly observe the underlying data. Thus, the ‘validation’ of a technology is based on how well it builds on other nomoi.

According to Graham:

neuroscience modeling is built on an ever-receding set of homunculi. That is, for

every description of a neurological system, that description is predicated on

another system that has been essentially black boxed for the sake of the current

model. Subsequently, when describing complex neurological systems such as

pain processing, neuroscientists are faced with an inherent problem of infinitely

receding homunculi, or in the more familiar parlance of philosophers, we have

once again a situation where it is turtles/homunculi/black boxes all the way

down.

In other words, the visual ‘results’ depicted in neuroimaging are not really data at all; they are a projection of how results are filtered through a disciplinary lens which accepts the technology as valid—albeit in a somewhat prima facie fashion, as there is necessarily a complete lack of

epistemological certainty when human observation is so far removed from the phenomenon being studied.

Several other intriguing case studies exist which illustrate the profound power of visual convention to indoctrinate new scientists and the lay public. As it turns out, what is NOT shown in data visualization can also reveal conventions (and their problematic after-effects). Let’s return to how lauded evolutionist Ernst Haeckel fudged his famous embryo sketches in order to

‘prove’ that ontogeny recapitulates phylogeny: the simplified sketches he pioneered have been used to illustrate the similarities of embryonic development among species right up to the modern day. But the use of oversimplification as a training aid has a darker implication: Gould points out that “Improved illustrations masquerading as accurate drawings spell much more trouble in popular books intended for general audiences lacking the expertise to separate a misleading idealization from a genuine signal from nature” (“Abscheulich!” 375). Dombrowski emphasizes Gould’s point that because differences aren’t there, these drawings create the false perception that a highly conserved embryonic stage exists, in which chicken and turtle and mouse and human embryos are functionally equivalent (“Ernst Haeckel” 43). But similarity is not the same as equivalence: one cannot be substituted for the other. Nor are there, for that matter, any discrete ‘stages’ of development. ‘Stages’ are a human construct designed to simplify a continuous process. For budding scientists, “the overall impression [these illustrations] leave is more salient and presumably felt as more important by the audience than the specific details of particular specimens” (75). This becomes relevant in genetic expression studies, as it has led an entire generation of anatomists and taxonomists to ask ‘when’ and ‘at what stage’ embryos begin to ‘diverge’ from one another, when they were never indistinct to begin with.

Thus, rhetoric hidden in training aids can guide the research of an entire generation of scientists.

For better or worse, scientists end up retaining the oversimplified constraints of their training

aids, attempt to draw them into research, and end up puzzled as to why their specimens won’t

fit the ‘established’ model and cannot be identified as pertaining to any discrete ‘stage’. They

are entrenched in the paradigm into which they were inadvertently indoctrinated. And yet

Haeckel himself is rarely challenged. Because Haeckel was an artist as well as a scientist, he

effectively had “feet in both territories of these binaries; he could have things both ways and

could not be easily challenged or refuted because he could shift the bases of his argument to

suit” (72). Had Haeckel published his work as a journal article, he could have been challenged;

because he was trading in visual arguments, however, it was much harder to catch his rhetoric

in action, as society tacitly accepts that art is subjective, and generally allows it to function

unquestioned. We’ve created our own blind spot. As Jean-Luc Doumont points out, “a word is

worth a thousand pictures, too”: “visuals are simply not suited [to] expressing abstract concepts and complex meaning” (“Verbal vs. Visual”; 220).

The second intriguing case returns us to the natural history of horses. Most of us already know that evolution is not a hierarchical ladder; that it’s more like an unruly rose bush, rooted in one spot, simultaneously spreading long and sparse creeping tendrils at crazy angles in one place and projecting riotous clumps of thick interwoven branches in others. And yet, the perception of evolution as directional—working slowly but inexorably within the constraints of a certain pattern towards the expression of a final form—persists. Nowhere is this more evident than diagrams depicting the evolution of the modern horse. Stephen Jay Gould points out in

Hen’s Teeth and Horse’s Toes: Further Reflections in Natural History (1994) that depicting proto-horses in a series, starting with tiny, multi-toed ancestors and moving to progressively larger,

more sparsely-digited modern forms implies that evolution moves a species in a slow, modular fashion toward its contemporary form (342). It also implies that the modern horse, literally running at the head of the pack, is also somehow the best, most well-adapted form (342). The classic diagram is not wrong, per se: the animals depicted are most likely all actual predecessors of the modern horse. The trouble is with what isn’t depicted: for example, the many larger-toothed descendant populations of smaller horses that went on to develop smaller teeth yet again. Some of the smaller horses branched into larger descendant species with bigger body forms and longer legs, only to see their own descendants become more petite again, depending on which selective pressures were operating in colonized environments that varied between forest and marsh and grassland. The fact that our modern horse is singular, and has only one operating toe per limb, is largely a product of chance. In “A Textbook Case Revisited” Jeremiah

Dyehouse advanced this case more deeply into the rhetorical realm, explaining how, in the case of the visual display depicting the evolution of the horse at NYC’s American Museum of Natural

History, “visual rhetoric does not only help generate evidence for accounts of evolutionary change; it also seeks to distinguish different evolutionary accounts” (5). Hidden in the equine pseudo-hierarchy is the implication of the Panglossian Paradigm that so disturbed Gould; the incorrect but pervasive view that current biological forms are as perfectly adapted to their environment as they can be. The perspective engenders the false perception that organisms must match their surroundings perfectly to survive; it also implies the converse determinist notion that every aspect of an organism’s existence has a survival-based or reproductive purpose. And so, under pressure from scholars and scientists, NYC AMNH completely revamped

its horse exhibit, creating a branching network of horse forms for visitors to wander. Dyehouse

reports that the museum wanted visitors to come away with a more accurate perception of

evolution, not as linear, step-wise ladder to perfection, but as a highly contingent, nodal,

directionless natural phenomenon. To do this, they intentionally developed a visual rhetoric that

“develops and enhances, rather than merely facilitates, visitors’ understanding of evolution and

evolutionary science” (6). Even though the intent is positive, it still illustrates the power of

rhetoric inherent in the visual image, as well as how “complex forms of communication advance

[their] own institutional aims” (6). Ultimately, “this case demonstrates the multiple roles that

argumentation can play in visual communication about science”; it also “suggests the

importance, for science communicators, of understanding scientific visualization from multiple

points of view” (7). That includes power discourse: Sharon Macdonald emphasizes in The Politics of

Display that what is featured and explicated in museums is very much a Foucauldian expression

of power and social intent: power and ‘truth’ go hand in hand, particularly at locations of truth-

making, such as science museums with active research agendas (3). In creating them, displaying

them, and fostering them, a prominent museum’s displays “are lending to the science that is

displayed their own legitimizing imprimatur” (2). In many ways, scientific visual displays,

whether literary or exhibitive, determine what is allowed to be real, and what is not.

Dyehouse eventually takes his argument to a very intriguing place: he begins using

Aristotelian topoi to analyze a physical, visual series of objects. Classically used as places to generate argument, these ‘topics’ are a means of systematically addressing, creating, and unpacking verbal rhetoric. But as many rhetoricians of science demonstrate, rhetoric, and its

Invention, applies to any form of communication. Drawing on Fahnestock’s Rhetorical Figures in

Science, Dyehouse demonstrates how the original horse evolution schematic drew largely on incrementum, the use of stages or increments to arrive at a conclusion. Just as in verbal or written communication, it is the structure of the argument that lulls us into belief:

the patterning itself, not the specific relation of evidence to conclusions, is most

important to nonscientists. In other words, in such illustrations, the trend

suggested by the illustrations’ overall pattern contributes to viewer’s

acceptance of an evolutionary change in horse species. (8)

In any visual argument, such line-type displays cause the viewer to focus primarily on the

“overall form” (8) of the visual argument, rather than the actual concept itself. Humans are programmed pattern-seekers: our brains are hardwired to conflate coincidence and causality.

Being quick on the uptake is largely a survival mechanism, but it also seriously hampers our ability to evaluate complex data sets (again, another shot at the Panglossian

Paradigm). So, when our brains can match visual patterns to patterns in verbal communication, so much the better. Haeckel’s embryological sketches did the same: each embryo was staged and presented as having developed by a series of discrete steps, which were visually replicated over and over with pictures of other embryos. In The New Rhetoric, Perelman and Olbrechts-Tyteca also discuss visual and verbal parallelism as a major persuasive force in literature.

Though originally deployed as a literary topos for the examination of repeated, parallel statements of logical succession in oration, parallelism extends to the physical, the authors argue: “Arranging representational or iconic images in rows or arrays is yet another mode for the parallel presentation of evidence”

(123). Diagrams and museum exhibits are not the only visually representative data to do this. As

it turns out, we encounter this representational parallelism in a far more common place: data

tables.

Essentially, arranging data in a table helps to “epitomize an arguer’s claim that multiple

instances belong to the same grouping” (134). This phenomenon is called summative induction,

and it works regardless of whether all available data is included or not:

…when the horizontal entries of a table are translated into full verbal

arguments, their common grounding in the stylistic norms of education and

induction becomes obvious. In effect, the compiler who presents a column of

terms under a heading “argues” for their inclusion as instances of the general

term in the heading…Such a mode of visual parallelism unproblematizes the

data and diminishes occasions of refutation. (139)

In other words, we don’t challenge data tables because they are their own argument. They hide

in plain sight because their validity as a structure, and therefore as a means of evaluating data,

has already been mentally confirmed by the reader via cultural convention. This makes

unpacking a figure extremely difficult, especially if it involves statistical analysis, with which most rhetoricians, as scholars of the humanities, are largely unfamiliar. This is not to say intradisciplinary challenge does not occur: scientists do challenge one another on all aspects of an article prior to publication. However, because visual commonplaces are so widely accepted, they are easy to overlook. In “Toward a Theory of Verbal-Visual Interaction” (2009), Alan Gross

points out that we struggle to rhetorically ‘unpack’ tables because they operate in a sort of

communicative netherworld that is nowhere and everywhere at the same time:

Tables are para-linguistic: they mobilize the verbal and visual systems by

exploiting the vertical as well as the horizontal dimension of the page. Their

intersecting grids of rows and columns highlight the cognitively parallel contents

of the individual spaces their intersections create. In addition, a set of visual

clues—superscripts, single and double underlines, form a code that helps the

reader identify and differentiate among the symbolic contents of the rows and

columns. (159)

We struggle to pin tables down because they blend into the page; they refuse to occupy familiar perceptive grids. Disguised and camouflaged within the niches they themselves create, their rhetorical arguments too often escape detection. They even provide their own conceptual roadmaps; it’s a terribly clever means of smuggling an argument. And scientists do make use of it: in Science: From Sight to Insight, Gross and Harmon point out that scientific literature literally hinges on the ability of these visual displays to activate and enable meaning within the text: without tables, maps, and diagrams, science fundamentally cannot make meaning. They posit a complex life-cycle for a fact, which moves from table, to graph, to text: a table legitimizes data; then a graph projects a causal connection for this data; and finally, textual confirmation

“bestows meaning on these patterns of data” (281). Each stage of the life cycle confirms and reinforces itself before allying with the next. By the time the reader imbibes the text, they’ve already had the argument confirmed, validated, and reinforced no fewer than three times. Most readers aren’t even aware that the first step (the table) is already an argument. It is a form of parallelism in which elements are made to mirror one another in order to become persuasive.

In many ways, the scientific article is the ultimate tri-layer enthymeme. Defined by Aristotle and his predecessors, the enthymeme is a form of syllogism, an inference adopted by the reader after a rhetor provides a series of premises. The genius of the enthymeme is that its justification rests on unconscious, tacit acceptance of an unstated premise. For example:

Enthymeme: Socrates is a man and Socratia is a woman; therefore, Socrates is more intelligent.
Unstated, Tacitly-Accepted Premise: Men are always more intelligent than women.

Enthymeme: This tea contains honey and not sugar; therefore, it is healthy.
Unstated, Tacitly-Accepted Premise: Honey is always healthier than sugar.

Enthymeme: This painting costs $100,000; therefore, it is very good art.
Unstated, Tacitly-Accepted Premise: Good art is always expensive.

Table 1: Sample Enthymemes.

Notice that all of these ‘proofs’ are in fact value judgments disguised as pure, apparently

unassailable logic. Yet each of them can be dismantled by successfully attacking the premise,

which is, at its core, a personal perspective on the part of the reader. If the reader agrees with

the premise, they are likely to adopt the argument, as well. If they become aware of the premise

and attack it by pointing out, for example, that 1) men are not always more intelligent than women, 2) honey is not always healthier than sugar, and 3) good art is not always expensive, they can argue against the enthymeme. The tough part is catching the hidden premise and realizing it represents a value rather than a fact. According to Aristotle’s Rhetoric, an enthymeme succeeds “if, certain things [premises] being the case, something else [the conclusion] beyond them results by virtue of their being the case, either universally or for the most part” (1.2.9: 1356b). In other words, the reader assumes that a point has been demonstrated, even if it hasn’t been explicitly or meticulously explicated:

Aristotle calls the enthymeme the “body of persuasion”, implying that

everything else is only an addition or accident to the core of the persuasive

process. The reason why the enthymeme, as the rhetorical kind of proof or

demonstration, should be regarded as central to the rhetorical process of

persuasion is that we are most easily persuaded when we think that something

has been demonstrated. Hence, the basic idea of a rhetorical demonstration

seems to be this: In order to make a target group believe that q, the orator must

first select a sentence p or some sentences p1 … pn that are already accepted by

the target group; secondly he has to show that q can be derived

from p or p1 … pn, using p or p1 … pn as premises. Given that the target persons

form their beliefs in accordance with rational standards, they will accept q as

soon as they understand that q can be demonstrated on the basis of their own

opinions. (Rapp)

In the enthymeme, the premise is never proved, yet the argument is adopted as though it were; if it were a cake, readers wouldn’t even be aware they’d eaten the first layer, which is most often composed of standards and conventions. In this way, enthymemes are like petit-fours, artfully layered, tiny cakes that are consumed in one bite, without exploring the various layers before ingestion. Here again, we see the importance of nomos in argument-making:

Consequently, the construction of enthymemes is primarily a matter of

deducing from accepted opinions (endoxa). Of course, it is also possible to use

premises that are not commonly accepted by themselves, but can be derived

from commonly accepted opinions; other premises are only accepted since the

speaker is held to be credible; still other enthymemes are built from signs…That

a deduction is made from accepted opinions—as opposed to deductions from

first and true sentences or principles—is the defining feature of dialectical

argumentation in the Aristotelian sense. (Rapp)

Essentially, the enthymeme is a pastry-like pastiche combining verbal and visual, an amalgamation of stated premise--provided by the rhetor--and the reader’s own innate knowledge and cultural assumptions. There is certainly a reason the Greek roots of enthymeme literally translate to ‘within the mind’: the argument is made there, by the reader, from elements already contained therein, rather than by the rhetor. All the rhetor has to do is provide the appropriate trail of breadcrumbs.

Enthymeme, however, is not the only rhetorical figure at work here. In many ways, tables, graphs, and diagrams also act as chiasmus, a closely related rhetorical figure (also a form of syllogism) in which an argument is stated and then reiterated in reverse order. Gross argues that this inverted parallelism is achieved by visual counterparts to verbal logic because it follows a basic A:B:B:A antimetabole (a subset of chiasmus) construction: this graphic validates this premise because this premise is validated by this graphic (Gross, “The Verbal and Visual in Science” 123). This ‘circular logic’ is especially true with labelled diagrams, which “train viewers to recognize a phenomenon patterned on the logic of the rhetorical figure chiasmus” as

inherently logical (123). Gross points out that we often fail to see this substitution because

images inherently possess so many layers of meaning: they are at once iconic (a reflection of the

thing itself), indexical (linked to associated phenomena), and symbolic (serving as a reflection of

a broad set of ideas) (123). Fahnestock, too, is quick to note that the other strategic benefit of

antimetabole derives from the “predictive power [implicit] in the invitation it offers readers to

participate in how the figure should be completed” (qtd in Gross 124). Just as with text, “The strength of this predictive power…comes across in the ease with which an antimetabole can sustain an ellipsis” (124); the more predictable the chiasmus, the stronger the argument.

Also powerful is the timeline: in Reading Images, Kress and van Leeuwen highlight how

temporal diagrams create instant, compelling narratives by transforming “phenomena into a

narrative of gradually unfolding stages” (95):

The persuasive force of this ‘story’ relies on the transitive argument

encapsulated in the entire graphic. The general form of a transitive argument—

A is related to B; B is related to C; therefore A is related to C—mirrors the

transitive relations of mathematics and logic (for example, x=y; y=z; therefore x=z). However, for rhetorical transitivity, the relationships can be more

y=z). However, for rhetorical transitivity, the relationships can be more

complicated than equivalence (=), comparison (for example, > or <), or

“contained by” relations. (97)

With this powerful tool, even causality and finality can be seamlessly implied and used as the

basis of chiasmus. Every figure transitions into another, generating a cohesive flow that seems

like a logical argument. Again, none of this is based on logical foundations having much bearing

on data analysis: these rhetorical powerhouses rely strictly on human visual psychology to

operate. And that, of course, is what makes them so tricky.

Many rhetoricians make a similar argument regarding how narratives and non-scientific

discourse standards shape visual individual issues: Fahnestock examines how the personal

narratives and sketches used by 16th century botanists became codified into botanical categorization and taxonomy standards (“Forming Plants in Words and Images”; 2). These rules were not based on any rigorous empirical standards; however, these observations were used as foundational texts for an entire scientific discipline in which certain characters are considered more important than others. Carol Reeves tells the story of how a discourse of commercialism in images of prions aided acceptance of theory regarding the histology and pathology of the prion responsible for Mad Cow Disease (“The Strange Case of the Prion”; 2011). Acceptance of the disease model came about in part due to language more suited to advertising strategy and proprietary discourse than science (263). Quite literally, the prion model of infection was advertised and sold to other scientists. In “Tricks, Hockey Sticks, and the Myth of Natural

Inscription” Lynda Walsh scrutinizes, in part, how internal disciplinary rhetoric regarding data norming (“hiding” outliers, using normative “tricks” to create trend lines, etc.) and ‘cherry picking’ data selection can shape visual images and their perception by the wider world (95). An analogous curiosity is how the narrative of the “Climategate” event can, based on the creation and scrutiny of this single controversial graphic, be held up as proof of ‘climate fraud’ and therefore utilized by climate change deniers to create enthymematic and categorical statements denying the existence of climate change. In “The Naturalistic Enthymeme and Visual Argument,”

Cara Finnegan examines how Depression-era photographers created exaggerated or normalized

narratives of poverty and drought, depending on the viewer’s regional perspective (121); a

potent reminder that even photographs used as scientific evidence are subjectively framed, and

therefore must be objectively questioned. Ultimately, each of these examples speaks to the

very real role that personal judgement and valuation play in science. Whether we are discussing

personal observations grandfathered into modern science, how persuasive commercial

strategies can be used to convince other scientists about the validity of a pathological model, or how individual scientists can act to remove certain data sets from their work, we can see how science is a human endeavor, and therefore subject to human error.

The trouble with this work, however, is that it draws from many disparate (though certainly intriguing and pertinent) sources. Many of the foundational long manuscripts of visual rhetoric draw from a motley assortment of scientific work, giving the field an almost anecdotal feel.

Thus, many of their examples are illuminating, but fail to give the impression that a single discipline can provide constant, reliable evidence for the presence of rhetorical elements in their tables, graphs, and figures. Few scholars have actually ‘stuck with’ a particular field long enough to anthropologically monitor recurring tactics, attitudes and framing techniques, as does Lynda

Walsh with regard to climate change science. A sea-change may be on the horizon, but inroads must first be made with regard to establishing replicable methods and techniques for reliably analyzing visualizations from all studies in a given discipline, rather than relying on a scattershot approach that highlights only those graphics in which fascinating rhetorical elements are particularly evident.

Regarding strides towards a reliable, replicable method: in order to understand how

‘objectively’-gathered scientific data could be improperly ‘subjectively’ interpreted, it is vital to

make use of stasis theory and, later, the Toulmin Method, both of which are drawn from rhetorical study. Because autism

science is conducted primarily by neurotypical individuals, autism science purports to describe

and define a neurotype with which it fundamentally cannot identify; neurotypicals cannot

embody autistic brains, nor vice versa. Thus, contemporary autism research is a profoundly

rhetorical construction. The simple use of stasis theory—coupled with a close examination of

each of the components that comprise an argument itself—could expose any long-held, fundamental flaws inherent in modern neuroscience and offer a straightforward but vital strategy for their future avoidance. Developed in antiquity by Aristotle and Hermagoras of Temnos (and later elaborated by Cicero), stasis theory is essentially a pre-writing process

(inventio) intended to aid the speaker in analyzing and responding to an existing argument.

When faced with an argument, one must ask four simple questions: An sit (is it/did something happen)?; Quid sit (what is it/what happened)?; Quale sit (what kind is it/what is its significance)?; and Quid actio (what action should be undertaken in response to this)? These four areas are also referred to, respectively, as the conjectural stasis (relating to questions of fact), the definitional stasis (relating to questions of definition), the qualitative stasis (relating to questions of quality), and the translative stasis (relating to questions of jurisdiction).

Did something happen? (Question of Fact; Conjectural stasis)
What happened? (Question of Definition; Definitional stasis)
What is the significance of what happened? (Question of Quality; Qualitative stasis)
What should be done regarding what happened? (Question of Jurisdiction; Translative stasis)

Table 2: Stasis Enquiry.


My objective is to use stasis theory to uncover any unintentional neurotypical bias inherent in modern cognitive theory, in the hope of potentially correcting and realigning the discipline toward practical, neurologically-inclusive ends. This bias could also be called neurotypical privilege, or neuroprivilege, as coined by Yergeau and Huebner. (In many ways, this study is all about encouraging people of all neurotypes to ‘check’ that privilege, but more on this later.)

Lawrence J. Prelli’s A Rhetoric of Science: Inventing Scientific Discourse (1989) is largely responsible for unearthing and refurbishing this critical tool in the rhetorical arsenal. From its origins in the teaching of Hermagoras and Aristotle, stasis theory was crystallized by Cicero as

Invention, three metaphysical questions designed originally to judge criminal action, but which may be applied to any discursive event. By asking An sit [Is it?], Quid sit [What is it?], and Quale sit [What type is it?] we can establish questions of fact (whether something happened), problems of definition (what happened and/or what was measured), and problems of kind

(what is the essential quality/nature of what happened), respectively. (There is a fourth stasis, but it deals more with sentencing than exposition.) The usefulness of this tool is that it enables us to verify the logic of each stage of an argument. In the case of a crime, the first stasis allows us to determine whether something actually happened; failing this stasis topples any criminal case. The second stasis asks us to determine what, exactly, did occur. If what happened is not what the prosecutor or victim describes, or the occurrence is not defined as a crime, the case again collapses. Finally, we examine the specific nature of what happened. What is the significance and outcome of the crime? What effects has it had on the victim and community at large? Only if the three stases of the trial are successfully identified can sentencing/judgment of the accused occur.

In order to pry out and dissect these enthymemes, which can otherwise ‘hide’ in our

unconscious reasoning subroutines, we can make use of the Toulmin Method. Created by British

philosopher Stephen Toulmin in The Uses of Argument (1958), the Toulmin Method is a means of establishing each of the necessary parts that make an argument tick. According to Toulmin, each successful argument is possessed of a claim, which can, when coupled with a warrant and grounds (evidence), lead to a successful conclusion. Of primary concern to us will be the warrant, which is the tacit underlying assumption necessary to link the claim and the evidence. In essence, this is the major premise in an argumentative enthymeme. Usually, this warrant is considered to be both elementary and obvious, and therefore remains unstated. However, picking out this ‘assumption’ affords us a paramount means of discovering how culturally-determinate thinking may have infiltrated empirically-grounded studies. As we shall see, these warrants also act as inherited and memetic nomoi; they are passed on almost unchanged from study to study. They are so ingrained as to have become custom-laws almost universally accepted by scientists within a given discipline.

Figure 1: Modified Toulmin Method.

For scientific arguments, we end up 1) uncovering the central warrant—the core belief that links the claim and the data—as well as the proxy (the logical stand-in, or substitute, that serves to validate the nature of the data) that makes the data back up the claim. Note that Toulmin did not utilize the concept of a data proxy in his work (he opted for the arguably-useless term

‘evidence’ when discussing data); however, because it counts as a premise in a scientific argument, I am using it in my modified Toulmin Model to indicate the specific, data-oriented counterpart of the traditional warrant, which is separate from the traditional conceptual/theoretical warrant. I also use it as the building block upon which a warrant must build; without a data proxy, there can be no warrant in a viable scientific argument. (For example, if I argue that Western Badgers using bricks to block gopher holes constitutes tool-using behavior, my warrant would be that bricks, used in this way, are tools. My proxy would be that observing brick use constituted observing tool use. Note that a warrant and proxy are always arguable; in this way, each is the same as an unstated premise in an enthymeme.) We can then 2) judge whether or not the overall argument is viable. Next, we will 3) establish how visual elements such as charts and graphs operating within journal articles act enthymematically—that is, without explicitly stating the premise that connects them—to reinforce any operating memetic (that is, meme-like—memes being equivalent blocks of information or ideas that serve as variations on a theme and are passed by imitative means between research generations

[much like an Internet meme]) warrants. Finally, we can 4) determine whether any of these textual warrants have themselves become memetic—again, passed down through imitative means largely without variation—in contemporary research. Detecting these nomotic memes begins with a stasis analysis, then combines with a modified Toulmin analysis. The two will work as a team to reveal both data proxies and research warrants inherent in each study I analyze.

Each will provide a sort of internal validation for the other, allowing me to ensure I am not making any off-base assumptions. Examining multiple studies will allow me to see if proxies and warrants are shared between studies, thereby making them nomoi. If multiple research generations share these nomoi, we can confidently claim they are memetic.

The power of the meme cannot and should not be ignored: though memes may manifest in apparently harmless or even risible ways (such as the ubiquitous Internet meme), it should never be forgotten that their power lies in their ability to manipulate via the human unconscious mind. They spread from mind to mind the way a virus spreads from host to host

(we say an Internet meme has gone ‘viral’ if it is successful, because it has spread everywhere the Internet is available on Planet Earth, practically overnight). Richard Dawkins, who championed a totalizing ‘gene’s-eye’ view of evolution in books like The Selfish Gene (originally published in 1976)—a philosophical perspective to which Stephen Jay Gould objected a bit, as it allowed less room for his beloved random contingency—saw memes as potential social weapons, claiming that “When you plant a fertile meme in my mind you literally parasitize my brain, turning it into a vehicle for the meme's propagation in just the way that a virus may parasitize the genetic mechanism of a host cell” (Kindle ed). Thus, we become unwitting containers for this content, spreading it when applicable to others:

Just as genes propagate themselves in the gene pool by leaping from body to

body via sperm or eggs, so memes propagate themselves in the meme pool by

leaping from brain to brain via a process which, in the broad sense, can be called

imitation. If a scientist hears, or reads about, a good idea, he passes it on to his

colleagues and students. He mentions it in his articles and his lectures. If the

idea catches on, it can be said to propagate itself, spreading from brain to brain

(Kindle ed)

The important takeaway here is that an individual’s brain has absolutely no say in whether or not a meme is introduced within or transmitted by it. The concept becomes somewhat ominous if we apply a bias, privilege, or unconscious perspective to the meme in question, especially when it comes to scientific theory. Like genes, Dawkins sees memes as being transmitted in and as a unit: thus, the potential exists for unconscious dissemination, whether through overt instruction or incidental familiarity, of a meme with certain tacit ‘attachments’; namely, bias,

privilege, and perspective. For better or worse, the ‘idea-as-virus’ is transmissible, and as a meme, may import and impart unwanted cultural baggage to scientific endeavor.

Take, for example, Morton’s 19th-century theory that white males are more intelligent than females or non-white males because they possess larger brains. The first stasis asks us to question whether the phenomenon is actually observed. Morton did possess massive data sets of thousands of skulls (the collection still exists at the University of Pennsylvania), and for most observed non-white races (with the notable exception of Asians), cranial volumes were indeed smaller than those of whites. (Technically, we might eliminate his theory here, given his necessary dismissal of a large subset of the human population. But we’ll play along to further illustrate stasis theory.) In the second stasis, we can ask whether or not these cranial volume differences translate to differences in brain size. We can confidently confirm that most observed non-white skulls held smaller brains on average than the observed white male sample. The third stasis, however, asks us to clarify the nature of the data: do the larger brains translate to greater intelligence? This is the central warrant of the entire argument and the lynchpin on which it hangs. Brain size is the proxy (stand-in) for intelligence; according to Morton, a larger brain means greater intelligence. So does it work? Obviously, the answer is no: intelligence is a rhetorical construct, and even by modern conventional standards of intelligence, people of all races, whatever their average brain size, perform equally on all conceived testing batteries. By modern standards,

Morton would be considered motivated by racist ideologies; admittedly, such leanings were harder to identify within his social context. But stasis theory could have thrown up a counterargument even back then, as in the 19th century there was no agreed-upon standard of intelligence, and scientists like Friedrich Tiedemann had used similar data to argue for

human equality (Mitchell 2018). But because Morton’s conclusions meshed with operating cultural memes validating the slave trade, his work seemed less ‘biased.’

Simply put, Morton’s work was accepted more on the perceived comprehensive nature

of his data and the cultural environment in which he presented his work than the essential nature of his claim. As Prelli points out, “Audiences of scientists judge scientific claims, not with reference to the canons of formal logic, but against received community problems, values, expectations, and interests” (7). These “judgmental standards are located within situated audiences’ frames of reference, not in logical rules that transcend specific situations for scientific claiming” (7). As Carroll Arnold points out in his preface to Prelli, “If scientists cannot justify their claims in positivistic terms, how then are those claims ‘proved’ to be scientifically reasonable?” (ix). The answer, of course, is rhetoric, utilized to back up the author’s instrumental/technological values, their own logical principles, and their own moral values (18),

each of which is largely based on the cultural environment in which they operate. Uncovering

these stases (literally, stopping or sticking points) will enable a rhetorician to locate these heavily interwoven (and therefore thoroughly disguised) biased leanings (58). The real trouble with Morton’s study, however, is that it both operated based on a memetic assumption that white European society was superior and worked to propagate the dictum for future eugenicists, who used normative white Euro-American society as their basis for a theory of

‘epitomized’ genetic potential.

In this study, we will assume it is unlikely that many modern scientists possess bigoted agendas when conducting their work. So how might scientific endeavor contain unintentional bias? According to rhetoricians like Prelli, scientists of any era can only operate successfully by

making themselves rhetorically persuasive within their prevailing discursive societies. After all,

all communication is rhetorical, and all rhetoric constitutes “the suasory use of symbols” (1).

Like all human endeavors, science must be persuasive to be successful. In order to be

persuasive, it must seem reasonable, as all scientific discourse “is accepted or rejected on the

grounds of its reasonableness” (7). But what is reasonable is certainly not always ‘objective’

(whatever that means): what is reasonable is based on common (shared) logic. Thus, a scientist

has no choice but to train and operate within that discursive framework of accepted

conventions, lest they appear unreasonable. As Prelli points out, this is essentially a ‘terministic

screen’: a set of cultural blinders that dictate the bounds of cultural operation. Like most

humans, the vast majority of scientists are not even aware of the metaphorical blinders they

operate behind. After all, all discourse must be framed from what is essentially a subjective

perspective. However, for reasons that will continue to become clear, any rhetorician trained in

sophistry is inherently wary of terms like ‘objective’ and ‘subjective’. And so, for this study, I will

use the term terministic to describe a perspective that comes from behind a set of cultural,

professional, or neuronormative blinders. A useful term here is neuroprivilege: the state of occupying the dominant, normate, and therefore both powerful and empowered majority. As with racial, social, or gender-based privilege, one occupying this position often unconsciously acts and speaks in subtle ways that undermine, negate, or otherwise disrespect those not occupying such a position. This does not imply any lack of intelligence or broad-mindedness on the part of those identified as having such a perspective; only that humans, as humans, cannot escape the fact that we all must operate from inside our own consciousness, and that consciousness is necessarily framed by beliefs, morals, and experiences that have nothing

whatsoever to do with pure empiricism—and that, as the adage goes, we must necessarily

‘check’ our privilege when it disenfranchises others. After all, as humans, our arguments are formed when information passes through the lens of our unique personhood. We can strive for impartiality—we can even do our best to codify it in stringent professional standards—but we can never, ever achieve true objectivity. Hence, I refuse to use a false binary like subjective/objective, as it is essentially useless. The concept of a terministic perspective is the only useful means of describing the bias and privilege inherent in all arguments.

As Kenneth Burke—the literary theorist who coined the term—points out in Language as Symbolic Action (1966), these screens are literally composed of words: they are complex woven filters “through which humans perceive the world, and that direct attention away from some interpretations and toward others” (2). In fact, all discussion necessarily involves the use of terminology, which is similarly formed through terministic discourse:

We must use terministic screens, since we can’t say anything without the use of

terms; whatever terms we use, they necessarily constitute a corresponding kind

of screen; and any such screen necessarily directs the attention to one field

rather than another. Within that field there can be different screens, each with

its own ways of directing the attention and shaping the range of observations

implicit in the given terminology. (50)

The use of language itself necessarily imparts the use of perspective; communication is impossible without some form of lens. After all, "any given terminology is a reflection of reality, by its very nature as a terminology it must be a selection of reality, and to this extent it must

function as a deflection of reality" (45). Like a beam of light, any reception of a communicator’s message is also filtered through the interpreter’s own lens, thereby rendering it an interpretation. Unfortunately, any “terministic orientation also imposes an internal logic on [the communicator’s] choices and on the structure and development of their presentation in consecutive discourse” (19). Effectively, like all humans, scientists can “not only ‘see’ within the terms of specific statements concerning what is at issue, [they] reason in accordance with the criteria for legitimacy that are implied by that way of seeing” (19). We can now see how well-meaning scientists can operate from behind a terministic screen: a subjective, deterministic filter based on their own unique life experiences, through which one differentially perceives and mentally constructs all natural phenomena. These terministic screens are always based on personal perspective, and are the basis for the premises on which scientific enthymemes operate. As premises, of course, they can be uncovered with stasis theory. I will demonstrate in the Methods section how these can be extracted, and how they can then be used to hunt for memetic nomoi.

But how does a terministic screen become nomotic? What determines whether or not a

personal attitude will be accepted as a widespread disciplinary perspective? According to S.

Scott Graham, the key lies in the cultural valuation society ascribes to an entity (such as a

disease) via its perceived evidentiary paradigm. In The Politics of Pain Medicine, he explores how

this “trope-shifting” creates a gradual change in how professionals discuss evidentiary entities like neuroimaging, until eventually the two become synonymous in the professional consciousness, making it nearly impossible for anyone within a discipline to critically distance themselves from the prevailing attitude (145). In the case of fibromyalgia, presence of an ‘entity’ in

a PET photograph ‘proved’ fibromyalgia existed because ‘something’ was there that should not

be; ‘something’ was wrong with FM patients because the photograph ‘showed it’. Medical

professionals began to ‘read’ PET scans in much the same way they read X-rays; the ‘undeniable’

visual evidence of malady provided by an X-ray became a metaphor for the ‘pathology’ evidenced by PET scanning. Similarly, in the case of autism, absence of activity in a particular

area on an fMRI image began to mean that something was deficient in the same way X-ray imaging could reveal the absence of a normative anatomical structure. Through metaphor, valuation and evidence become synonymous, creating Graham’s “warranting topos” (146), a self-evident formula that reflects a disciplinary perspective. The adoption and propagation of these warranting topoi work to calibrate the discipline toward a unified perspective regarding how to interpret this ‘evidence’, leading to an “ontological rarefaction” (147); in other words, changes in warranting topoi authorize and sanction arguments by transmuting them into

‘facts’ considered foundational to and necessarily ingrained within a discipline, without revealing how they came into being (151). Thus, neuroscientists unilaterally assume autistic difference equals deficit because a valued technology ‘shows them’ something is wrong by demonstrating that an entity (neural activation) is ‘missing’. They miss the critical step between evidence and interpretation because the warranting topos dictates that the visual ‘appearance’ of deficit is an ‘obvious’ fact; normalcy is a binary construct in which a blip can either be observed or not observed. Categorically, this is no different from assuming that cranial morphology reflects personality; like neuroimaging, phrenology assumes that certain features deterministically equate to character traits. They use the same warranting topos; the only difference is that head bumps and dents are no longer accepted as valid metaphoric ‘blips’. In

contemporary science, these warranting topoi usually take four forms: 1) a biomechanistic cause;

2) codified criteria; 3) visual evidence; and 4) an FDA-approved treatment (152). Any one of these can be sufficient to ‘authorize’ scientists to ignore premises. As with phrenology, then, rigorous training in ‘reading’ neuroimaging technologies trains neuroscientists to negate the premises supporting their visual/tactile enthymeme when certain officially-sanctioned conditions are met. Against a cultural background in which imaged ‘blips’ are the same as accepted facts, awareness of premise simply fades to grey. As we will see, the pathogenesis of autism is based on powerful discourse.

II. METHODS

a. TOOLS

i. LEXICAL

In this study, I will examine journal articles discussing modern autism studies sequentially, using three tools. I will use each to determine whether or not modern neuroscientific studies operate on the tacit assumption that a socially non-normative condition like autism equates to a defective and deficient condition. The first is stasis theory. The second is enthymeme. The third is a resultant observation regarding nomotic inheritance, possibly leading to memetic analysis. We will thus examine the background, argumentation, and visual rhetoric operating within each of the articles we will discuss in the Analysis section. With stasis theory, we will treat each textual and visual argument as an enthymeme, probing each tacit premise by uncovering the central warrant and the data proxy that validates it. We can then analyze the premises as

terministic screens, as they are arguable constructs based on personal orientation, rather than

data. From there, we can compare these screens to those located in other studies in order to

determine if the orientations are nomotic, which is to say customary and tacitly accepted by a larger group. If we locate nomoi, we can then compare them between research generations to see if the nomos has become memetic; that is, transmitted in a largely unchanged state through the years. At this point, we can connect all the threads of our argument to produce a single, cogent, straightforward approach with which to analyze a particular piece of scientific literature.

First, to review:

• Scientific arguments are enthymemes, and therefore contain arguable

premises;

• These premises take the form of research warrants and data proxies;

• Stasis Theory and Toulmin Analysis work as a team to reveal these

premises;

• These premises are generated based on a terministic screen/unconscious

personal orientation;

• Often, these premises are held in common with other groups of

researchers, have become customary, and are therefore nomotic;

• Because nomoi are transmissible in a largely unaltered form between

research generations, they are memetic.

To put it as a process:

• Stasis Theory and Toulmin Analysis extract proxies and warrants;

• Proxies and warrants reveal the terministic orientation of the

researcher;

• If other researchers share these orientations as warranting topoi, they

are nomotic (custom-laws);

• If these nomoi are passed between research generations, they are

memetic.

Our procedure, then, must be to:

• 1) Use Stasis Theory and Toulmin Analysis to extract warrants and proxies;

• 2) Use warrants and proxies to establish a terministic screen;

• 3) Establish whether that terministic screen is a nomos shared in common by multiple researchers in the form of a warranting topos;

• 4) Determine if that nomos is a meme passed down from generation to generation;

• 5) Proceed to analysis of each of the visual enthymemes in the article.

Figure 2: Procedure for determining nomotic inheritance.

Our first tool, stasis theory, will help us determine whether each study might be operating under the unconscious, tacit assumption that difference equates to deficit. We will use Prellian methodology to examine conjectural, definitional, and qualitative stases to unpack what premises might underlie claims regarding the mechanisms of autistic cognitive functionality.

First, we will ask if a phenomenon has actually been observed in the study. If so, we will question what, exactly, the study purports to demonstrate. In doing so, we will uncover the proxy by which the central argument of the study operates. What does the quantity, quality, or behavior being measured purport to demonstrate? What, in the scientists’ estimation, does the gathered data actually measure? At this point, we will question the logical underpinnings of this

proxy and question whether or not this logic is based in some form of unconscious neurotypical, neuronormative bias. Based on this finding, we can ask whether or not the findings of the study should be considered to rest on unbiased rationale. If unconscious terministic screens exist, they will be discussed in light of the study’s claims. In tandem, we will use Toulmin

Analysis of the article’s core enthymeme (in both its textual and visual incarnations, as these necessarily double one another in order to be persuasive) to uncover its warrant and proxy. Both methods reveal the core premises of an argument; it could certainly be argued that only one analysis is necessary in this process. However, I submit that employing both methods provides a useful form of internal verification. Thus, we will use both.

Recall that I am arguing that these premises reflect the unconscious orientation of the author; in order for a person to genuinely employ a premise, they must certainly believe that it is valid. For example, if I were to argue that the number of children a couple has is a valid gauge of their sexual and emotional compatibility, I would necessarily be betraying my warrant (which is [or would be, if I actually believed it] a personal belief based on my own personal values), that emotionally-compatible couples will always produce more children than non-compatible couples. The proxy here is that the number of children a couple has is positively correlated with their emotional and sexual compatibility in a statistically significant way. I could claim it is based on simple biological logic; modern couples who dislike each other probably won’t produce as many children as couples whose close bond promotes frequent procreative activity, after all. But the logic is obviously not ironclad, as it is based on my believing that highly deterministic biological principles can accurately be applied to something as complex as modern human mating strategy; clearly, there are multiple scientific and cultural counterarguments that can be

employed here. So this premise reflects a terministic, pseudo-scientific philosophical orientation, rather than any ‘objective truth’, and is tremendously arguable. As silly as this example is (unless you’re a Wilsonian sociobiologist, of course), it does demonstrate how enthymematic argument works, as well as how it can be dissected. In this study, if we can pinpoint any terministic screens operating within several studies, we will question whether or not they constitute a nomos: a customary tacit assumption accepted in common among neuroscientists over multiple studies. In order to demonstrate that a terministic screen has become a nomos, we will have to observe Graham’s warranting topoi in action. Justifying a terministic screen via one of Graham’s four topoi indicates a movement toward a nomos. Creating a biomechanistic cause (etiology), generating codified diagnostic criteria, demonstrating ‘objective’ visual evidence, or advancing an FDA-approved treatment may all constitute warranting topoi that help shift a terministic screen to a nomos.

Again, our core methodology in evaluating nomoi comes from Susan Jarratt’s Rereading the Sophists: Classical Rhetoric Refigured (1991). Like the Sophists, who defined nomoi as the unwritten codes, habits, customs, and conventions validating all social conventions, we can examine each nomos as “a process of articulating codes, consciously designed by groups of people” (42).

Although we are applying this work strictly to modern autism studies, our work, too, aims to

“call attention to the ways patterns of reasoning came to be accepted” as nomoi, which Jarratt rather tellingly translates as ‘custom-law’, specifically because the operant nomos of any discourse “determines the behavior and activities of things through convention” (53). Most importantly, examination of said conventions affords us the ability to “discover marginalized voices” and winnow out “the falsely naturalized logic of patriarchy” (75); in this case, the

‘patriarchal’ neurotypical views behind contemporary autism discourse. Only by locating sufficient static/proxy failures in common can we identify nomoi, the patterns of static failure inherent to human activity; the failures must be shared in order to qualify as nomoi. Only if we uncover repeated Ciceronian and Prellian failures between studies can we begin to identify a nomos.

If a nomos proves to be passed from one generation of research to the next, it is

memetic. If we can locate common nomoi, we can explore how each study might operate in a

cyclical fashion, functioning within and contributing to any underlying beliefs shared in common

between prevailing and preeminent theorists within the discipline by reinforcing warranting

topoi. Nomoi are important because they can cyclically create terministic screens in a memetic

feedback loop; however, we must conduct rhetorical analyses in the opposite direction to uncover how they operate. (No sense putting Descartes before the horse, after all.)

Here, we will designate a memetic study as one that contains nomoi inherited from a previous research generation. Because neuroscience, as a medical and scientific discipline, moves quickly, it is safe to say that the authors of a given study have likely read articles two or more years older than their own. It’s a bit arbitrary, but given the pace at which science publishing moves—and given the necessarily voracious research methods of neuroscience professionals—it is reasonably safe to assume junior research in a particular area is at least aware of trends in senior research, regardless of the ages of the authors. (After all, older researchers read the work of their juniors, as well as their peers; and vice versa.) So we will say that a study is memetic if it shares a nomos in common with a study that is at least two years older than it.

If and when we uncover nomoi in the text, we can also locate them in the graphics; scientific authors use tables, charts, images, and graphs to create visual enthymemes, which serve to reinforce their argument. Data visualization and fMRI images, because they mirror one another, necessarily contain similar nomoi—and by extension the same stases and proxy functionality—as their text (otherwise, they wouldn’t be visually enthymematic; if visuals didn’t reinforce the text, there would be no reason to include them at all). I will examine all visual data (including graphs/plots and tables) for these visually enthymematic elements, which do not explicitly express their connection to the text, but nevertheless reinforce it, thereby making the central warrant more convincing. Particularly telling will be experimental design/methodology visualizations; they convey in visual form the proxies and qualitative stases underpinning the experiment. If I uncover static and nomotic elements of note, I will show how they reinforce any driving central nomoi held in common by multiple autism studies, and by extension, the operant proxy and warrant. Basically, this is a fancy way of examining the premises that drive the enthymeme that is the scientific argument. In this way, we begin to see the core similarities operating in all arguments: each involves an argument based on premises, which may take the form of proxies and warrants in scientific reasoning, but are nonetheless still premises. Thus, for each graphic, we can ask ourselves what enthymeme reflects the warrant of a figure, and what enthymeme reflects the data proxy portrayed in the figure. We can further establish the premises behind each of those enthymemes. In the case of the warrant, the figure must be arguing that it accurately reflects the study (otherwise, it couldn’t be there). What premise allows this enthymeme to stand?
It must reflect one of the core premises of the text: that the figure itself provides a valid or invalid means of demonstrating a relationship, concept,

presence, or absence. The proxy enthymeme is, of course, directly related to the data. This

enthymeme must be that the data in the figure presented somehow validate or invalidate the

research hypothesis. This takes us back to the heart of the textual argument, as it mirrors or shatters the argument’s proxy. This may seem like a blankly redundant process; however, it is here that the researcher’s agenda is represented in its most raw form. The usefulness of a visual enthymeme analysis is that it allows us to reinforce or re-evaluate what we think we know about the textual argument, while de-complicating any possible obfuscation present in the text.

While we’re at it, we can also perform a few other rhetorical analyses. What type of figure has been used? Does this figure type have any associations or special strengths that boost its persuasiveness? Similarly, we can ask how the data has been presented with that figure: how do the layout and arrangement affect our comprehension of its contents? It is worth noting that the order in which we examine these elements parallels the construction procedure for that figure, allowing us to more thoroughly deconstruct it. In effect, we can ask ourselves four questions for each figure:

• What is the visual medium? What type of figure is this? Why was it chosen?

• What is the proxy enthymeme? What data-based premise makes the proxy

enthymeme run?

• What is the warrant enthymeme? What terministic premise makes the warrant

enthymeme run?

• What is the arrangement system? How do layout and placement within the figure

affect how it is interpreted and made persuasive?

Figure 3: Procedure for analyzing visual enthymemes and extracting their premises.

To illustrate a case in point involving a successful static analysis, I will use the late

evolutionary biologist Stephen Jay Gould’s basic Ciceronian methodology, as exemplified in his

exploration of Samuel George Morton, followed by Lawrence Prelli’s five-point static examination. In The Mismeasure of Man (1996), Gould identifies a fundamental

problem with our pervasive collective search for accurate metrics of neurotypical intelligence,

aka “the argument that intelligence can be meaningfully abstracted” via a “linear scale of

intrinsic and unalterable mental worth” (20): arguments about intelligence are always

consciously or unconsciously biased to favor the intelligence of the theorist. The quest for a

metric of intelligence is also a byproduct of our inherent human tendency to reify (“the

propensity to convert an abstract concept [like intelligence] into a hard entity”), to dichotomize

(“to parse complex and continuous reality into division by two [smart and stupid, black and

white]”, and to create hierarchies (via “our inclination to order items by ranking them in a linear series of increasing worth”) so as to regulate “our attitude to those judged inferior” (26). The desired outcome is a construct engendered by the same system it seeks to analyze; that is, we try to ‘prove’ that a particular group is inferior in order to justify disregard or mistreatment of said group. But even if we aren’t overt, conscious bigots or discriminators, true objectivity is impossible:

Impartiality (even if desirable) is unattainable by human beings with inevitable

backgrounds, needs, beliefs, and desires. It is dangerous for a scholar even to

imagine that he might attain complete neutrality, for then one stops being

vigilant about personal preferences and their influences—and then one truly

falls victim to the dictates of prejudice. (36)

For Gould, this argument is far more than academic: he was the father of an autistic son. He freely admits to a definite personal stake in the exploration of the biological causes of autism; if autism has an innate, biological basis, it cannot be the fault of too much or too little parental attention, as was once theorized by Leo Kanner (32). Though a respected biologist and statistician, Gould explains that “Objectivity must be defined as fair treatment of data, not absence of preference”

(36). For Gould, “No conceit could be worse than a belief in one’s own intrinsic objectivity, no prescription more suited to the exposure of fools” (36). Far worse, however, is a unilateral belief in the pure objectivity of science itself, as it is necessarily culturally and psychologically bound to the cultures and influential minds that invent, espouse, and practice it:

I criticize the myth that science itself is an objective enterprise, done properly only when scientists can shuck the constraints of their culture

and view the world as it really is…Rather, I believe that science must be

understood as a social phenomenon, a gutsy, human enterprise, not the work of

robots programmed to collect information….Science, since people must do it, is

a socially embedded activity. It progresses by hand, vision, and intuition. Much

of its change through time does not record a closer approach to absolute truth,

but the alteration of cultural contexts that influence it so strongly. Facts are not

pure and unsullied bits of information; culture also influences what we see, and

how we see it. (52)

Though it is tempting to put one’s faith entirely in numbers and statistical analyses, one must

realize that numerically-derived facts, too, are collected by humans, from data sets curated by

humans, from sample populations selected from and culled by humans. And those numbers

must always be filtered through human cognition, which is, after all, inherently terministic. How

the numbers are crunched is largely immaterial; one might recall Gould’s well-meaning but

faulty attempt to correct the numbers of Samuel George Morton, our old friend measuring

cranial volumes. Gould believed that Morton had unconsciously packed more lead shot into

white skulls, leading to faulty data. Unfortunately, other contemporary researchers did not

replicate Gould’s data; theirs more closely matched Morton’s. Mitchell (2018) reports that some researchers even accused Gould of fudging his numbers. But Gould’s own ‘corrective’

retroactive math and data-gathering proved inaccurate not because he was being deceptive or dishonest, but because he was using his own terministic perspective to measure and to crunch numbers, exactly as Morton had.

Ironically, Gould’s controversial misstep in attempting to appeal to scientists by

‘scientifically’ correcting the math of antiquated, racially-motivated science actually validated his rhetorical analysis: the purpose of numbers is indeed to “suggest, constrain, and refute; they do not [and cannot], by themselves, specify the content of scientific theories” (106). No matter who is creating them, “Numbers and graphs do not gain authority from increasing precision of measurement, sample size, or complexity in manipulation”, because ultimately, “Experimental designs may be flawed and not subject to correction by extended repetition” (114). Data sets— no matter how extensive—cannot by themselves in/validate a subjectively biased theory. To illustrate his point regarding the dangers of data reification, Gould references the craniometric work of Samuel George Morton, who, like most craniometrists, famously conflated brain volume with intelligence. Despite not altering his numbers, he still managed to reach wildly inaccurate conclusions regarding white racial ‘superiority’: in his dogged quest for ever more skulls to measure, Morton simply disregarded the excessively large cranial volumes of Caucasian females,

Mongolian and Chinese males, and criminal European skulls of both sexes as anomalous artifacts. When a sample of 12th century Parisian common grave skulls proved to possess higher average cranial volumes than those of an 18th century society cemetery sample, he dismissed them as belonging to con artists (who were certainly clever, but not ‘morally’ intelligent). On average, though, his European male skulls were larger than his non-white and female skulls; it was mostly his metric that was flawed. So technically, Morton did not falsify his data: his math wasn’t off, nor did he intentionally manipulate his data set; rather than ‘fudging numbers,’ he used circular reasoning to justify his data sets, “interpret[ing] his way around them to favored conclusions” (119). The kicker is that cranial volume, while roughly correlating with racially and

historically contingent technological advancement in 19th century society, has absolutely zero causative influence on intelligence. Morton’s facts “were reliable, but they were gathered selectively and then manipulated unconsciously in the service of prior conclusions. By this route, the conclusions achieved not only the blessings of science, but the prestige of numbers” (117). It is amusing to note that Broca’s area—an area of the brain currently believed to be responsible for speech production, named after one of Morton’s peers—is the subject of many contemporary theories involving autism; contemporary neuroscience seeks to locate technologies to ‘objectively’ prove that this area is variously deficient, smaller, less efficiently neurologically wired, or possessed of less high-functioning neurons in autistics, in exactly the same way that Morton sought to demonstrate that the frontal cortex was smaller and less developed in ‘primitive’ non-whites and females. But poor science cannot be permitted to hide behind data; we must collectively learn from Morton’s errors (and Gould’s) and exercise extreme caution when attempting to conflate or decouple individual brain area activity with overall ability. Certainly, and with large enough data sets, fMRI data may reveal different patterns of cerebral activation between neurotypes—just as Morton revealed different average brain volumes between races. However, this data does not by itself prove the existence of any qualitative differences (aka deficiencies). This is a problem with the premises underlying the enthymeme that difference and deficiency are the same. Neuroscientists would do well to ensure they have not simply replaced Morton’s birdshot and tape with fMRI machines and electroencephalography caps.

Gould uses Morton’s fallacy as a case study illustrating the principle that numbers cannot by themselves achieve scientific understanding; data, no matter how meticulously

gathered, does not intrinsically prove anything: it must always be filtered and interpreted by the

human mind. Conclusions based on data derive as much from assumptions about that data as from the data itself. As Gould is fond of pointing out, “my age, the population of Mexico, the price of

Swiss cheese, my pet turtle's weight, and the average distance between galaxies” (The

Mismeasure of Man 74) all possess a definite, demonstrable, and above all, replicable, positive

correlation. Of course, no sane researcher would suggest that Gould’s pet turtle’s burgeoning

weight was a causative factor in the rising price of fermented dairy during the late Nineties or

the rough volume of the expanding Universe (unless perhaps they were a diehard Terry

Pratchett fan). And yet, as we shall see, such bizarre causative/correlative conflations are

common in contemporary autism research; they are simply made to appear palatable via the

use of parallelism and various other commonly accepted rhetorical moves. For better or worse,

even a field as lofty as neuroscience is ultimately “rooted in creative interpretation” (106). It is

the potential for creative misinterpretation that necessitates scrutiny for culturally-derived

perspectives.

And these perspectives have consequences. Gould’s most resonant point remains that

numbers, however ethically or unethically gathered, can always be used by biological

determinists to justify everything from poverty and segregation to mandatory sterilization and

outright genocide. The status quo is relegated to an unfortunate—but immutable—fact of life,

bolstering everything from segregation to eugenics:

Why struggle and spend to raise the unboostable IQ [or social integration and

adjustment] of races or social classes [or divergent neurotypes] at the bottom of

the economic [or social] ladder; better simply to accept nature’s unfortunate

dictates and save a passel of federal [or institutional] funds. Why bother

yourself about underrepresentation of disadvantaged groups in your honored

and remunerative bailiwick if such absence records the diminished ability or

general immorality, biologically imposed, of most members of the rejected

group, and not the legacy or current reality of social prejudice? (28)

Craniometry, anthropometry, and Intelligence Quotient testing have all been used in the service

of the American eugenics movement. Gould pulls no punches in reminding us that most

prominent midcentury American universities featured thriving eugenics departments and

approved major coursework in what was then considered a socioscientific movement vital to

the advancement of the human race; only after the events of the Holocaust became common

knowledge did American universities ban eugenics studies (54). Nor can we afford to dismiss

the events of the last century as false steps issuing from a bigoted, tribalistic society: whether as

a means of dismissing and disenfranchising persons of a particular race, socioeconomic

background, or neurotype, “The general argument is always present, always available, always

published, always exploitable” (28). (The recent popularity of Theory of Mind [which quite

literally classifies autistics as non-self-aware homunculi] is proof enough.) The dangers of determinism hav