Denaturalizing Information Visualization

by

Gabriel Resch

A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy

Graduate Department of Faculty of Information

University of Toronto

Copyright © 2019 by Gabriel Resch. Released under the WTFPL license.

Abstract

Denaturalizing Information Visualization

Gabriel Resch

Doctor of Philosophy

Graduate Department of Faculty of Information

University of Toronto

2019

This dissertation investigates how, behind the recent proliferation of information visualization technologies, a number of opaque epistemic conventions structure distinct practices of data analysis, sensemaking, argumentation, and persuasion. The dissertation's primary claim is that contemporary infovis practices and tools naturalize specific representational forms and modes of interaction (e.g. ocularcentric objectivity) at the expense of others. It examines this naturalization process through a series of four interconnected case studies that consider, in turn, the translation of 2D images and graphics into tangible 3D objects; the development of two multimodal visualization projects; scale experiments in a virtual reality-based visualization environment; and an empirical study of 14 blind and visually impaired citizens interacting with tactile data objects. By surfacing specific naturalizations and attempting to denaturalize them, these four case studies illuminate a host of critical issues (e.g. a long-simmering tension between truth and aesthetic concerns) raised by the appearance of new interactive approaches to visualization. In doing so, they also demonstrate alternative visualization methods that might be deployed in service of a more critical infovis practice. As a result, the dissertation contributes to an emerging body of critical infovis research that spans domains ranging from HCI to critical data studies. Visualization media has become the de facto currency of data interpretation, and a rich and nuanced understanding of the epistemology of information visualization is absolutely necessary. Discussions around the challenge of multimodal representation, the tendency for new media to produce epistemic conflict, and the benefits of critical data literacy, all of which appear in the dissertation, will help colour in this picture of visualization.

What can be studied is always a relationship or an infinite regress of relationships. Never a "thing."

Gregory Bateson (2000, p. 246)

Acknowledgements

While this is original research, I can hardly take sole credit for many of the ideas that it documents. I stand on the shoulders of numerous giants, and there are so many people who have earned my endless gratitude that a truly exhaustive list would be impossible.

That said, I must first thank my family. My parents, Sharon Cory and Michael Resch, and brothers Simon and Manny - you have all set the bar so high. Judi Kibbe, for always reminding me of the finish line and providing support throughout the years. Watts, who made it through my first few months of the PhD like a proud mother, and Volta, my walking companion on many cold nights as I sketched out ideas.

All my family at Toronto BJJ, Jorge Britto, Josh Rapport, and especially Nathan Rector for putting your faith in me.

My committee members, Cosmin Munteanu and Costis Dallas, whose experienced insights will forever be appreciated. Michelle Murphy, my internal reader, who inspired my approach to critical research from the time I was a new doctoral student. Chris Salter, my external examiner, whose model of scholarship has given me something to aspire toward.

Cara Krmpotich, who helped me learn how to write for an academic audience, and has been supportive at various points along this journey. All the Inforum staff and librarians who have provided ideas and support throughout the years. Carol, Glen, Anna, Kathy, and all the other iSchool staff who have made so many things possible... and especially Christine Chan, the real central nervous system of the Faculty of Information. I owe a special debt of gratitude to Aida, Luis, and (especially) Sylvia, my companions on many late nights in Bissell. I wish you all the best in retirement. You have earned it tenfold!

To everyone who has taken part in or supported research I've carried out, including all those who participated in the InclusiVis project. eCampusOntario, Intel, and Autodesk for financial support. Donald Knuth for LaTeX, RMS for Emacs, and everyone else who has contributed to the free and open software I've used throughout my work.

To my many mentors, collaborators, and colleagues... Jim Slotta, Cheryl Madeira, and all of my Encore Lab family. Diego González, Sarah Choukah, Arno Verhoeven, Yanni Loukissas, Shayne Dahl, Margaux Smith, Chris Castle, and Sabrina Greupner for so many great ideas - I truly hope we will work together in the future. All of my Semaphore colleagues, who are far too many to list but hopefully know how much I appreciate everything they've given me (especially Amy, Isaac, and Adam). iSchool doctoral colleagues, particularly those who came before me and opened so many doors (Hannah, Quinn, Antonio, Ashley). Again, there are so many of you who deserve recognition... it feels unfair, but also necessary, to only highlight a few by name. Andy Keenan for jokes. Jenna Jacobson for keeping us respectable. Christie Oh for coffee talk.

And to my closest collaborators... Dan Southwick, my comrade in arms. A better colleague could never exist! Matt Ratto, for being the best supervisor someone like me could ask for! The gratitude I have for everything that you two have given me really cannot be expressed.

Finally, Alden and Yuri, for having to sacrifice. I only did this because of you two. Watching you grow up throughout this period has been far more rewarding than the completion of a dissertation could ever be. And, of course, Alanna, who courageously battled cancer while I was nearing the end of this. Any hardships I have faced pale in comparison to what you have been through. Without your support, none of this would ever have been possible. To you, I will always dedicate everything.

Contents

1 Visual Information
   Introduction: Information Anxiety
   Denaturalization
   Representation
   Proliferation of Infovis
   Statement of Thesis
   Naturalizations the Dissertation Surfaces
   Research Questions
   Methodological Approach
   Structure of the Dissertation
      Chapter 2
      Chapter 3
      Chapter 4
      Chapter 5
      Chapter 6
   Contribution and Advancing Scholarship
      Scholarly Contribution
      Public Contribution

2 Crises of Ocularcentrism
   Introduction: Ocularcentrism and Representation
   Entanglements of Representation
      Diffractive Reading
   Naturalization: Cartesian Ocularcentrism
   Case Study: Photographic Reliefs
      Project Description
      Naturalizations in 3D Translation
   Method: Transduction
   Conclusion

3 Visualization: Epistemic Technology
   Introduction: The Epistemology of Infovis
      Epistemic Objects
      Epistemic Technologies
   What is Infovis? Where does it Come From? What does it do?
      Historical Roots
      Modern Era
      What is a Graph?
      What Do Graphs Do?
   Naturalization: Aesthetics vs Truth
      Mathesis and Graphesis
      Grammar of Graphics
      Aesthetics|Truth Entanglement
   Case Study: Prototyping Multimodal Visualization Interfaces
      Visualizing Craft Expertise
      Self Tracking Narratives
      Connections
   Method: Critical Visualization
   Conclusion

4 Visualizing The Embodied Data Sublime
   Introduction: The Data Sublime
      The Sublime
      The Anti-Sublime
      Immersive Visualization
   Naturalization: Disembodiment and Scale
   Case Study: The Embodied Data Sublime
      Project Description
      Denaturalizing Scale
   Method: Estrangement
   Conclusion

5 Beyond the Visual
   Introduction: Tangible and Multisensory Visualization
      Multisensory Interaction
      Tactile Infovis and Data "Physicalization"
   Naturalization: Firewall Between Material and Digital
   Study: InclusiVis
      Summary of Study Design
      Study Participants
      Session Structure
      Civic Data Context
      Tactile Graphics
      Dissemination and Public Engagement
      Findings and Key Insights
   Method: (Re)Materialization
   Conclusion

6 Information, Not Representation
   Introduction: Critical Visualization as a Critical Data Practice
   Political Visualization
   The (Re)Emergence of Data Journalism
   A Reflexive Turn in Visualization Design?
   Visualization Fluency
   Radical Graphicacy and Technological Fluency
   Critical Information Practice
   Responsibility
   Conclusion

A Interview Guide

B Recruitment

C Informed Consent

List of Figures

1.1 Mann's original "hockey stick graph."

1.2 Al Gore with a variation of the hockey stick from "An Inconvenient Truth."

1.3 Hans Rosling in his famous TED talk.

2.1 The evidentiary process, according to Tufte.

2.2 One of the base images used in the case study.

2.3 Digitally sculpting a 3D model of the image to ensure suitable tactile properties would be contained in the physical rendering.

2.4 An early prototype of a piece that ended up in the exhibit.

2.5 An image of the touchable exhibit piece on display at the ROM in a panel that included braille, visual text, and photographic images.

2.6 Another of the base images used in the case study.

2.7 One of the base images used in the case study.

2.8 A diagram depicting how software and urban infrastructure are entangled, found on the website of the Programmable City research project at Maynooth University.

2.9 An untitled painting by Paul Emile Borduas in the Art Gallery of Ontario. Its online collection record is available here: http://ago.ca/collection/object/agoid.68399

2.10 Digitally sculpting the Borduas model.

2.11 Design experiments included cylinder seals that would roll out reliefs of images in soft, conductive materials designed for electronic prototyping engagements.

3.1 Top: A series of pie and slope charts from Playfair's "Statistical Breviary" depicting the extent, population, and revenue of the principal nations of Europe in 1804. Bottom: A series of flow maps from Minard depicting European raw cotton imports in 1858, 1864, and 1865.

3.2 Top: Snow's famous map of the 1854 Broad Street cholera outbreak. Bottom: Nightingale's "coxcomb" diagrams depicting causes of British mortalities in the Crimean War in 1855 and 1856.

3.3 A diagram of the graphical structure of a figure generated by Matplotlib. Source code for this diagram can be found here: https://matplotlib.org/gallery/showcase/anatomy

3.4 A 3D scatterplot spatial representation of an expert linoleum artist's hand movements.

3.5 A 1:1 scale 3D-printed data sculpture of the hand movements of an expert linoleum artist (represented by the cluster of orange points in the previous figure).

3.6 A section of the narrative describing a research subject, followed by a brief description of the data capture device's biometric measures, and an interactive time-series interface for studying a day's worth of heart rate data.

3.7 An interactive visualization I designed for comparing biometric measures from the caregiver and the person they were caring for. Brushing and zooming for context made it possible to pinpoint specific points during the day when physiological indicators marked moments of stress, which could then be cross-referenced with image data captured from a chest-worn camera.

4.1 Turner's "Snow Storm - Steam-Boat off a Harbour's Mouth," which is recognized as an exemplary depiction of the sublime in which "nature and human culture work across and against each other," according to Sarah Monks (2010).

4.2 A rendering of the 3DTF visualization platform.

4.3 A model of an internal network using what Smith described as a "Space Defense" metaphor. I would discover this only after constructing the VR sublime example that is profiled in this chapter's case study. The aesthetic parallels between the two environments, separated by many years, are striking. Had I encountered it earlier, I would surely have taken this as a design inspiration.

4.4 A contour plot representing a linocut artist's hand moving above a table.

4.5 A God's eye perspective of the 3D visualization.

4.6 Immersed in the landscape.

4.7 My colleague, Daniel Southwick, trying to make sense of the landscape.

5.1 Nathalie Miebach's "The Halloween Grace," a piece that translates weather and ocean data from the Perfect Storm of October 1991 and depicts the collision of two major weather fronts merging.

5.2 An initial data tile layout presented in the case study.

5.3 Data tile and tactile dashboard prototypes.

5.4 Early Jupyter-based visualization prototyping.

5.5 3D printed data maps depicting population change between 2011 and 2016 by city ward.

5.6 Mark comparing pedestrian volume counts at different King Street intersections.

5.7 Ron reading braille descriptive text and tracing graph outlines on prototype tactile dashboard layouts.

5.8 A research participant interacting with a haptic interface that would provide audio playback of associated data from text-to-speech files.

5.9 Barrie reading a braille legend on a cylindrical radial graph.

6.1 Minard's famed visualization of losses suffered by Napoleon's army in its winter march through Russia.

6.2 A spectrum of visualization roles from Meeks's essay on why people are leaving the field.

Chapter 1

Visual Information

Chapter Summary: This chapter introduces and provides a general overview of the topics, themes, and aims of the dissertation. It describes the dissertation's rationale, research questions, and methodological approach, presenting an outline of the chapters that will follow. It identifies the unique contribution this research makes to the field of information science, as well as to the general public. Importantly, it establishes the key idea that specific epistemic conditions are naturalized in the practices and technologies of information visualization and, in doing so, lays the groundwork for how the following chapters will attempt to denaturalize these conditions.

Introduction: Information Anxiety

There is heightened anxiety about the role visual information plays in contemporary social discourse. This anxiety manifests, in part, through concern about the proliferation and truthfulness of information graphics (Crawford, 2014). When deployed in research and data analysis contexts, information graphics ostensibly serve as cognitive aids that help their users make sense of numerical or abstract phenomena, formalize relationships between entities, and schematize theoretical developments. When prepared for public consumption, however, they more often serve a rhetorical function: to motivate or persuade their audience. This audience is largely incapable of knowing or understanding the mechanisms by which these media are produced, and anxiety about the role of visual information in contemporary social discourse is, in part, a symptom of this lack of knowledge.

Examples abound. Consider the former U.S. Vice President Al Gore, backdropped by an animated line graph, illustrating his definitive claim about the relationship between CO2 and temperature. This image, which was intended to foreclose debate about the data it presented, ended up obfuscating its claims. Whether the data was accurate or not, the use of the famous "hockey stick graph" - and the later controversy over its treatment by the UN's Intergovernmental Panel on Climate Change - sparked considerable debate.¹ It was contentious not only because of the potential consequences of political inaction on climate change, but because the public was forced to negotiate the validity of a data representation that it didn't have the scientific background to fully understand.

Recall the late Swedish physician Hans Rosling, in his most famous TED talk, gesticulating passionately in front of a lively bubble chart depicting global public health trends mapped to economic development.² Rosling was celebrated for clarifying correlative relationships, even while others took him to task for implying definitive causal ones. Influential Harvard cognitive psychologist Steven Pinker bolstered his own claims about the teleological progress of modernity with essentially the same charts interspliced throughout the text of his controversial recent book, "Enlightenment Now" (Pinker, 2018). Each of these public intellectuals has been praised for the clarity of their narrative, while also decried for the reductive simplicity of their claims.³ This reductive simplicity can be attributed directly to their use of information graphics at the foundation of their complex arguments.

Figure 1.1: Mann's original "hockey stick graph."

Figure 1.2: Al Gore with a variation of the hockey stick from "An Inconvenient Truth."

Figure 1.3: Hans Rosling in his famous TED talk.

¹ See Michael E. Mann's (2013) book, "The Hockey Stick and the Climate Wars," about the controversial graph, which turns up on page 65 of Gore's (2006) "Inconvenient Truth" book, as well as near the 20:00 minute mark in the Academy Award-winning documentary that was released alongside it.
² https://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen/up-next?referrer=playlist-the_best_hans_rosling_talks_yo

Another famous public intellectual, the late evolutionary biologist Stephen Jay Gould, warned of the general public's limited capacity for critical reflection in the face of those who stretch "truth with numbers."⁴ He cautioned that the only appropriate way to consider statistical relationships, whether in cancer research or environmental policy, was to take into account the full range of available information - the variation around the median, so to speak. This variation can be accounted for numerically, as well as through textual and visual evidence, but Gould's proposal is a challenging one. Information graphics are frequently taken as conclusive and final, and error margins, confidence intervals, and "noise" are often difficult to interpret (Correll and Gleicher, 2014). Gould was not opposed to the use of statistical graphics, though. He was worried about statistical claims that sacrifice nuance in order to be seen as definitive, and about statistics being used in misleading and deceptive ways. Building on his earlier warning, Gould later wrote that "when people learn no tools of judgment and merely follow their hopes, the seeds of political manipulation are sown."⁵

³ See, for example, Christian Berggren's challenge to Rosling's "biased sample of statistics" here: https://quillette.com/2018/11/16/the-one-sided-worldview-of-hans-rosling/. Max Roser provides a more positive take on Rosling's contributions here: https://blogs.bmj.com/bmj/2017/02/14/seeing-human-lives-in-spreadsheets-the-work-of-hans-rosling/. Jeremy Lent uses graphs to dismantle Pinker's claims: https://patternsofmeaning.com/2018/05/17/steven-pinkers-ideas-about-progress-are-fatally-flawed-these-eight-graphs-show-why/. David Wootton's review of the book, on the other hand, which focuses on its use of graphs, is less critical: https://www.the-tls.co.uk/articles/public/comfort-history-enlightenment-now/.
⁴ See his essay "The Median Isn't the Message": https://www.cancerguide.org/median_not_msg.html
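Gould's "variation around the median" can be made concrete with a toy numerical sketch. The survival times below are invented for illustration and are not drawn from any study cited in this dissertation; the point is simply that two cohorts can share a median while telling very different stories.

```python
import statistics

# Invented survival times (in months) for two hypothetical cohorts.
cohort_a = [6, 7, 8, 8, 9, 10, 11]      # tightly clustered around the median
cohort_b = [2, 3, 5, 8, 20, 60, 240]    # same median, long right-skewed tail

for name, cohort in [("A", cohort_a), ("B", cohort_b)]:
    print(
        f"Cohort {name}: median = {statistics.median(cohort)} months, "
        f"range = {min(cohort)}-{max(cohort)}, "
        f"stdev = {statistics.stdev(cohort):.1f}"
    )

# Both cohorts report a median of 8 months, yet cohort B's long tail means
# a member of that cohort may plausibly live for decades - the nuance that
# a single summary statistic, or a single bar on a chart, conceals.
```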

Gould's prescience is alarming. Today, we are witness to the proliferation of data-driven journalism and its ugly Janus twin, the trefoil of internet memes, graphical misrepresentation, and fake news. Big data-infused health applications drown us in streams of often meaningless personal biometric information. Every verb related to technology or political agency seems to now be prefaced with a data-centric adjective of some kind (e.g. data-driven civic engagement). Information graphics have shed the impoverished description that they are little more than cognitive aids for research or analysis - where the "real work" is done in tables and models.⁶ They are no longer taken as mere simplifications of more comprehensive models or theories. More and more often, they are meant to stand on their own. And with the widespread growth of the infographic medium, they are increasingly designed to move public audiences emotionally as well as politically (Kennedy and Hill, 2017). But who among the general public is fully capable of interpreting the various streams of data, navigating the myriad interfaces, or deciphering the visual metaphors that underlie both benign and persuasive information graphics? Who can re-create these images, or, better yet, reverse engineer how they are made? If the answer is few, then how might we go about educating the public to be able to do this?

⁵ From his essay "The Quack Detector": https://www.nybooks.com/articles/1982/02/04/the-quack-detector/
⁶ See Gelman's (2011) discussion of this subject, along with the responses that follow it.

Understanding the rhetorical power of information graphics has become a crucial skill of engaged citizenship. There is a prevailing condition, however, in which visualization technologies are used in ways that obscure the epistemic conditions they have been developed under. Visualization media is now widespread,⁷ and its evidence claims frequently go unchallenged due to an equally widespread lack of statistical and graphical literacy. Through various machinations, the graph has been naturalized as truth embodied in visual form (Drucker, 2014; Halpern, 2015). Few accessible methods for determining the veracity of information media are widely available. Furthermore, this form of media is open to criticism for the ease with which it can be made to misrepresent data. Axes can be easily truncated. Data gets misplaced. Representational choices indicate alignment with or within specific disciplines. The numbers hardly ever speak for themselves! Forget the fact that the fundamental ethos of information visualization - that it should reduce complexity - is both a vague and somewhat shallow mandate.

With this in mind, the stakes are high for developing appropriate, valid, engaging, and meaningful visualization media in a "post-truth" era of fake news, social media governance, and the purported death of expertise. Amid a shifting landscape of interactive digital media, there is a crucial need to build and deploy technologies that take into account the (often) deeply-held epistemic values that underlie their design and are frequently embedded in the representational systems we collectively share - to create visualizations, and visualization technologies, that "expose, rather than conceal, these principles of knowledge in the domains where the authority of information makes (still persistent and often pernicious) claims to 'truth' through the 'transparency' of the visualization" (Drucker, 2014, p. 135).

⁷ Consider, for example, the growing use of individualized smartphone-based data displays used for fitness tracking.

This dissertation investigates how, behind the proliferation of information visualization (infovis) technologies that both support and demarcate contemporary practices of data analysis, argumentation, and persuasion, a number of opaque epistemic conventions structure knowledge-making practices.⁸ It examines the epistemological foundation of various representational media used in infovis practice and, in the process, shines light on a number of critical issues raised by the emergence of new interactive infovis techniques. The dissertation's fundamental argument is constructed around the idea that infovis practices and tools naturalize specific representational forms and modes of interaction at the expense of others. It surfaces these naturalizations and attempts to denaturalize them through a series of case studies that illuminate how they have come to be accepted. In doing so, the dissertation demonstrates alternative approaches and methods that might be deployed in service of a more reflexive infovis practice. It also crucially re-thinks visualization's capacity to serve as a tool for open-ended exploration. Through these demonstrations, it ultimately presents a framework of practices intended to contribute to a growing scholarly community of critical infovis researchers and practitioners that spans domains ranging from human-computer interaction to critical data studies.

⁸ I will use the term infovis as a shorthand for information visualization throughout the dissertation. One might encounter various spellings or semantic pairings that change its meaning somewhat (e.g. InfoVis, InfoViz, datavis, etc.). For the most part, these terms are interchangeable - although not always interchangeable with more specific terms like scientific visualization (SciViz). InfoVis, the term the Institute of Electrical and Electronics Engineers (IEEE) uses, appears to be the most ubiquitous. See Nathan Yau's short piece on the politics of infovis names: https://flowingdata.com/2011/09/29/the-many-words-for-visualization/. In it, he refers to infovis as "research of information visualization that people talk about at InfoVis (an annual conference that most visualization researchers go to)."

Denaturalization

Inspired by information historian and visual theorist Johanna Drucker, the primary aim of this dissertation is to demonstrate methods for denaturalizing graphical visualizations we encounter frequently - images that structure the "increasingly familiar interface that has become so habitual in daily use" (Drucker, 2014, p. 9). The field of science and technology studies (STS), a cognate discipline of information,⁹ has had a long intellectual concern with naturalization, especially vis-a-vis scientific practices that result in the formalization of epistemic virtues. Despite this, no single common definition of "naturalization" runs through the STS literature or its various offshoots that overlap with the broad domain of information (including sub-domains like sociology of knowledge or the social construction of technology). The term commonly refers to processes that result in actors (scientists; computer users; the general public) assuming that prevailing conditions are natural, part of some grand teleology, and not caused by social, structural, or institutional design.

When we're talking about how new technologies become accepted, naturalization occurs as a kind of "settling in as a component of modern socio-political activity" or a stabilization of a technology or media as a sociocultural phenomenon that includes both the technological artifact and the language used to describe it (Gillespie, 2006). This generally entails ongoing processes of negotiation which, in retrospect, are shaped by patterns of what Pinch and Bijker (1984) refer to as "interpretive flexibility," even if this does not seem to be the case while the technological artifacts are taking shape. "Every technology is shaped by a process of social definition, in which those invested in it struggle not only to implement the technology, but also to narrate what it is and what it is for" (Gillespie, 2006, p. 428). Visualization technologies, from the Python library matplotlib to business intelligence dashboards, are exemplars of this process.

⁹ Throughout this dissertation, I will use the more encompassing information, rather than information science or information studies. While I recognize that significant attention has been given to demarcating the concerns of these interrelated fields, different aspects of my work belong to each of them.

What are the intellectual moorings of naturalization? Scientific reason is an important foundation. Sheila Jasanoff writes that "reason is a great naturalizer" when it comes to understanding scientific and technological claims. The iterative method of reasoning that Descartes first articulated - accepting only that which is clear in one's own mind; splitting large difficulties into smaller ones; arguing from the simple to the complex; and validating results - underlies naturalization.¹⁰ Jasanoff states that "once we are persuaded of the reasonableness of an argument or action, it becomes the most natural thing in the world to accept it: of course, this is how things are; of course, this is how things should be" (Jasanoff, 2012, p. 6). Hers is a socio-cultural argument for how the technologies of knowledge construction obscure their modus operandi behind a veneer of reason that, in effect, operates as one side of a Cartesian dialectic with truth. Jasanoff recommends that, as an alternative to naturalization, we look at phenomena as if "through the eyes of visitors from other worlds" - a process of making the "familiar strange," something that humanities scholars might recognize as parallel to estrangement.¹¹ Jasanoff's approach guides the methods for denaturalization that are described throughout this dissertation.

Reflexive techniques that include comparison in Jasanoff's words, negotiation in Gillespie's, and interpretive flexibility in Pinch and Bijker's counteract the myth of universal objectivity - the "view from nowhere" famously described by Haraway (1988) as arising "from simplicity," a "god-trick of seeing everything." The epistemic condition of universal objectivity privileges data at the expense of researcher expertise, resists a plurality of perspectives, and suffuses much contemporary literature on big data analysis with an unmistakably neo-positivist orientation (Crawford, 2014; Jurgenson, 2014). It is a view that no analyst can ever fully achieve in practice, according to Jasanoff. Denaturalization, however, enables "actual 'somewheres' to be brought into productive contrast, revealing patterns and persistences that might otherwise remain unperceived" (Jasanoff, 2012, p. 7).

¹⁰ For more on this, see the opening chapter in Davis and Hersh (2005).
¹¹ For more on these concepts, see Shklovsky (2015) and Boym (2005). Spiegel (2008) provides a thorough overview of how the terms estrangement, defamiliarization, and alienation have come to be both similar and distinct.

In the context of this dissertation, naturalization is not meant to be taken as a direct synonym or stand-in for 'embedded assumptions' or 'epistemic biases,' although it is related to each of these concepts. It should also be noted that the meaning of naturalization I'll use throughout this dissertation departs slightly from how philosophers might use the term (in relation to processes of reification and essentialism). And while significant work has laid a foundation for examining the different kinds of biases that are embedded in computer systems,¹² my aim is not merely to illuminate biases, but to identify processes that are both frequently taken for granted, and that produce or reinforce general epistemic conditions like ocularcentric objectivity. What are these processes? What are the representational forms and modes of interaction that are naturalized in contemporary information visualization practice? There are many, and this dissertation does not exhaustively catalogue them. My goal, however, is to highlight specific ones that have been surfaced throughout my research, and then to show how these naturalizations have become embedded in visualization technology and media. In this, I suggest that a phenomenon of 'double naturalization' is under way in which both representations and the technologies of representation are encountered as naturalized phenomena.¹³ Denaturalization, then, is an approach that works to undo these phenomena.

¹² See, for example, Friedman and Nissenbaum (1996, p. 336), in which the authors write "emergent bias arises only in a context of use. This bias typically emerges some time after a design is completed, as a result of changing societal knowledge, population, or cultural values. ... User interfaces are likely to be particularly prone to emergent bias because interfaces by design seek to reflect the capacities, character, and habits of prospective users. Thus, a shift in context of use may well create difficulties for a new set of users."
¹³ For more on this concept, see Neven (2011, p. 182), who documents how, in designing technologies for senior citizens, ageist language often influences representations of older users. This tendency, in turn, leads to a naturalization of the idea that specific technologies are appropriate for older users - an idea that is then incorporated into the designs of new technologies. I borrow this term to recognize how certain representations of information are seen as appropriate for specific contexts. This, in turn, influences how specific infovis technologies are considered appropriate for specific contexts. An example of this would be time series representations for financial data that are embedded as default options in spreadsheet application software (which is, in turn, naturalized as "appropriate" for financial analysis).

Representation

As a theoretical concern, infovis is frequently encountered as a subset of the extensive literature on representation, an area of scholarly interest that dates back centuries (in philosophy) but has had an explosion of more recent interest (in STS and media studies, among other disciplines).¹⁴ Visual representation mediates virtually every knowledge practice. It entails the use of photographs, abstract images, physical objects, sketches, animated videos, engineering diagrams, and data graphics in the interpretation, display, and demonstration of evidence. Many of these examples operate as epistemic objects that, according to Rheinberger (1997), are defined by their lack of completeness of being and, following Knorr Cetina (2008), are always in a process of being materially defined. They are characterized by practices that treat representation as "always multiple," according to Kress and Leeuwen (2006). Other representational forms that do not adhere to exclusively visual modes - audio and tactile objects, for example - are occasionally deployed in evidence-making practices, but the visual dominates, a residue of epistemological conventions that avow ocularcentric objectivity as a paramount virtue.

Contemporary digital media technologies, however, introduce approaches to interaction with information that are fluid and dynamic. These technologies afford representational strategies that are increasingly capable of migrating across material and digital boundaries, temporal registers, and sensory modes. At the same time, they engender various contexts in which epistemic claims about the value of standardized representational norms become unsettled. While representation will be covered in greater detail in the next chapter, the dissertation's broad interest is in the depth of representation, rather than its shape - meaning, it does not focus too much on big-R Representation issues (e.g. how scientific representation structures knowledge). The primary focus here is on the impact of specific representational forms, technologies, and media.

¹⁴ See Coopmans et al. (2014) for the most thorough treatment of representation in STS.

If infovis has become an important area for scholars interested in representation and representational media, beyond merely being a subject of technical interest, what are the topics of their concern? Statistical graphics have long been a central feature of scientific knowledge practices. From Florence Nightingale's coxcomb diagrams, which summarized Crimean War mortality rates and were used to make compelling arguments to policy makers about the conditions of medical care, to the numerous time series plots that depict a correlation between global CO2 and temperature over varying temporal scales, statistical graphics are frequently a fulcrum for the truth claims of scientific evidence. Various disciplines spanning the natural sciences, social sciences, and (increasingly) the humanities rely on legible visualizations of data in their methods of argumentation and evidence-making. This, in turn, relies on the promotion of graphical literacy in fields that adopt these media practices.

While different visual grammars have developed since the golden age of statistical graphic production, contemporary practitioners are often torn between the rules, style, and argumentation of positivist statistical and computer sciences, on the one hand, represented by the vanguard of statistical graphics - Ben Shneiderman, Edward Tufte, and Leland Wilkinson - and the aesthetic creativity of the emerging information design and visualization community on the other, represented by "beautiful data" artists like Jer Thorp, Ben Rubin, and David McCandless (Gelman and Unwin, 2013). In particular, an explicit "grammar of graphics," theorized by statistician Leland Wilkinson, has held considerable influence in the stats and computer science corners of infovis since first being published in 1999. This set of laws or graphical formalisms "takes us beyond a limited set of charts (words) to an almost unlimited world of graphical forms (statements)" according to Wilkinson, and features design conventions that are predominantly mathematical in origin (Wilkinson, 2006). While much ink has been spilled in the technical literature about the importance of Wilkinson's work, it has received little attention in fields like STS, even as scientific visualization, imaging technologies, and representational practices have long been concerns there. STS and media studies scholars discussing representation appear (at the moment, anyway) preoccupied with issues of data access and power, visualization literacy, and illuminating the black boxes of machine learning. This dissertation hopes to challenge this tendency by offering a comprehensive look at how many of the representational conventions that people like Wilkinson promote come to dominate - and restrict - modern infovis practices.
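For readers unfamiliar with Wilkinson's system, a minimal sketch may make the grammar concrete. In a grammar-of-graphics library, a chart is not chosen from a fixed menu of chart types but composed from independent statements: data, aesthetic mappings, geometries, and scales. The sketch below uses plotnine, a Python implementation of this grammar descended from ggplot2; the data frame and its column names are hypothetical, invented for illustration.

```python
import pandas as pd
from plotnine import ggplot, aes, geom_point, scale_x_log10, labs

# Hypothetical country-level data in the style of Rosling's bubble charts.
df = pd.DataFrame({
    "gdp_per_capita": [1_000, 4_000, 16_000, 64_000],
    "life_expectancy": [55, 65, 74, 81],
    "population": [5e6, 2e7, 8e7, 3e8],
})

# A grammar-of-graphics statement: data + aesthetic mappings + a geometry
# + a scale transformation, composed with `+` rather than selected from a
# fixed list of chart types.
chart = (
    ggplot(df, aes(x="gdp_per_capita", y="life_expectancy", size="population"))
    + geom_point(alpha=0.6)
    + scale_x_log10()
    + labs(x="GDP per capita (log scale)", y="Life expectancy")
)
chart.draw()  # renders the figure via matplotlib
```

The demonstration is epistemologically telling: every `+` appends a formal statement of the kind Wilkinson's grammar legitimates, and choices that feel like neutral defaults (the log scale, the point geometry) are exactly the conventions this dissertation seeks to denaturalize.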

Proliferation of Infovis

The visual representation of information plays an increasingly critical role in every situation where data and quantitative information need to be translated into more digestible stories, both for the general public and for professionals who need to make sense out of numbers. - Giorgia Lupi¹⁵

Visual representation is no longer merely a tool for scientific inquiry and evidence making. We have arrived at a point where passive, everyday data collection, interactive computing, and embodied, immersive visualization technologies intersect regularly. The emergence of infovis, as both an outgrowth of and a divergence from its progenitor field, statistical graphics, is a recent phenomenon,¹⁶ even though static modes of graphical representation that can be traced to mid-twentieth century financial magazines and graphic design manuals have a lingering influence. Despite this, recent trends in computer graphics, social media, and data-driven life¹⁷ have fundamentally altered the field's trajectory, resulting in an exponential growth, since approximately 2010, of information graphic development that is responsive, dynamic, and open-ended.

¹⁵ https://medium.com/accurat-studio/sketching-with-data-opens-the-mind-s-eye-92d78554565
¹⁶ See Gelman and Unwin (2013).

These trends have gone hand-in-hand with what some scholars are beginning to call an algorithmic turn in social dynamics. All of a sudden, there is not only a widespread public interest in data, but in its visualization as well. This is manifested in everything from the emergence of data journalism, with so-called old media like the New York Times investing heavily in the development of data products, to what Shannon Mattern (2015) has recently referred to as an impending "age of dashboard governance." Information, and the evidence claims scaffolded on it, is understood in these contexts as something that acquires its weight through processes of visual representation. Understanding how information has always been visual requires a kind of inversion, one that enables thinking beyond a specific type of media (like information graphics) and toward various ways of making and expressing knowledge. We can begin this inversion process by deconstructing the various genealogies of visual information, the bedrock of visual epistemology. Visual epistemology is a subject of interest in various fields, including engineering, architecture, and statistics, but it has "failed to become a separate field among academic disciplines" according to Drucker (2014, p. 18). Recent trends, including interest in 'visual STS,' appear to indicate this might be changing.¹⁸ Drucker's focus is the study of visual methods - infovis, specifically - in humanities disciplines, and doesn't completely account for the extensive histories of imaging practices that have been traced by scholars in history and philosophy of science, science studies, STS, etc. These are histories that span analog and digital contexts.

¹⁷ See Wolf (2010).
¹⁸ This is the subject of a chapter by Peter Galison in Coopmans et al. (2014).

Visual methods have always been grounded in material practices, though. Daston and Galison (2007) have given us the most exhaustive history of how the natural sciences have naturalized objectivity through representational practices, extending from the preparation of visual artifacts to the establishment of epistemic cultures that rely on the instrumentalization, operationalization, and eventual standardization of these artifacts in order to make evidence claims. We find the same routine, day-to-day practices that support representational objectivity in science, from the retouching of photographs to the selection and curation of data, in adjacent epistemic contexts (including, for example, museum preparation and exhibit design). Daston and Galison have described, through various examples in their individual and collaborative works, how visual artifacts are accessioned into the material-cultural record, ending up in museum exhibits and display cases. Such objects engage with diverse representational apparatus, taking on new lives when they are carved out from the positivist realms of (supposedly) raw data in which they were birthed. This kind of representational fluidity is a core feature of multimodal representation and interaction, and is at the foundation of contemporary information visualization practices. It is not only the emergence of novel computational techniques that encourages this fluidity, however; attendant shifts in the epistemic contexts of knowledge production and meaning making appear to also be a significant factor. As a consequence, a range of new media technologies that veer across material and digital boundaries, trouble distinctions between 2D and 3D representation, engage the senses through multiple channels, and destabilize institutions and the truth claims they are built on, requires our greater attention.

Naturalization is a structuring force in the development of visualization technologies. It ensures particular epistemological conditions triumph at the expense of others. Consider the following claim by Drucker:

"Most, if not all, of the visualizations adopted by humanists, such as GIS mapping, graphs, and charts, were developed in other disciplines. These graphical tools are a kind of intellectual Trojan horse, a vehicle through which assumptions about what constitutes information swarm with potent force. These assumptions are cloaked in a rhetoric taken wholesale from the techniques of the empirical sciences that conceals their epistemological biases under a guise of familiarity. So naturalized are the maps and bar charts generated from spreadsheets that they pass as unquestioned representations of 'what is.' This is the hallmark of realist models of knowledge and needs to be subjected to a radical critique to return the humanistic tenets of constructedness and interpretation to the fore. Realist approaches depend above all upon an idea that phenomena are observer-independent and can be characterized as data. Data pass themselves off as mere descriptions of a priori conditions. Rendering observation (the act of creating a statistical, empirical, or subjective account or image) as if it were the same as the phenomena observed collapses the critical distance between the phenomenal world and its interpretation, undoing the concept of interpretation on which humanistic knowledge production is based. We know this. But we seem ready and eager to suspend critical judgment in a rush to visualization." (Drucker, 2014, pp. 125-126)

Recent interest in subjects like "humanities visualization" is confounded by a positivist orientation lifted from the natural sciences and ported wholesale into interpretivist domains. Data-driven knowledge practices are currently fashionable in numerous fields that typically resist such an orientation, and visualization has become their de facto medium of data sensemaking and analysis. This has happened, for the most part, without careful consideration of how the application of a visual epistemology will alter these domains' knowledge practices. As a consequence, the concealment of the various epistemic biases and conditions that Drucker describes, "under a guise of familiarity," will continue unabated in visualization systems, regardless of the interpretivist origins of the fields to which they are applied.

Statement of Thesis

Has infovis ever not been a primarily visual phenomenon? What epistemic commitments are expressed by assuming that it has always belonged to the realm of the ocular and observational? When scholarship in the fields of information, science and technology studies, (new/digital) media studies, human-computer interaction, design, and philosophy of science addresses graphical data representation, it often treats it as a visual, screen-based phenomenon, describing aspects of it through the language of user(s) and interface. This scholarship reifies modes of representation through their computational outputs - usually new elements - but often fails to consider how new forms of interaction (e.g. immersive, tangible, and multisensory) do not always adhere to the ocularcentric principles upon which the former are constructed. While this work has offered numerous insightful examples that consider the importance of visual representation to research spanning a wide range of domains, it lacks a coherent framework of materially-grounded theoretical principles for designing mediated data representations in the age of immersive (e.g. virtual reality), tangible (e.g. 3D printed tactiles), and multisensory media (e.g. affective computing). This dissertation builds toward such a framework for critical scholarship interested in these themes.

Tracing a genealogy of contemporary infovis as it moves off the screen, this dissertation presents a critical analysis of ocularcentric visual epistemology as it offers a novel methodological framework for denaturalizing visualization media. The aesthetics of information visualization developed through a long history of scientific image production are undergoing a profound shift that alters how this media is used to make truth claims. Throughout the dissertation, I map transformations in the relationship between representation and interaction design to theoretical insights from work in fields ranging from visual studies to philosophy of science, investigating how this relationship is being reformulated with respect to specific new media practices. In doing so, I present a novel analysis of affiliative epistemic objects that move across material and digital boundaries in a host of emerging representational contexts.¹⁹

The aims of the dissertation are twofold. First, I hope to extend recent scholarly work around multimodal representation²⁰ to consider how the semiotics of non-screen-based and immersive modes of interaction translate between each other - as well as back into traditional screen-based modes. Second, I hope to unwind the knot of interaction, visualization, representation, and meaning making that binds disparate fields ranging from visual studies to HCI in order to demonstrate the relevance of infovis to a wide array of topical issues.

Digital technologies are driving significant changes to how we determine what constitutes truth, objectivity, and validation. The motivation for this research came as a direct consequence of working on a host of projects in what originally seemed like disparate contexts: art-historical and cultural heritage representation, on the one hand, and screen-based data representation on the other. There is scholarly precedent for recognizing the crossover between cultural and scientific representation, but there has yet to be original research (to my knowledge) that acknowledges similarities between the two with respect to how they simultaneously handle issues of truth and validity in an age of material-digital data fluidity. Through the various case studies that will be described throughout the dissertation, I was able to experiment with novel methods of visualization that cross back-and-forth between these domains.

¹⁹ See Suchman (2005).
²⁰ See, for example, Alač (2011).

A significant obstacle impeding any work that seeks to question graphical orthodoxy can be found in the ubiquity of dualistic tropes that underlie many computational metaphors (and, in effect, shape media design and create artificial distance between different modes of representation and interaction). An aesthetics of the data graphic is a moving target, with various graphical legacies it must be held accountable to, most of which can be traced back to Enlightenment-era Cartesian separations between mind and body. Contemporary distinctions between material/digital or subject/object are often framed in dialectical terms or binary distinctions, even as contemporary media place these phenomena on shifting ontological sands. While distinction is not inherently wrong, the neat compartmentalization of infovis technologies and approaches into categories like digital, material, or embodied makes it difficult for new methods that cross or blur these boundaries to gain traction. These dualisms are deployed in the knowledge-making practices of assorted milieus, from cultural heritage interpretation to methodological demonstrations in the sciences. Visual representation is a domain where we encounter the challenge of binary distinction most readily, and it is here where we might begin to program alternative strategies that recognize the complexity of the relationships these dualisms describe.

Inspired by Drucker (2014, p. 17), this dissertation argues that there is an urgency to develop "critical languages for the graphics that predominate in the networked environment: information graphics, interface, and other schematic formats." It suggests that, in order to develop a critical vocabulary for representations of information - one that takes seriously multimodality in the varying stages of representation and interaction - we must learn to read visualization through, following Haraway (1997) and Barad (2007), a diffractive lens. This approach asks how objects, tools, and texts influence each other and, importantly, seeks to illuminate what gets excluded in representational practices. It enables recognition of the multiple relationships that undergird contemporary visual representation not as binary choices, but as entangled phenomena. An outline of "diffractive reading," along with a thorough account of entanglement as a central theoretical concept of the dissertation, will be taken up in the following chapter.

Naturalizations the Dissertation Surfaces

What are the epistemic conditions that are naturalized by visualization technologies and practices? Breaking from methodological traditions that treat deconstruction as an end in itself, this dissertation is not only concerned with surfacing and illuminating various epistemic conditions that are naturalized in infovis practices, but with demonstrating methods for denaturalizing them as well. Each case study that will be described involves a three-step process in which 1) specific naturalized epistemic conditions are surfaced; 2) critical making-inspired methods are employed to denaturalize these epistemic conditions; and 3) the specific value propositions associated with these epistemic conditions are examined in order to propose new approaches and methods.

To this end, epistemic conditions that the dissertation surfaces and denaturalizes include:

• that we can describe core phenomenal relationships of information visualization (e.g. subject|object) as separate from one another; that, following an ocularcentric Cartesian legacy, we can make sense of information primarily through visual representation and perception; that visual information can be readily translated across different sensory modalities (chapter 2).

• that aesthetics and truth are mutually exclusive; that there is an underlying grammar of visualization to which all infovis practices should adhere (chapter 3).

• that specific scale relations in data analysis environments are appropriate while others are not; that data visualization is an "anti-sublime" endeavour; that the body should be removed from the site of interpretation (chapter 4).

• that material and digital are entirely separate and distinct spheres; that there are specific visual semiotic conventions, formats, and interaction modalities imported from visual media which should underwrite the development of multisensory visualization (chapter 5).

Research Questions

This dissertation is guided by four interconnected research questions to undertake this process of illumination and denaturalization. They are: (RQ1) What epistemic conditions are "concealed under a guise of familiarity" (Drucker, 2014) in infovis? (RQ2) How can material engagement surface and reveal them? (RQ3) How can critical making (Ratto, 2011b) inform the development of alternative visualization methods? (RQ4) How might these alternative methods help shape critical approaches to visualization in emerging fields like critical data studies, as well as long-standing fields like HCI?

These over-arching questions are supplemented by specific ones that speak to the themes of each chapter.

• Chapter 2 engages most directly with RQ1 by addressing the condition of ocularcentrism. It asks: What happens to knowledge objects and modes of interaction when historically visual domains are unsettled by non-ocularcentric practices? What are the epistemological consequences of denaturalizing visual information using modes of interaction that are non-visual (or, at least, less visual)?

• Chapter 3 also engages with RQ1 by outlining the epistemology of contemporary infovis. Through the specific case studies it describes, it engages with RQ2 by asking (following Johanna Drucker (2014)): how do we begin to "de-naturalize the increasingly familiar" graphical visualizations that we encounter in contemporary data engagement?

• Chapter 4 takes up RQ3 in its description of a critical making-informed approach to embodied visualization. In doing so, it asks: What would happen if we disrupt the scale relations that conventional data visualization enforces? What happens when the body is re-inscribed into the site of data interpretation? Is data visualization an "anti-sublime" medium?

• Chapter 5 asks: How do the digital affordances of infovis differ from material ones? How does the material world resist representational practices that have been crafted for digital, screen-based interaction? How and when does the material push back? Together with chapter 6, it proposes methods intended to inform critical and inclusive approaches to visualization, directly addressing RQ4.

Methodological Approach

The dissertation employs a methodological approach to denaturalization that has three pillars: diffractive reading (Barad, 2007; Van der Tuin, 2014); critical making (Ratto, 2011b); and qualitative, interpretive case study methodology (Yin, 2006). To address the questions outlined in the previous section, and to support the dissertation's arguments, each chapter features an applied case study based on experiences I have had developing unique multimodal objects and experiences. These build toward chapter 5's presentation of findings from an empirical study I undertook, in which I designed tangible data objects and employed them in design interviews with 14 blind users. Data collection and analysis of this study were guided by principles of qualitative inquiry, employing inductive thematic analysis in its coding method. More detailed information about the methodological approach for this study can be found in chapter 5.

Three core themes emerged from an exhaustive analysis of this study. The first theme finds that participants' idiosyncratic mental models of how spatial data should be represented, as well as how they situate themselves in relation to these representations, can have a significant effect on the efficacy of tangible data objects that draw inspiration from visual metaphors. The second theme finds that screen-based infovis interaction methods for changing scale between data views do not translate well to physical models; undue cognitive burden can be placed on visually impaired users who are asked to compare objects at different scales using only tactile interaction. The third theme that emerged finds that "seamlessness" and minimalism, which are often treated as virtues in the world of visual design, have the effect of producing confusing and unusable interfaces for visually impaired users, who indicate a preference for very clear seams and depth contrasts between graphical items. In addition to the three themes listed here, a handful of important inclusive design insights are also provided for researchers and designers who are interested in replicating some of the methods I describe.

Operating from a methodological stance that posits making as a way of knowing, the dissertation contends that critical making (Ratto, 2011b) provides a way to surface the entangled phenomena that the dissertation deals with. Merely examining these phenomena through a discursive apparatus would be insufficient. Critical engagement with representation should entail making representations. This way, it is not merely a reflexive or deconstructive process. It also may require building the apparatus of representation, making what Barad (2003) refers to as "deliberate cuts in the world." Producing "entangled objects" - in effect, bringing entanglements in closer proximity to one another - calls these phenomena into relief.

Structure of the Dissertation

Over the course of six chapters, this dissertation argues that the recent spread of information graphics as visual evidence, a form of representational media transecting analog and digital as well as visual and multisensory concerns, is profoundly shaped by the predominant epistemic conditions under which they arise: ocularcentric objectivity; a dichotomy separating aesthetics and truth; disembodied interaction; and a perceived firewall between digital and material worlds. The disparate fields of information visualization require new methods and approaches to surface and denaturalize these conditions.

The structure of this dissertation is designed to consecutively introduce alternative information sensemaking practices that build on each other, growing in complexity toward the case study and analysis in chapter 5, which demonstrates a real-world context connected to a specific user group.

Chapter 2

The following chapter, "Crises of Ocularcentrism," provides a review of themes related to ocularcentrism, while also situating information visualization in terms of specific entangled relations that structure it. It lays the groundwork for a diffractive methodological approach that is employed throughout the dissertation, and then historicizes the epistemic conditions under which the entanglements it details have been formed and articulated. Throughout, it grapples with aspects of a Cartesian legacy that have been naturalized in infovis technologies. The chapter argues that the visual epistemology of infovis must be considered in light of a broader crisis of ocularcentrism, claiming that, ultimately, this crisis is actually multiple crises that can be leveraged as opportunities to expand the field. To support this argument, it presents a case study of experimental design work I undertook to develop tangible 3D reliefs from 2D images for a museum tactile interaction context. The chapter culminates by outlining a transductive methodological approach.

Chapter 3

Chapter 3, "Visualization: Epistemic Technology," examines contemporary screen-based data visualization technology and its epistemic values and claims. Through a case study describing my experience building screen-based multimodal visualization prototypes, it describes how a "grammar of graphics" underpins many contemporary visualization technologies, making it difficult to propose new approaches to visualization that are not visual or screen-based. Furthermore, it contextualizes this within a current debate about the epistemic values of truth and aesthetics. The chapter culminates by describing principles and methods that might inform critical approaches to visualization.

Chapter 4

Chapter 4, "Visualizing the Embodied Data Sublime," examines recent ideas in the space of immersive visualization. Through a case study that presents my work developing exploratory virtual reality-based visualization techniques, it describes how the data sublime offers an interesting backdrop for work that seeks to denaturalize scale relations in embodied visualization technologies. The chapter culminates with a discussion of how methodological strategies borrowed from the humanities, such as estrangement and defamiliarization, can serve as productive starting points for alternative visualization design approaches.

Chapter 5

Chapter 5, "Beyond the Visual," synthesizes findings from the previous three chapters and explores the space of multisensory visualization - in particular, the growth of tactile/tangible visualization as a subset of HCI research. This sets the stage for the presentation of results from an empirical study of 14 blind users of tactile data visualization objects developed within the context of a research project called InclusiVis. The chapter examines how an artificial firewall between material and digital contexts impedes the development of tangible infovis. It culminates with a discussion of design considerations for developing inclusive visualization tools, suggesting, among other things, that inclusive data visualization is a unique area for denaturalizing epistemic virtues related to truth, ocularcentric interaction, and the value of data representation to a widening audience (beyond researchers and traditional data analysts). It proposes an approach to creating physical data objects - rematerialization - that takes into account the full spectrum of a data object's materiality.

Chapter 6

The final and concluding chapter, titled "Information, Not Representation," returns to the question of why this dissertation's themes matter. It asks whether it is more appropriate to discuss these subjects under the banner of information, rather than representation. Applying ideas from the dissertation's case studies, this chapter extends its findings to new areas of infovis in society (e.g. the emerging field of data journalism), and proposes a number of insights for the emerging scholarly areas of critical data studies and visualization literacy.

Contribution and Advancing Scholarship

There are three areas in which this dissertation seeks to make contributions: to the emerging fields of critical data studies and critical visualization studies; to the broader infovis community, especially researchers and practitioners within it who are interested in unsettling conventions or experimenting with new interaction modalities; and, finally, to a general public that would benefit from more accessible tools to develop data literacy.

Scholarly Contribution

The findings described in the dissertation speak to scholars and practitioners operating in various disciplines. Chief among these is my home domain, information. The dissertation presents new insights for information scholars working on themes at the intersection of materiality and digitality. It does this by introducing a number of novel concepts, or extending existing ones (e.g. transduction) into information concerns like materialization. In doing so, it also provides new insights into questions around authenticity and truth in a host of information contexts. Ideas that will resonate with the various information disciplines also include the dissertation's extensive contributions to the emerging interdisciplinary field of critical data studies (and, specifically, research in it that engages with visualization-related themes). Critical data studies brings together scholars at the nexus of data/information science, digital humanities, computational social science, and a handful of adjacent disciplines. My goal with this research has been to contribute to ongoing conversations in data studies about critical approaches to representation, data literacy, and inclusive design. Related to this, the dissertation speaks to ongoing conversations in STS around epistemic conflicts arising from the use of novel forms of representational media, in the process providing insights for the emerging area of visual STS.

The dissertation also contributes new methodological insights to HCI and interaction design, as scholars aligned with its foundational disciplines - design, engineering, computer science, and psychology - introduce complex new approaches for developing multisensory and immersive experiences, address ongoing issues around the use of 3D printing for accessibility concerns, and, most important, grapple with methodological challenges raised by the expansion and adoption of qualitative and ethnographic methods for the field. The dissertation provides a number of new methodological tools (e.g. denaturalization) that, while familiar to humanities and media scholars, remain outside of mainstream HCI. As HCI continues to open its tent to artists, ethnographers, and designers who forgo more traditional methods drawn from psychology and human factors research, speculative and critical methods like the ones described in this dissertation can provide new opportunities for both sensemaking and innovation.

In presenting arguments against ocularcentric interaction that are grounded by lengthy historical narratives, or extending methods like transduction and materialization into case studies that are appropriate for an HCI audience, the dissertation makes a strong argument for interdisciplinary collaboration between HCI, critical data studies, media studies, STS, and information. In posing new ethical questions and concerns for HCI researchers and professionals working at the intersection of infovis and inclusive design, the dissertation builds on crucial work initiated by Dörk et al. (2013) which argues that the infovis community has not given sufficient consideration to how values and assumptions pervade information visualization - that "there is a need to think more systematically about how values and intentions shape visualization practice."21 In illustrating multiple examples that highlight how values and intentions shape visualization practice, the dissertation also contributes to the growing values in design (VID) field that has moved beyond its erstwhile status as a sub-concern of HCI and systems design.

21The bridge this dissertation seeks to build between HCI, humanities, and social science methods owes a great debt to the work of Sheelagh Carpendale. An infovis research luminary, Carpendale gave a Sanders Series lecture at the University of Toronto in January 2017 in which she spoke about the power and potential of alternate representations, and about infovis as a conversation that sits between representation and interaction. She built on the critical infovis ideas that had been outlined in the 2013 paper as she spoke about "empowering people with data" and making data representation more accessible. She ended the lecture by asking her audience to consider the various ways that "representation becomes powerful." A video of the lecture is available here: https://www.youtube.com/watch?v=geQcMZV8LZs.

As concrete examples of my commitment to illuminating interdisciplinary case studies, presenting new approaches for HCI that are drawn from humanities and social science methods, and probing the ethical dimensions of novel data interaction technologies and practices, I have already published various components of this research. These include the following refereed articles and chapters (as well as various non-refereed conference talks):

• Gabby Resch (2019). "Denaturalizing Visual Bias in Multisensory Data Visualization". In: Digital Culture and Society. Forthcoming - In Revision

• Gabby Resch, Daniel Southwick, and Matt Ratto (2018). "Denaturalizing 3D Printing's Value Claims". In: New Directions in 3rd Wave HCI. Ed. by Michael Filimowicz. Cham, CH: Springer International

• Gabby Resch et al. (2018). "Thinking as Handwork: Critical Making with Humanistic Concerns". In: Making Things and Drawing Boundaries: Experiments in the Digital Humanities. Ed. by Jentery Sayers. Minneapolis, MN: University of Minnesota Press

• Gabby Resch, Yanni Loukissas, and Matt Ratto (Sept. 2016). Critical Information Practice. Society for the Social Studies of Science Annual Meeting. Barcelona, ES

Public Contribution

As visualization media has become the lingua franca of data interpretation, a richer and more nuanced understanding of the epistemology of infovis is absolutely necessary.

Limited, constrained notions of visualization lead to a deficit in graphical literacy and, consequently, impede efforts to promote information literacy (Battista and Conte, 2017).

This has real-world effects. The evidentiary practices undergirded by visual representation (and, often, occluded by it) are increasingly used to present arguments and make truth claims to the general public. The purpose of this dissertation is not to make visualization more efficient, necessarily, or, in the short term, to build better software or environments for interpretation. It is to propose methods that can support critical interrogation of these evidentiary practices and their public-facing consequences.

For my claim that naturalized epistemic conditions have a detrimental effect on infovis to hold any water, I will demonstrate how denaturalization leads to alternatives that produce insightful, rich engagements with data in specific real-world contexts. Inclusive interaction with open civic data for visually impaired citizens is only one such context. If, following Bateson's oft-repeated maxim, information is truly to "make a difference," it will need to recognize its ocularcentric heritage and encourage the development of strategies that de-center the eye. The dissertation does this as it unsettles ocularcentric interaction in public institutions like libraries and museums. The field of information will also need to encourage practices that skate across the divide between material and digital. And it will need to foster environments and contexts that don't just rely on information transfer (e.g. publication), but encourage reflexive engagement with the raw material - data - at its core. To this end, chapter 5 will describe my work collaborating with the Toronto Public Library to prepare instructional content based on data objects I created for the InclusiVis project. With this sort of public-facing collaboration in mind, each chapter works toward a systematic approach that can facilitate critical data fluency among three communities - academia, the professional infovis world, and the general public. This is the key theme of chapter 6, which outlines a number of topical public concerns that motivate my research and synthesizes the dissertation's methodological contributions in order to address these concerns head-on.

Chapter 2

Crises of Ocularcentrism

Summary: This chapter introduces a core theme of the dissertation, the naturalized ocularcentrism of visualization practices and the tightly coupled relationship between representation and visual sensemaking. It asks what happens when epistemic objects and modes of interaction from historically visual domains are unsettled by non-ocularcentric practices. The dissertation's first case study, which describes a research project that sought to denaturalize ocularcentric interaction in a museum context, is introduced in the second half of the chapter. This chapter presents a context for considering how naturalizations that will be examined in following chapters are also marked by a pervasive ocularcentrism. It ends by elaborating on why transduction, rather than translation, is a more appropriate description for what happens when 2D and 3D collide in epistemic contexts.

The eye, which is called the window of the soul, is the principal means by which the central sense can most completely and abundantly appreciate the infinite works of nature. Leonardo da Vinci1

1http://people.virginia.edu/~jdk3t/LeonardosParagone.htm


Introduction: Ocularcentrism and Representation

Ocularcentrism underlies virtually every epistemic practice2 that entails interacting with data (both quantitative and qualitative, numerical and textual). Information graphics, for example, are almost exclusively visual, ocularcentric media. Tables and spreadsheets are nearly impossible to interpret through any mode of perception other than vision. Even relatively newer modes of data representation, such as sonification, typically act as an adjunct to visual media. Ocularcentrism is an epistemological tradition based on vision, visual metaphors, and visual practices, in which sight is privileged and the perceptual order of vision is ranked above the other senses (as much as they can be demarcated separately). It is the cornerstone of Western visual epistemology, which Drucker (2014, p. 8) defines as "ways of knowing that are presented and processed visually," and is as old as the cave paintings and petroglyphs that are part of our material cultural heritage. Ocularcentrism naturalizes specific epistemic practices, types of knowledge objects, and modes of interaction at the expense of non-visual and multisensory ones.

In seeking to illuminate the epistemic condition of ocularcentrism, this chapter most directly engages with the dissertation's first research question, which is concerned with the epistemic conditions that are concealed "under a guise of familiarity" in infovis. To do so, it considers the following research questions: What happens to these knowledge objects and modes of interaction when historically visual domains are unsettled by non-ocularcentric practices? What are the epistemological consequences of denaturalizing visual information using modes of interaction that are non-visual (or, at least, less visual)?

The chapter's questions and themes will be read through what historian Martin Jay has described as a "crisis of ocularcentrism," a condition in which the dichotomy between subject and object - and the necessary hierarchy between them - breaks down. "The crisis of ocularcentrism comes when it is no longer acceptable to oscillate between these two models or to assume a necessary hierarchy between them" (Jay, 1988, p. 316). This crisis has been triggered and maintained by the baroque vision of hermeneutic inquiry and a "loss of faith in the objectivist epistemology" (Jay, 1988, p. 318). In order to grasp how new representation and interaction techniques afforded by tangible media operate, a topic of crucial import for chapter 5, I will argue that they must be understood through the epistemic condition of ocularcentrism before they can be considered within an emerging multisensory one.

2By epistemic practice, I refer to practices in which knowledge production is a key frame of meaning in which people enact their lives (Knorr Cetina, 2007, p. 364). See also Knorr Cetina (1999, 2008).

In the first half of the chapter, a method known as diffractive reading will be employed to read key dualistic concepts that underlie the dissertation’s core themes, laying the groundwork for analyses that follow in the remaining chapters. It will demonstrate how these epistemic conditions rely on particular relational phenomena being framed in binary terms. Examples of this framing that the chapter addresses include subject|object, material|digital, and embodied|disembodied.3 A number of related tertiary dualisms, including visual|multisensory, enact|perform, and nature|culture, will also come up at different points of the dissertation. Through this diffractive reading, I position these dualisms as entangled phenomena rather than binary or dialectical pairs.

The chapter performs an important review of concepts related to representation, interaction, and visualization by examining how various disciplines (and scholars) have theorized and utilized them. While each subsequent chapter also surveys the necessary literature to support its arguments, this chapter introduces the general epistemic condition of ocularcentrism to which they all refer. Importantly, the next chapter will examine modern infovis and its historical precedents, bringing arguments about ocularcentrism made by Jay (and others) to bear on contemporary practices.4 For this to happen, the epistemology of infovis will need to be understood through a consideration of the biases that ocularcentric interaction mandates in different epistemic practices.

3I introduce here a neological use of what is known in programming, data science, and systems administration as a pipe. The pipe, represented by the graphical symbol |, is a unidirectional data channel that can be used for inter-process communication. In other words, the output of a process feeds directly as input into the next process. In software, this is chronological and continuous. In the way I introduce its use here, the temporal direction is not fixed. While not entirely recursive, the point of origin in a relationship between phenomena is not perfectly clear. Visualization has a direct bearing on how we think about representation. Ideas about representation, in turn, influence how we think about visualization, even though entirely separate fields are devoted to considering these related concepts. In this sense, the pipe, and how it will be used throughout this text, is meant to evoke an entangled relationship.
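For readers unfamiliar with the programming concept that footnote 3 repurposes, a minimal Python sketch of a conventional pipe follows (assuming a Unix-like environment; the commands and strings are illustrative, not drawn from the dissertation's projects). It shows the fixed, one-way flow - producer to consumer - that the entangled use of | in this text is meant to unsettle:

# A conventional, unidirectional pipe: the output of one process feeds
# directly as input into the next, chronologically and continuously.
# Equivalent to the shell pipeline: echo "subject object" | tr a-z A-Z
import subprocess

producer = subprocess.Popen(["echo", "subject object"], stdout=subprocess.PIPE)
consumer = subprocess.run(["tr", "a-z", "A-Z"], stdin=producer.stdout,
                          capture_output=True, text=True)
producer.stdout.close()
print(consumer.stdout)  # SUBJECT OBJECT - the data moved one way only

In the software pipe, the point of origin is never ambiguous: the producer precedes the consumer. It is exactly this fixed directionality that the subject|object notation suspends.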

While there is already a substantive body of literature that addresses the dualistic concepts at the foundation of ocularcentric epistemology, tracing them historically or connecting them to specific technologies and interaction modes, there is a need for applied treatments, rather than simple literature reviews, within contemporary scholarship on representation in STS, information, and associated fields. As a consequence, I have chosen to ground my survey of ocularcentrism through the presentation of the dissertation's first case study, a project I undertook to prepare tactile renderings of photographs for an exhibit at the Royal Ontario Museum. This project is examined in detail, providing insights into questions of scale, perspective, and mimesis. The case will demonstrate how denaturalization, as a method, becomes possible not only through reflexive critique, but through the deliberate creation of new forms of media. In this case, it is media that is not fixed to either a 2D or 3D context - a theme that will be revisited at various points throughout the dissertation. Through the case study, the naturalized ideal that human sensemaking practices are primarily visual, a foundation that is rooted in Cartesian perspectivalism, is surfaced.5 Various modes of object interaction, including modes that emphasize touch, unsettle this notion.

4The arguments upon which this chapter relies come primarily from Crary (1992), Jay (1991), Levin (1993), and Nancy (2007).

5Fleckenstein (2008, p. 86) defines the core tenets of Cartesian perspectivalism as "disembodied rationality, quantifiable realities, and linear causality." See Jay (1991) for more on this subject.

The chapter's argument is scaffolded on the following claims. Information media is structured by specific entanglements, including: the relationship between subject and observed phenomena (which are never truly separate); the relationship between the material and digital worlds (which are in a constant state of tension); and, finally, a fluid sensory order, which is contextualized by the ongoing discussion around embodied vs. disembodied ways of knowing. The naturalized modes of interaction that are inherent to visualization practice, including disembodied ocularcentric objectivity, are rooted in these entangled relations. At a concrete level, in the architecture of screen interfaces, ocularcentrism structures modern visualization practice through epistemic conditions that include God's-eye view, static perspectival arrangements, and reified scale relationships.

By the chapter's end, an additional argument, that the so-called 'crisis of ocularcentrism' actually entails multiple crises, will have taken shape.6 Vision, as the primary structuring mechanism of objectivity, the predominant epistemic virtue of our era, mediates every truth claim constructed using media which is not exclusively visual (e.g. tactile interfaces). Objectivity, as a consequence, remains yoked to a visual epistemology, even when it is applied to truth-making practices that are not exclusively visual. Despite this, the material traces and subjective interpretations of human evidence makers are never far from the surface of any objective account, as Daston and Galison (2007) have argued and this dissertation's case studies will demonstrate.

Opportunities to extend the design space of infovis are structurally constrained, however, by a limited conception of sensory interaction that never conceals its bias toward vision.

While I am opposed to separating the senses from each other, I must do so for analytic purposes - even though it should be clear by the end of the dissertation that they cannot ever be fully separated (Hamilakis, 2013, p. 411). The sensory modalities that this chapter is concerned with are vision and touch, via the transduction of visual objects into tactile reliefs. An important aim of this chapter will be to begin illuminating how ocularcentrism, through screen-based experiences, underpins many interaction technologies and methods that are used in knowledge-making practices. This is an important consideration for the case studies in chapters 4 and 5, both of which focus on non-ocularcentric modes of interaction.

6The undermining of 'scientific truth' by the aesthetics of new digital media, as an example, is one of these manufactured crises that will be considered in greater detail in the following chapter.

The naturalizations revealed in the dissertation's chapters can be traced to dualistic theories, including a separation of mind and body, that are at least as old as the Cartesian revolution. We must begin the project of denaturalizing visualization by surfacing these dualisms. But the chapter has a secondary scholarly aim: to argue for reading diffractively, a methodological approach that involves "reading insights through one another in ways that help illuminate differences as they emerge" in order to illuminate "what gets excluded, and how those exclusions matter" (Barad, 2007, p. 30). Diffractive reading disentangles complex relationships like the dualistic entanglements that have been mentioned. This strategy is a necessary element of an orientation to infovis that embraces disambiguation and complexity, humanistic and interpretive ideas, and denaturalization as a method. Diffractive reading isn't a new method, as will be described later on, but this chapter applies it to new areas (visualization and HCI) where it would not typically be encountered.7 Why is an entire chapter on ocularcentrism necessary? Because it is at the core of visual representation, arguably the most important epistemic practice, ubiquitous from scientific imaging in experimental physics to big data map-making in computational sociology.

7Although, it might be noted, diffractive reading would not be out of line in third wave HCI.

Entanglements of Representation

What is entailed in the investigation of entanglements?

Barad, 2007, p. 75

Before outlining how diffractive reading enables a more complete perspective on ocularcentrism and its effects, I will expand on one of the key concepts in this dissertation: representation. In the introductory chapter, I described visual representation as something that mediates virtually every knowledge practice, listing examples such as photographs, abstract images, physical analogues, sketches, animated videos, engineering diagrams, and data graphics.8 Specifically, I noted how representational media are crucial to the interpretation, display, and demonstration of evidence (scientific evidence, in particular).

Although definitions of representation are varied, this elaboration captures the scope of how I intend to use it:

"... representation involves lengthy struggles with research materials to reconstruct them in a way that facilitates analysis, for example through coding and highlighting key features of interest and aligning them with particular concepts and theories. This treatment of representation in and as practice has since spurred a rich body of ethnographic, historical, and discourse-analytic inquiries that demonstrate how the circumstances of knowledge production are folded into epistemological claims and ontological orderings."9

8It should be stated that this meaning of representation is distinct from that used in fields like cognitive science or philosophy of mind, as well as that used in the contemporary museological sense of which cultures or themes are represented in museums - in other words, "the space between things and ways of conceptualizing them" (Lord, 2006, p. 5).

9See the introduction to Coopmans et al. (2014) and the 1990 introduction on which it is based (Lynch and Woolgar, 1990).

Chia (2000) refers to representation as "the common currency for communication exchanges."

Whether in scientific,10 humanistic, or art communication, representation enriches other epistemic practices (e.g. written description) by linking objects to images, composites, or other visual constructions. The selection of representational technologies, editing and curation of images, and choice of publication/presentation modes determine the value of this process.11 Representation helps concepts scale (when they are naturally abstract, for example). It makes epistemic objects tractable - both mutable and mobile in Latour's terms.12 We rely on centuries of representational conventions when it comes time to render content or findings publishable for different audiences.

Kress and Leeuwen (2006) acknowledge a broad "'agenda of concern with multimodality,' a rapidly growing realization that representation is always multiple."13 The representational media - infovis - that is dealt with in this dissertation is frequently shared across modes of interaction, from screen-based visuals to audio-based sonification. In this sense, it can be considered "multimodal representation," although this term, as we'll see in the following chapter, also applies to media that is fluid across sensory boundaries, or media in which different types (text, audio, images) share the same real estate. Because infovis is increasingly not frozen in time and space, it is often temporally and spatially multimodal as well. Importantly, its growing reliance on textual description (e.g. through narrative and storytelling, which will be described in the next chapter), ensures that visualization media is moving beyond the exclusive domain of images.

10I will refer to both social and natural science with this term unless specifically stated.

11Adam Nocek (2015) has recently argued that "representation, instead of objectivity, is the epistemic norm that determines the scientific value of images," claiming that representation has undergone numerous transformations in the history of scientific epistemology, but is moving toward a logic of flexibility that corresponds to neoliberal market values. What he means by this is that specific methodological and tool choices, rather than adherence to conventions of objectivity, are what produce epistemological validity. Nocek draws heavily on Daston and Galison (2007) and an understanding of objectivity that places it firmly at the centre of contemporary debates about visual knowledge production, as well as a definition of representation that is firmly Deleuzian.

12See Latour (1983).

13In their work, they caution against worrying about attempts to find a "definitive grammar of images," even though figures like Tufte and Wilkinson, who we'll read about in the following chapter, insist that this is a major project of infovis.

Multimodal representation, in these various senses, is performative media. Performativity, in this specific context, is a power struggle between static and interactive media, between text and image, and, importantly, between vision and the rest of the sensory order. It is a territorial contest over the grounds for truth-making. Representations do not just help "depict particular realities; they enact those realities" (Hillier, 2008). As Karen Barad (2007, p. 91) argues: "practices of knowing are specific material engagements that participate in (re)configuring the world. Which practices we enact matter in both senses of the word. Making knowledge is not simply about making facts but about making worlds, or rather, it is about making specific worldly configurations."14

Representation in science is bound up in the epistemic practices that more fully constitute scientific objectivity. This includes "the gestures, techniques, habits, and temperament ingrained by training and daily repetition" (Daston and Galison, 2007, p. 52). But scientific representation (and, by extension, data visualization) isn't just about abstract or symbolic placeholding. This is a flaw in a purely semiological reading of the term. Any contemporary definition must take into account truth and authenticity, material practices, temporality, persistence and fluidity of evidence, and even interaction modality. A performative understanding of representation has the capacity to account for these additional facets, but also to trouble them, as performativity and representation are typically at odds. That said, "performative representation" is not an entirely new concept.15

14Consider, also, this statement from Barad (2003, p. 802): "A performative understanding of discursive practices challenges the representationalist belief in the power of words to represent preexisting things. Performativity, properly construed, is not an invitation to turn everything (including material bodies) into words; on the contrary, performativity is precisely a contestation of the excessive power granted to language to determine what is real."

15Use of this concept is described in Hillier (2008) (architecture/spatial planning) and Dirksmeier and Helbrecht (2008) (qualitative social science).

Figure 2.1: The evidentiary process, according to Tufte.

With these points in mind, "visualization" and "representation" are often conflated, even though they have quite different meanings. Visualization is a kind of representation, but not all representation is visualization. Today's infovis, with numerous interaction possibilities that thwart the stability and "fixity" of visual evidence that scientific objectivity mandates, is more akin to a "software performance."16 This despite the fact that the foremost voice in infovis advocating for static, clear, and immutable presentations of evidence, Edward Tufte, casually uses the term "visual representation" almost interchangeably with visualization.17 This passage, in which he abbreviates the evidentiary process, is telling:

"Between the initial data collection and the final published report falls the shadow of the evidence reduction, construction, and representation process: data are selected, sorted, edited, summarized, massaged, and arranged into published graphs, diagrams, images, charts, tables, numbers, words. In this sequence of representation (refer to Figure 2.1), a report represents some data which represents the physical world" (Tufte, 2006, p. 147).

One scholar has done much to shape our understanding of the complexity of modern representational issues in scientific visualization: Michael Lynch. Representation has frequently been elevated to a "master status" that encompasses many other important epistemic practices, Lynch laments. He has reconsidered his own attention to the study of representation, refocusing instead toward "examining what people do when they engage in an activity that makes one or another 'representation' perspicuous; learning some of their practices, if only the simplest of them; taking notes, tape recording, or otherwise describing what people say and do (knowing full well the endlessness of the descriptive task); and then playing these observations off against established versions of representation" (Lynch, 1994). It is in the collaborative sensemaking process, not just in the design of technologies and their outputs, that visualization scholars might find their most interesting problems, if we follow Lynch's logic.

16See Lev Manovich on the notion of visualization as a software performance: https://www.chronicle.com/article/The-Algorithms-of-Our-Lives-/143557/.

17See, for example, chapter 4 in Tufte (2006).

This leaves us with an important question to ponder: when is infovis not an exclusively representational practice? Can it be decoupled from the politics and epistemological baggage of representation? Should we, following Lynch, study the histories, practices, and epistemic values of the infovis community? Can studying (and engaging directly with) its tools help reveal these histories, practices, and values? Importantly, when does infovis become less about representation and more about analysis, interaction, or engagement? “What comes first, the data or its representation?” might not be the best question to ask, as separating them, or treating them dialectically, is a genuinely bad strategy (Johri, Roth, and Olds, 2013). Data and representation, I argue, are entangled.

Diffractive Reading

"Diffractive reading" was first proposed by Donna Haraway, who has long advocated for an interventionist approach to scientific practice ("world-making" in her terms): "what we need is to make a difference in material-semiotic apparatuses, to diffract the rays of technoscience so that we get more promising interference patterns on the recording films of our lives and bodies" (Haraway, 1997). Diffractive reading, simply, is reading texts, objects, tools, or approaches through each other rather than in comparison or hindsight.

How do they influence each other? How are their contexts tangled up? "Diffraction involves reading insights through one another in ways that help illuminate differences as they emerge: how different differences get made, what gets excluded, and how those exclusions matter" (Barad, 2007, p. 30). It is an attempt to make differences while "recording interactions, interference, and reinforcement"18 and to "disrupt linear and fixed causalities" (Van der Tuin, 2011).19

Van der Tuin (2011) provides us with a detailed guide to using diffractive reading as a methodological tool. Drawing on Haraway (2004) and Barad (2007), she points out how the emerging field of new materialism breaks from representationalism (and reflexive practice) by posing "a semiotics of non-fixity."20 Diffractive reading is a counter to simple "reflexivity," the sacred cow of interpretive scholarship - specifically, to reflexive approaches based on representationalism. Barad, more than anyone, has taken reflexivity to task. "Reflexivity is founded on representationalism. Reflexivity takes for granted the idea that representations reflect (social or natural) reality. That is, reflexivity is based on the belief that practices of representing have no effect on the objects of investigation and that we have a kind of access to representations that we don't have to the objects themselves. Reflexivity, like reflection, still holds the world at a distance" (Barad, 2007, p. 87). Diffraction, as a counter, allows for intervening in the world.

In one important passage, Barad unravels representationalism's need to keep the subject and object separated: "mirrors are an often-used metaphor for representationalism and related questions of reflexivity ... reflexivity is nothing more than iterative mimesis: even in its attempts to put the investigative subject back into the picture, reflexivity does nothing more than mirror mirroring. Representation raised to the nth power does not disrupt the geometry that holds object and subject at a distance as the very condition for knowledge's possibility" (Barad, 2007, pp. 86-87).21 In essence, diffraction is a cutting away, a revelation of difference, of what is left out, while reflexive representationalist objectivity is about revealing a mirror image of the object. Diffraction does "not displace the same elsewhere, in more or less distorted form" (Haraway, 1997, p. 273). For Barad and Haraway, reflexivity is about reflecting on representations while diffraction is about accounting for how practices matter. This is an important facet of denaturalization as a component of the methodological apparatus that I propose in this dissertation.

Denaturalization is about getting the general public, infovis practitioners, and researchers to account for how representational practices that underlie visual communication matter to them (and to the broader world). I describe how this might happen in the dissertation's final chapter.

18https://web.archive.org/web/20150502153021/http://newmaterialistscartographies.wikispaces.com/Diffraction

19For more on diffractive methodologies, see Barad (2014), Mazzei (2014), and Coleman (2014). For its application as a methodological tool, see Larson and Phillips (2013) and Bozalek and Zembylas (2017).

20See also Van der Tuin and Dolphijn (2010) and Van der Tuin (2014).

Where do the naturalizations that will be revealed in each chapter - fixed scale relationships; adherence to minimalist aesthetics - come from? What foundations can we trace them to? To understand them, we must understand the epistemic milieus in which they arise and the entangled phenomena (e.g. subject|object) at their heart. In the following section, I sketch out a brief trajectory of a number of entanglements at the root of contemporary epistemic activity.

Naturalization: Cartesian Ocularcentrism

If ocularcentrism - and, consequently, visual perception - are at the core of most epistemic practices that entail interacting with data, then tracing how this epistemic condition came to be is of the utmost importance. A complete history of ocularcentrism is not the main focus of this chapter, nor is it even truly possible. My concern, here, is with how ocularcentrism has been naturalized as an effect of visual sensemaking practices. In other words, I am concerned with how it underlies the visual epistemology at the root of many contemporary knowledge practices. Denaturalizing this epistemology is one of the fundamental aims of this dissertation. How does ocularcentrism shape epistemic practice? Are non-ocularcentric visual practices possible? To begin to answer these questions, we must situate contemporary ocularcentrism historically.

21Barad has a comparison matrix that outlines further differences between reflection and diffraction. See Barad (2007, pp. 89-90). Reflexive objectivity is about representations, Barad says in this chart. It is about finding accurate representations. Things are objective referents. Accountability "entails finding an authentic mirror representation of separate things."

Contemporary ocularcentric bias in media (specifically, visualization media) has numerous potential starting points. Elizabeth Eisenstein, for example, claims it stems from the printing press (Eisenstein, 1980). A crucial moment when visual perception was separated from other modes of sensory interaction, however, can be traced quite specifically to the 17th Century and the Cartesian revolution that marked it. While the scope of this 'revolution' is ill-defined and means different things to different disciplines, there are four primary threads from it that we find intertwined in modern representation and visualization practices.

The Cartesian split between mind and body is at the root of the embodied|disembodied dualism that is brought to bear on chapter 5's arguments, and is the first important thread we must understand. Descartes, while not the first to separate mind from body, makes this dualism a cornerstone of his ideas. It is the most integral and persistent element of Cartesian epistemology, and arises from Descartes' need to divide the internal and external into separate spheres in which the internal would be the domain of representation and the external world would be the domain of natural objects to which representations would refer. This division is what makes it possible for representational practices like screen-based visualization to rely on disembodied interaction. Rouse (1996, p. 209) points to this division as the birth of representationalism.

The second, equally important, trajectory that requires acknowledgment is Descartes' introduction of the grid as a visual structuring device. Cartesian space (or the Cartesian coordinate system), and the Cartesian grid that maps it, is the foundation of computer graphics (both 2D and 3D). This rationalization of space, in which lines and points could "serve as key markers on a surface plane" created a "systematic set of graphic relations" and, more importantly, produced the capacity to map empirical measurements to scalable spatial configurations (Drucker, 2014, p. 90). Additionally, it resulted in the development of a wide array of new instruments for measuring and rationalizing space and time, and for representing the invisible. In fact, it can be argued that there is no more important structural element in contemporary data visualization than the grid that traces its origins to Descartes.

The third trajectory, closely connected to the introduction of the Cartesian grid, is what is known as Cartesian perspectivalism. It lies at the root of the case study this chapter builds. Cartesian perspectivalism - or, the Cartesian perspective - requires that objects are both external to vision and extended before it (Hoel and Carusi, 2018, p. 6).22 In Cartesian perspectivalism, corporeal bodies are translated into geometric objects that correspond to the eye and mind's natural geometry (according to Descartes). Stereo vision, for example, enables the abstract calculation of distance that can then enable adaptations to representations in grid space. Cartesian perspectivalism relies on three interrelated phenomena according to Fleckenstein (2008, p. 86): disembodied rationality, quantifiable realities, and linear causality. This causal architecture is necessary for imagining the unfolding of future action, something that animation, for example, relies on.

The fourth pillar of ocularcentrism (and ocularcentric interaction) is one that is less known, although it is arguably the most important to my claims. Johanna Drucker has called mathesis Descartes' 'alphabet of reason' (Drucker, 2014, pp. 109-112). What she is referring to is the moment at which the search, by Leibniz, for a calculus that would govern interactions between objects was mapped to the coordinate system devised by Descartes. This moment, when everything from bodies to landscapes began to be represented algebraically as geometrical forms, is when it also became possible to graph any and all relations between them. It is the birth of visualization, and also the point at which claims about a rational, mathematical order governing all forms of knowledge started to take shape. The three previous pillars of a Cartesian basis for ocularcentrism lay the foundation for this one. Without the mind-body dualism, a stable and measurable system of spatial coordinates, and a mode of sensory abstraction, mathesis could not have come to be. Together, these four pillars, each of which can be traced to the revolution wrought by Descartes, undergird not only contemporary visualization practice, but virtually all contemporary interactive computing. They are the foundation of knowledge-making practices in art, science, humanities, and beyond.

22See also Merleau-Ponty (1964).

Case Study: Photographic Reliefs

This first case study of the dissertation has, at first glance, little to do with infovis. It describes a research and design project I undertook in collaboration with the Royal Ontario Museum to prepare touchable objects, based on pieces in a photography exhibit, for blind and visually impaired museum visitors. Contemporary museums, especially of science and natural history, are routinely positioned as sites for the dissemination of knowledge. As a consequence, they are ideal partners for considering the role of objects that engage new audiences in the public communication of science. By describing a traditionally visual epistemic space having to make way for non-visual interactions, this case study provides a precedent for how the entangled relationship between visual and tactile interaction operates in an epistemic context where the communication of scientific knowledge is valued. Additionally, through it I developed a new 3D printing approach that was, to my knowledge, previously unused for the preparation of tactile models (but has since been utilized in a number of important museum exhibits).

Ocularcentrism is pervasive in museums. "Touch with the eyes" remains an almost ubiquitous governing law, even if it has not always been the norm.23 As a consequence, museums are ideal sites for examining how novel modes of non-visual interaction can be disruptive to epistemic practices. Museums privilege visual experience, even when attempts are made to encourage touch and multisensory interaction.24 Despite this inherent ocularcentrism, there is a long and complicated history of touch/tactile interaction in museum contexts through the use of 'touch tours' for blind audiences and what are often called, in the parlance of the museum world, 'touchables.' These are objects, typically replicas, that are designed to be touched and often feature some interesting tactile characteristics.

A number of questions underpin the work in this section: How does unsettling ocularcentric practice in places like museums unsettle it in other informational contexts? Is this a challenge or threat to information or cultural institutions? Can there be a middle ground that enables ocular and "anti-ocular" to cohabit the same space? Additionally, design insights discussed in this case study build toward the empirical study described in chapter 5. In that chapter, I describe a variety of tactile data graphics and how they have been interpreted by blind users. Aligning these distinct information contexts is a general problem with most physical representational media - that they have the potential to reinforce ocularcentric biases.

23See Candlin (2004).

24See the discussion in Wood and Latham (2011) around new opportunities for touch in museums.

Project Description

The initial goal of my collaboration with the Royal Ontario Museum was to explore how 3D printed tactile objects might be deployed in museum exhibits that rely heavily on visual interaction (e.g. photo exhibits). Techniques that I developed through it informed the approach to denaturalization that I would later develop through the projects described in subsequent chapters. In this work, I experimented with various methods for moving representational objects across 2D and 3D modes of engagement. While it was not an information visualization project per se, it gave me significant insight into how Cartesian perspective is naturalized in infovis, especially in the development of 3D data graphics. This case study does not deal with representing “information” in the same sense that data is brought to life using infovis media. That said, the content that was represented was a kind of scientific information that had to go through a process of transformation before it could be presented to the public. While it is somewhat reductive to call the objects I prepared “data,” the concept of digital surrogate, which has been used extensively in the field of digital heritage to describe digital objects that are designed to stand in for a material object but retain their own distinctive qualities, might help us understand the similarities in preparation and context.25

I was first contacted by the ROM's senior inclusion manager in the fall of 2014. The museum wanted to experiment with different techniques for a pilot 3D printing project that would be unveiled in a showcase exhibition titled "Wildlife Photographer of the Year." This exhibition was to run from November 2014 to March 2015, and would feature large photographic displays. After a number of design conversations with the exhibit design and inclusion/accessibility managers at the ROM, it was agreed that I would incorporate a design approach emphasizing tactile interaction through the generation of a number of 3D-printed "touchables" based on existing 2D digital images from the exhibition. While I was already familiar with techniques and software applications to generate 3D-printable designs from 2D images, I spent considerable time experimenting with new methods that might satisfy the project's requirements. This entailed months of prototyping and testing models based on five original images that were selected.

25For more on this concept, see Newell (2012).

While there are a handful of terms for the approaches and techniques I used - displacement mapping; depth mapping; height mapping26 - they can generally be grouped under an umbrella term such as "physical 2.5D" when applied to processes that rely on 3D-printed (or CNC/laser-engraved) output. The method generally entails converting a 2D image plane into a mesh, tessellating or subdividing the mesh, and then deforming its geometry based on either pre-determined values27 or on digital sculpting techniques. There are long traditions of material artistic techniques from which similar 3D printing techniques derive - and with which museums are already well acquainted - such as relief sculpting or lithophane engraving, but the use of digital methods for this work was relatively uncharted territory for museums at the time, and it promised interesting and creative possibilities for new experiences appealing to both non-sighted and sighted visitors.
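To make the displacement-mapping step concrete, the following is a minimal sketch of the "physical 2.5D" idea just described: a grayscale image is treated as a height field and written out as a triangulated surface in ASCII STL. The Pillow/NumPy stack, the file names, the relief height, and the sampling step are illustrative assumptions rather than the project's actual script, and the output is a top surface only - a print-ready touchable would still need a base and side walls.

# A minimal sketch of image-to-relief conversion, assuming Pillow and NumPy.
# The output is a displaced top surface only; a printable "touchable" would
# also need a base and walls (see the discussion of manifold geometry below).
import numpy as np
from PIL import Image

def image_to_relief_stl(image_path, stl_path, max_height=4.0, step=4):
    # Convert to grayscale and downsample so the mesh stays sculptable.
    img = np.asarray(Image.open(image_path).convert("L"), dtype=float)
    z = img[::step, ::step] / 255.0 * max_height  # brightness -> height
    rows, cols = z.shape

    def facet(*verts):
        # Slicers recompute normals, so a zero normal suffices in a sketch.
        lines = ["facet normal 0 0 0", "  outer loop"]
        lines += [f"    vertex {x} {y} {h}" for x, y, h in verts]
        lines += ["  endloop", "endfacet"]
        return "\n".join(lines) + "\n"

    with open(stl_path, "w") as out:
        out.write("solid relief\n")
        for r in range(rows - 1):
            for c in range(cols - 1):
                # Two triangles per grid cell of the displaced surface.
                a = (c,     r,     z[r, c])
                b = (c + 1, r,     z[r, c + 1])
                d = (c,     r + 1, z[r + 1, c])
                e = (c + 1, r + 1, z[r + 1, c + 1])
                out.write(facet(a, b, d))
                out.write(facet(b, e, d))
        out.write("endsolid relief\n")

# Hypothetical usage; "turtle.jpg" stands in for any of the exhibition images.
image_to_relief_stl("turtle.jpg", "turtle_relief.stl")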

My design entailed generating base 3D meshes using two techniques. The first involved processing a digital image through a Python script that I adapted to create 3D files with polygon counts low enough to be sculptable. The second involved importing images as planes into Blender and creating a custom displacement map that could be modified. While there is a plethora of tutorials and dedicated applications for producing simple lithophanes and reliefs, ostensibly for the purpose of 3D printing,28 and numerous advanced software applications provide similar features,29 this project required a considerable amount of digital sculpting expertise once the base meshes had been generated.30

26 A discussion of the technical differences between these terms can be found here: https://gamedev.stackexchange.com/questions/65755/whats-the-difference-between-displacement-mapping-and-height-mapping.
27 Depending on the software: grayscale, brightness, contrast, etc.
28 See, for example, Image to Lithophane, a web app that uses the three.js JavaScript library: http://3dp.rocks/lithophane/; George Hart's Cookie Roller Generator, generated in Mathematica: https://www.georgehart.com/rollers/; PhotoToMesh, a fairly simple Windows-only paid application: http://www.ransen.com/phototomesh/; and Amanda Ghassaei's Processing script, Lithograph3DPrint, which uses Marius Watz's Modelbuilder geometry library: https://github.com/amandaghassaei/Lithograph3DPrint.
29 Maya, Rhino, Blender, and ZBrush, for example, all feature some way to do similar procedures.
30 The following YouTube tutorials for generating meshes provide an overview: https://www.youtube.com/watch?v=f_ym_O_qyfk and https://www.youtube.com/watch?v=-fe2zxcKSic.

Figure 2.2: One of the base images used in the case study.

Figure 2.3: Digitally sculpting a 3D model of the image to ensure suitable tactile properties would be contained in the physical rendering.

It is in this cleanup and sculpting process that significant mistakes affecting the 3D-printed output can be made. If, for example, one doesn't understand how a particular 3D printer will support overhanging geometry, then preparing a model with sections where support material will be difficult to remove can result in a print failure. Significant time can be wasted cleaning up or refining background detail that will not render in a printed version, when it could simply be smoothed. It is also in the sculpting process that software features and UI elements impact one's workflow most. I tested sculpting options in Blender, MeshMixer, and Mudbox, given that the first two are free and well documented and the third is freely available to students (and relatively affordable otherwise). My sculpting setup included a large Wacom tablet and stylus. Because of a design decision to emphasize certain muscular features of the "lions" image, for example, pressure sensitivity for sculpting tools, a feature in Mudbox, made a large impact on my workflow, as it gave me the most "life-like" feel when scraping or inflating (the two tools I used most).

When preparing for specific 3D-printed outputs, design techniques that can negatively impact the printing process need to be given due consideration. For example, models must have manifold geometry (i.e. the designer must ensure that all internal faces are removed, or use a boolean union to join intersecting parts).31 Additionally, flat bases aligned with the printing surface are important in some cases, but can cause print failures in others.

Finally, high polygon counts are not only unnecessary in most cases, but can have a negative impact on the slicing process. Consider, for example, a challenge I faced with one method for generating printable files using a displacement-mapping technique. The Python script I used takes a base .jpg (or similar image file) and produces a very high-poly .stl file (500,000 to well over a million triangles for the images I used). The .stl it outputs has a flat, adjustable-height base, ideal for 3D printing or CNC milling. I preferred to start with these high-poly-count models, in order to ensure the highest possible resolution during the design process, and to decimate them as needed throughout design iterations. Working with such a high poly count would prove taxing for even the most powerful computers, though, so decimating the model and re-topologizing became necessary fairly early in the process. In the decimation process, however, the geometry of the base can be compromised, and I needed to be somewhat conservative in order to ensure that the final models would be printable on a variety of printers.

31 A useful practice is to render the model solid and then leave it up to whatever slicing software you use to determine infill percentage.

Figure 2.4: An early prototype of a piece that ended up in the exhibit.

I bring these issues up because there is a persistent fallacy that 3D printing does not require substantial material expertise, something that my years of experience with the medium have demonstrated to be completely false. This has implications for designers wishing to "translate" 2D data graphics, images, scanned artifacts, and all manner of other things into 3D-printable objects.
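The printability constraints discussed above - manifold geometry, clean bases, polygon budgets - can also be checked programmatically before a file ever reaches a slicer. The sketch below uses the open-source trimesh library as one plausible option; the library choice and the file name are my own assumptions, since the project relied on interactive tools like Blender, MeshMixer, and Mudbox rather than a single script.

# Sanity-checking printability constraints with trimesh (an assumption --
# the text does not prescribe a specific tool). "relief.stl" is illustrative.
import trimesh

model = trimesh.load("relief.stl")

# Manifold ("watertight") geometry: no holes, no stray internal faces.
print("watertight:", model.is_watertight)
print("consistent winding:", model.is_winding_consistent)
print("triangles:", len(model.faces))  # very high counts can choke slicers

# Inconsistent face normals are a common cause of failed slices; fix in place.
trimesh.repair.fix_normals(model)

# Overlapping parts can be fused with a boolean union (backend required),
# which removes the internal faces described above:
# fused = trimesh.boolean.union([part_a, part_b])
# Decimation/re-topologizing to reduce the polygon count would follow here.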

For the models that would be installed in the ROM exhibit, I decided to print on an Objet Connex500 printer using Objet's proprietary transparent VeroClear photopolymer, a "rigid, nearly colorless material featuring proven dimensional stability for general purpose, fine-detail model building" according to the manufacturer.32 This choice was made for two reasons: 1) it offered very high resolution quality, and 2) I needed to print the touchables in a flat, horizontal orientation and could not afford any potential for the object's base to lift while printing, something common with most conventional desktop fused deposition modelling (FDM) printers, even ones with heated beds. While I have had success printing flat models before, none of the machines at my disposal were reliable enough.

32 https://www.javelin-tech.com/3d/stratasys-materials/rigid-transparent/

Figure 2.5: An image of the touchable exhibit piece on display at the ROM in a panel that included braille, visual text, and photographic images.

A number of challenges I faced during the experimental design phase prior to installation are salient, as many of them turned up when I began creating the tactile data models that will be described in chapter 5. These include the following:

• Aforementioned issues with flat printing. Even with glass and heated print beds, adhesive materials, and "maker hacks" like hairspray, numerous prototypes I prepared experienced some degree of corner lifting during the print process. Because many data representation techniques require rectangular bases, printing in a flat orientation creates various problems.

• Pose estimation. Capturing the orientation of the photographer, especially when an image is removed from its context, is a particular challenge in this kind of work. For example, consider the photo of a sea turtle I worked with. Is the turtle looking and swimming forward? Should one of its fins jut out or protrude more than the other? Should its body project outward or recess below the surface of the print? Should its left shoulder be more prominent? Many of these same questions, which also relate to longstanding issues about figure and ground in visual studies, return when producing physical data representations.

Figure 2.6: Another of the base images used in the case study.

• Sculpting time. Digital artists with considerable sculpting experience will still end up devoting ample time to preparing touchable models, something that might deter museums (or other supporters) from engaging heavily with such a process. Software that can generate topology according to image context, rather than light and contrast values, would save time. Consider, again, the turtle photo alluded to above. If an algorithm were used to detect that the subject turtle was in water, for example, a preset for generating topology or orienting the primary model to de-emphasize the water entirely might be useful. Algorithmic backgrounds would also be useful for some data representations. Would they diminish the data representation's truth value, though?

• High poly count in the initial base mesh. In preparing high-resolution meshes, an artist must be careful to watch their polygon count when subdividing for greater resolution. Preparing meshes for printing requires having the experience with specific printers to know whether a model will slice properly (and in a reasonable amount of time) for each machine.

• Tactile representational language. Features that can easily overlay conventional representations of a standard tactile notation or symbolic system that non-sighted visitors would be accustomed to (e.g. continuous lines running the length of a feature or body to represent hair) are often ideal. Questions about whether to embed braille or other tactile semiotic systems into physical objects produce a host of other concerns that will be discussed in chapter 5.

• "Too much" 3D. Initially, my collaborators at the ROM wanted the output to really "burst out of the surface" and come to life. Accounting for the additional material cost, potential for damage, etc. was an early design challenge. Furthermore, I wondered, would it still be "representative" of a photograph if it burst out of the surface? This question around object authenticity is one that theorists in various fields have been forced to grapple with as technologies like 3D printing carry the capacity to trouble an object's structural fixity.33 A related issue, correspondence between the 2D representation and its real-world counterpart, returns when we examine the rendering of 3D data graphics (such as 3D pie charts).

• Printing negative versus positive space, and choosing between projected versus recessed features. In some cases, one might elect to recess certain features below the surface of the base. This can create an issue for printing if they are pushed too far below the plane that will sit on the print bed, but it can also produce some interesting results for flat touchable models. Again, the questions of figure and ground alluded to earlier are important here. As will be shown in chapter 5, this question isn't as easy to answer when dealing with blind users who have extensive experience touching objects that protrude. Recessed features, while aesthetically interesting, can prove jarring to the visually impaired.

33 See Younan and Treadaway (2015).

Naturalizations in 3D Translation

How do specific perspectival arrangements become naturalized? Cartesian ocularcentrism, as I've noted, relies on the separation of mind and body and the consequent erasure of the subject body. One aspect of this perspectival naturalization revealed through this research project has to do with the question of the body's erasure from the site of interaction. Where does the body go when it is re-inserted into the interface of interaction? Does a visitor approach the piece the same way they would if it were a photograph? Should it hang on a wall, or be laid horizontally on a table or other surface? Asking these questions about audience interaction while designing the objects surfaced how easily such arrangements had been taken for granted.

The second, and arguably more important, aspect of perspectival naturalization that surfaced had to do with the entangled concepts of pose estimation and foreshortening. Pose estimation, discussed above, relates to the position and orientation of an object. In a photographic display, it must also take into account the position and orientation of the photographer. There is a subtle politics of pose estimation that I had to acknowledge in the design. Illuminating the place of the image capturer and the medium of representation (which, in other representational contexts, would be tantamount to highlighting signatures, marks on pages, etc.) is a subjective choice that photographers must make - one that echoes Bolter and Grusin's notion of hypermediacy, in which the interface is meant to be apparent (Bolter and Grusin, 2000). How does this work in the context of translation to the tactile? Should all traces of the photographer be effaced from the object? Even in an exhibit on photography? Photographers and painters can make their audience look a particular way by directing their view.34 Is a tactile artist doing the same thing when they exaggerate an estimated pose, direction, depth, or background contrast?

Foreshortening relates to how an object is portrayed as having less depth or perspectival distance. How does one go about translating depth information from a visual image to a tactile one? This problem came to light when I had to figure out a method for determining how to either include or obfuscate perspectival information in one of the images intended to go on display. The photograph, of a sea turtle underwater, was especially challenging to transform into a relief due to the prominence of one of the subject's flippers. My colleague at the ROM was interested in exaggerating this feature of the sea turtle's anatomy, in effect relying on a form of foreshortening, as a way to provide an intriguing facet of the touchable piece that visually impaired visitors might focus on. Would it have been a misrepresentation to do so? In the space of tactile data graphics, the answer would surely be a resounding "yes!" But this wasn't a data graphic, and foreshortening doesn't operate in the same way when other sensory modalities are involved. This object was to rely primarily on tactile interaction. Furthermore, without colour information, it would be difficult to know whether something had been foreshortened in the first place. The fact that I printed the object in a clear polymer complicated this further. When the 3D design process "translates" an object from 2D, we lose relevant spatial information that sighted people would use to know how or why something (e.g. a turtle) is what it appears to be. Does a blind person have that same experience when they wouldn't necessarily have that pre-existing visual-spatial bias?

34 See chapters 2, 3, and 4 in Alpers (1983), in which she describes how Dutch masters developed innovative geometric conventions to distinguish between "optical" and "perspectival" images. Use of the camera obscura by Vermeer and others to create "shortcuts to perspective" exemplifies how artists in this period directed the gaze of their audience.

Figure 2.7: One of the base images used in the case study.

Finally, it is worth recognizing that certain types of images afford more interesting interactions based on what they contain. A beach scene, for example, with a clear horizon line that dominates the image, provides a tactile separation. Objects (and subjects) that are familiar to blind and visually impaired users will provide them with initial referents. With the images at my disposal, it was clear that the lion, which has been iconized in various ways, might be a familiar image to blind visitors. Would a sea turtle be familiar enough to them? When I began prototyping different physical configurations of this piece by printing them in diverse materials and at different scales, the material pushed back. It became clear that exaggerating the prominence of the front flipper could produce a clear point of discussion for blind visitors, but it could also lead to uncertainty about what the subject of the image was. On a more mundane note, it would also create a risk of the flipper breaking where it meets the surface of the object as a result of prolonged tactile interaction.

So, in this case, pose estimation and foreshortening weren't just technical problems. They were design problems with important sociocultural considerations that had to be accounted for, as my goal was to produce models that various users, both sighted and non-sighted, could appreciate. But they were also interaction problems, in that anything more than a 2.5D relief would require more rigid and durable materials to withstand regular focused touching. And they were semiotic problems, in that decisions about whether or not to include noticeable features on the surface of the objects, and whether or not to eschew existing tactile graphic design practices (e.g. cross-hatched lines to signify hair), had to be given deep and careful consideration. Determining how much "depth" to give the objects also meant accounting for real estate for other tactile features (like braille). Finally, they were representational problems, in that choosing correspondence-level granular detail meant that blind visitors accustomed to specific tactile conventions might not find meaning in the touch interaction. Removing the background, while reducing the complexity of the tactile surface, also meant troubling the object's meaning. With each of these problems, vision and touch were entangled, pointing to the need for new methods of design and novel interaction modalities that can take these sorts of issues into account. Based on this work, designers who undertake similar projects that seek to translate between the flatland of paper or screen and 3D should be cautious if they wish to avoid mistakes that can produce a poor - or, worse, meaningless - experience for either sighted or vision-impaired users. My design process had to factor in multimodal exhibit displays that included text and photos. Designers working in this space also need to anticipate the multimodal needs of a non-sighted audience. This is an insight that has significant relevance for the domain of infovis, as we'll see in chapter 5.

These prototyping experiments nevertheless enabled me to imagine alternatives to the naturalized Cartesian perspectival arrangements that are taken for granted in museum interaction. This initial foray into denaturalizing a set of epistemic conditions that pervade an important information context set the stage for how I would imagine other digital objects transferring into the 3D tactile domain. Since working on this project with the ROM, I have undertaken similar projects to create touchable objects from collection items at the Bata Shoe Museum and the Art Gallery of Ontario, and a number of related exhibits have since been initiated at museums around the world.35 That said, a framework for deploying, interpreting, and evaluating the efficacy of non-ocularcentric touch objects does not currently exist. Whether such a framework is even possible will be touched on in greater detail in chapter 5.

Method: Transduction

The problem with pervasive ocularcentrism isn't just that it privileges visual sensemaking, or even that it reinforces normative epistemic biases - it's that it often fails to account for the rich multisensory character of so many encounters with information. When one stands five feet from a photo exhibit in a museum, one is involved in a concert of sensory activity that includes the murmur of the audience, the smell of adjacent artifacts, and even the warmth of the light falling through a window onto one's skin.

Translating visual objects into tactile or other sensory modalities reinforces their visual biases and semiotic constructions, and often fails to account for this multisensoriality. As such, translation, a verb I've used throughout this chapter, is inadequate for describing what is going on when a primarily visual object is deliberately made more available to the other senses.

35 For example, the Prado had an exhibit in 2015 that featured touchable paintings for visually impaired visitors. See https://www.npr.org/sections/parallels/2015/05/26/408543587/do-touch-the-artwork-at-prados-exhibit-for-the-blind. The Canadian Museum for Human Rights held an exhibit in 2016 along similar lines. See https://humanrights.ca/exhibit/sight-unseen.

The kind of work I've described so far, in which information objects migrate across a gradient of sensory modalities that are never fully discrete, requires a more flexible verb to describe what is going on. At no point in my ROM collaboration was I ever really translating photographs into touchables. At all stages of the design process, it was clear that an entirely new object was being created. Whether through my creative license as a designer to make figure/ground exaggerations, or through the recognition that many blind visitors would not have pre-existing visual referents for the tactile interactions I was designing, the resulting objects stood on their own. This raised a host of questions: What gets occluded or obfuscated when phenomena stretch across modes (e.g. digital-material; ocular-tactile), whether the boundaries of these modes are clear or not? Why are certain things ignored, left out, seen and not seen? What are the expansions and reductions that happen in this migration? What are the tensions that lie between different layers of interaction? What are the translator-designer's epistemic commitments?

To this end, I propose extending a methodological term, "transduction," that technology historian William J. Turkel has recently used to describe creative opportunities in the context of humanities computing. Turkel builds on an earlier definition by Mitchell Whitelaw, who writes: "strictly transduction only refers to transformations between different energy types; here I want to extend it to talk about all the propagating matter and energy within something like a computer, as well as those between that system and the rest of the world."36 Turkel advocates reframing "digitization and materialization as forms of transduction," and suggests that, following HCI pioneer Mark Weiser, points of transduction can be thought of as "seams" between materials. More importantly, they might operate as seams between material and digital instantiations, across "analogue and digital, in either direction" (Turkel, 2011).

36 http://teemingvoid.blogspot.com/2009/01/transduction-transmateriality-and.html. See also Simondon (2009), as well as Hansen (2006) and Roberts (2017) for applications in media theory and digital fabrication that are grounded in a Simondonian reading of this concept. This interpretation recognizes transduction as a process that renders previously incompatible forces compatible with each other.

Figure 2.8: A diagram depicting how software and urban infrastructure are entangled, found on the website of the Programmable City research project at Maynooth University.

I first started using this term after an informal conversation with Turkel. Later, I came across its use in a completely different context, by a group of dashboard visualization researchers I worked with on a project in Ireland. In the context in which they invoked it, the term described processes that reshape (e.g. how code reshapes the city through mediation, augmentation, facilitation, and regulation).37 As I propose it, transduction is not a process of correspondence, equivalence, or isomorphic representation across media, scales, dimensions, or sensory and interaction modalities. It is a process that frames visual interaction as an explicitly multisensory enterprise that not only reshapes its users, but creates the conditions for new use possibilities. An interesting example of this in the world of data visualization is Phantom Terrains, by Frank Swain and Daniel Jones, which turns the "characteristics of wireless networks into sound. By streaming this signal to a pair of hearing aids, the listener is able to hear the changing landscapes of data that surround them" and, in the process, challenges "the notion of assistive hearing technology as a prosthetic, re-imagining it as an enhancement that can surpass the ability of normal human hearing."38 A Cosmic Hunt in the Berber Sky, by the design studio Accurat, is another fine example, in which DNA visualization metaphors are used to represent cosmic myths. In this graphic for Scientific American magazine, mythemes are broken down into boolean values that are then transduced back into a narrative flow diagram in which multiple ordering patterns can reveal overlap and common paths between origin stories.39

37 http://progcity.maynoothuniversity.ie/about/
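Although Phantom Terrains works with live wireless signals and bespoke hardware, the core transductive move - re-rendering a data series in a non-visual channel - can be sketched in a few lines. The mapping below (values to pitch, written to a WAV file using only NumPy and Python's standard library) is purely illustrative and borrows nothing from Swain and Jones's implementation.

# Transduction in miniature: a numeric series re-rendered as audible pitch
# rather than visual position. Values, file name, and mapping are assumptions.
import wave
import numpy as np

values = [0.2, 0.5, 0.9, 0.4, 0.7]   # any data series, normalized to 0..1
rate = 44100
tones = []
for v in values:
    freq = 220 + 660 * v              # map each value onto 220-880 Hz
    t = np.linspace(0, 0.3, int(rate * 0.3), endpoint=False)
    tones.append(0.4 * np.sin(2 * np.pi * freq * t))
signal = np.concatenate(tones)

with wave.open("series.wav", "wb") as w:
    w.setnchannels(1)                 # mono
    w.setsampwidth(2)                 # 16-bit samples
    w.setframerate(rate)
    w.writeframes((signal * 32767).astype(np.int16).tobytes())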

A slightly more detailed illustration of how I actually use transduction as a method can be drawn from a set of design experiments I undertook while collaborating with the Art Gallery of Ontario (AGO) to produce touchable models for an exhibit. This followed my work at the ROM, and came on the heels of a project I undertook at the AGO to digitize marble busts and prepare 3D-printed reproductions for an exhibit on a Victorian-era pantograph and its inventor, Benjamin Cheverton.40 When asked how I might convert an abstract, "non-representational" painting by celebrated Canadian artist Paul Emile Borduas into some kind of representation that would be accessible to visually impaired visitors, my first inclination was to produce a tactile that closely mirrored the contours of the existing artwork. Recognizing that the creative possibilities inherent in transducing a piece of abstract art were more abundant than they would have been with a photograph (and its pull toward correspondence representation), I experimented with various modes of engagement, including producing a "cylinder seal" that I used to roll out reproductions of the image on various kinds of pliant material. I digitally sketched recessed pockets where the jagged geometric shapes in the original would be, then set out filling them with different materials to experiment with textures. Finally, I filled the recessed sections with conductive materials, wired them to a Raspberry Pi microcomputer and, using capacitive touch sensors and a Python-based library, turned the pockets into soft buttons that controlled audio output from a sound gradient.
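A minimal sketch of that final soft-button arrangement might look as follows. The text specifies only "capacitive touch sensors and a Python-based library"; the MPR121 breakout board, Adafruit's CircuitPython driver, pygame for audio playback, and the audio file names are all assumptions introduced for illustration.

# A sketch of conductive pockets as soft buttons, assuming an MPR121
# capacitive touch breakout wired to the Raspberry Pi's I2C bus.
import time
import board
import busio
import adafruit_mpr121   # Adafruit's CircuitPython MPR121 driver
import pygame

i2c = busio.I2C(board.SCL, board.SDA)
touch = adafruit_mpr121.MPR121(i2c)

pygame.mixer.init()
# One audio clip per conductive pocket; a "sound gradient" could instead map
# pockets to points along a single evolving soundscape.
sounds = [pygame.mixer.Sound(f"pocket_{i}.wav") for i in range(4)]

while True:
    for i, sound in enumerate(sounds):
        if touch[i].value:        # True while pocket i is being touched
            sound.play()
    time.sleep(0.05)              # simple polling/debounce interval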

38 http://phantomterrains.com/
39 For more on this project, see https://blogs.scientificamerican.com/sa-visual/the-evolution-of-a-scientific-american-graphic-cosmic-hunt/.
40 I describe this project in much greater detail in Resch et al. (2018).

Figure 2.9: An untitled painting by Paul Emile Borduas in the Art Gallery of Ontario. Its online collection record is available here: http://ago.ca/collection/object/agoid.68399.

Figure 2.10: Digitally sculpting the Borduas model.

"Translation," as a technical approach, is insufficient to fully capture what is happening in the interstices between modes. This term has a lengthy epistemic history that is impossible to fully trace here. Johri, Roth, and Olds (2013), in their description of a "chain of inscription," come close to outlining how translation marks difference between categories or modes of interaction. Manovich (2002b, pp. 63-65) uses the term "transcode" - the translation into new computational formats - to describe something close to what I am proposing, but he suggests that "cultural categories and concepts are substituted, on the level of meaning and/or the language, by new ones which derive from computer's ontology, epistemology and pragmatics."41 Substitution is not a feature of transduction, though, as various modes of materiality and interaction at all iterations of an object's life are recognized. In proposing transduction as a method for the toolbox of a non-ocularcentric epistemology, I don't intend its use simply in creative computing contexts. I believe it opens up new possibilities for denaturalizing the epistemic conditions at the heart of contemporary ocularcentric sensemaking. Engaging in transduction as part of a larger process of denaturalization allows one to consider the questions raised throughout this chapter in greater depth, and in other contexts (such as infovis). Again, we can ask the following questions - What are the expansions and reductions that happen when objects move across sensory modalities? What are the tensions that lie between different layers of interaction? What are the translator's epistemic commitments? - but in different epistemic domains.

41 For more on the use of "translation" in media studies, see Manovich (2014). That said, a detailed genealogy of the complex literature on translation is impossible here, as it has triggered extensive discussion in fields ranging from philosophy of science to museology.

Figure 2.11: Design experiments included cylinder seals that would roll out reliefs of images in soft, conductive materials designed for electronic prototyping engagements.

Conclusion

This chapter opened by claiming that ocularcentrism underlies virtually every knowledge activity, naturalizing specific epistemic practices, types of knowledge objects, and modes of interaction at the expense of non-visual and multisensory ones. It asked what happens to these knowledge objects and modes of interaction when historically visual domains are unsettled by non-ocularcentric practices, providing context through a museum exhibit in which a predominantly visual experience was forced to be reimagined as a multisensory one. So, what are the epistemological consequences of denaturalizing visual information using modes of interaction that are non-visual (or, at least, less visual)? There is little to indicate that these other modes are less truthful or meaningful. Even though the eye, "the window of the soul" in da Vinci's terms, remains "the principal means by which the central sense can most completely and abundantly appreciate the infinite works of nature" for many, there are clear trends in interaction and computing that indicate a desire to augment this sensemaking capacity. Chapter 5 will attempt to outline some of these trends, as it addresses various design concerns that are raised when we attempt to challenge ocularcentric practices. Building on this chapter's call for a transductive method, it will demonstrate how the translation of tropes, models, and sensemaking practices from ocularcentric to multisensory interaction contexts has the capacity to mask ocularcentric bias.

While the entangled Cartesian relations described early in this chapter - subject|object; mind|body - are implicated in maintaining the status of vision as the primary means of sensemaking, attempts to discretely separate them are constantly troubled by new interaction regimes. Engaging with these entanglements diffractively is a way to surface and denaturalize the epistemic conditions they prop up (e.g. Cartesian ocularcentrism), and thinking diffractively helps us recognize that what appear as clear-cut distinctions often aren't. Visual phenomena cannot be fully disembodied; the discussion around foreshortening makes that clear. Digital interaction is built on a scaffold of material practices; the discussion of the sculpting and relief techniques I used makes clear that I drew on non-digital precedents. The site of interpretation is not at the boundary between subject and object - it is somewhere in the interstices between them. As this chapter's case study demonstrated, objects regularly re-shape how their subjects will interact with them. When historically visual domains are unsettled by non-ocularcentric practices, we have an opportunity to investigate how much is actually going on at the seams. This is the real crisis of ocularcentrism: that it tries to occlude this activity from coming to light.

Chapter 3

Visualization: Epistemic Technology

Summary: This chapter examines contemporary screen-based data visualization and some of the epistemic values and claims that structure it. It historicizes visualization and reviews literature that describes how this medium operates in a host of different contexts. It describes how a structural formalism - a "grammar of graphics" - underpins numerous contemporary visualization technologies, making it difficult to propose new approaches to visualization. Through a case study describing experiments building screen-based multimodal interfaces, it contextualizes this challenge within a current debate around the epistemic values of truth and aesthetics. The chapter culminates by describing methods for engaging in critical approaches toward visualization.

Most information visualizations are acts of interpretation masquerading as presentation... they are arguments made in graphical form. Johanna Drucker (2014, p. 10)


Introduction: The Epistemology of Infovis

Visualization is arguably the preeminent sensemaking tool of the modern research arena. It is a common language between science and the public. It features prominently as the medium in the interface to "smart" technologies. It operates at local, intimate, and personal scales, as well as boundless, aggregate, global ones. Its epistemology is in an ongoing state of definition, despite being centuries in the making. Its effects are immediate and direct in a social world governed by the abstract logics of ubiquitous data and computational infrastructures.

In this chapter, contemporary infovis is shown to be suspended in a state of tension between two poles: aesthetics and truth. This tension arises, in part, out of the objective|subjective entanglement described in the previous chapter, and has significant consequences for how infovis gets deployed in a diverse range of disciplines and contexts.

In the sections that follow, I situate infovis as an epistemic technology that is frequently leveraged to make objective claims at the expense of subjective ones. The chapter distinguishes the somewhat nascent field of information visualization from its progenitor, the field of statistical graphics, drawing to the surface epistemic conflicts that have emerged as a result of this divergence. The chapter also illuminates a key technical and theoretical mechanism: a relatively opaque grammar of graphics that serves to naturalize specific modes of evidence presentation. In the case study upon which the chapter's arguments are scaffolded, information visualization is presented as a mediated form of representation that, until recently, has resisted multimodality. Near the end of the chapter, I present a methodological contribution to questions around how critical approaches to visualization might be formulated.

The central research question motivating the chapter (if not the entire dissertation) - how do we denaturalize "the increasingly familiar" graphical visualizations that we regularly encounter? - is inspired by Johanna Drucker (2014). She has done more than any other current scholar to imagine how critical visualization might take shape as a dedicated research concern. Her work has inspired additional research questions that guide the chapter: What are the epistemic commitments of infovis? Are they capable of being defined? To answer these questions, the chapter will illuminate epistemic virtues that have been naturalized in a specific media format: digital screen-based multimodal data visualization (e.g. dashboards).

Visualization is not just an instrumental technology. It doesn't only mediate or perform functions on data... it positions its audience into various epistemic camps: information and graphical artists/aesthetes; statisticians who use visualization to make data-driven arguments; managers and analysts who draw on time series to issue probabilistic, predictive statements. This partitioning enables the respective groups to construct often incommensurable truth claims from the same sets of data. A goal of this chapter is to demonstrate that, although structural mechanisms create distance between the epistemic commitments that align these various camps, they are not, in fact, always incommensurable. Infovis can pursue beauty and truth equally.

Epistemic Objects

In chapter 1, the concept of epistemic objects was used to describe how data graphics are employed in the interpretation, display, and demonstration of evidence. Drawing on ideas from Hans-Jörg Rheinberger (1997), and quoting Karin Knorr Cetina (2008, p. 191), who writes that epistemic objects are "defined by their lack of completeness of being" and claims that they are "always in a process of being materially defined," I suggested that visualization techniques and technologies are exemplary of this concept. This claim requires further elaboration, though, as epistemic objects have a number of additional characteristics that are applicable to infovis.

Knorr Cetina (2008, p. 191) suggests that epistemic objects are characterized by an unfolding ontology. Despite their "lack of completeness," however, representational objects are generally materially instantiated in some concrete process or thing. Tarja Knuuttila (2005, p. 1261) advocates turning away from a dyadic, isomorphic account in which representation is a relationship simply between the "real system and its abstract and theoretical depiction." A triadic relation, in which "interaction" is factored as the binding "between material artifact and cognitive sensemaking," allows for "indeterminateness" - for the possibility that our representational practices are fallible and subject to change over time. For Rheinberger (1997, p. 190), they are "question-generating." He suggests that they "can be processes as well as things" and that they "have a complexity that increases under academic analysis rather than reducing or decreasing." He emphasizes that an epistemic object's current instantiation depends on how its future develops. In this sense, epistemic objects are not fixed temporally. While there are clear distinctions between them, epistemic objects are related to the concept of boundary objects in that they often derive their signifying power through ongoing practices of translation across domains.1

If we consider some of the more important data visualization technologies, including IBM's now-shuttered collaborative ManyEyes service or the Python library Matplotlib,2 we find that they typically exhibit characteristics of this "ongoing state of unfolding" as core developers add new functionality that extends early features. Matplotlib, for example, began as a simple module that could replicate many of the functions researchers and scientists used in MATLAB. As it grew and its development community expanded, it became the back-end of choice for many popular visualization libraries, from Seaborn to pandas's built-in plotting interface.3 In this sense, Matplotlib has matured well beyond its original mandate to provide a Python alternative to MATLAB's scientific computing and visualization capacity, and is now arguably the most important visualization technology (despite its ongoing lack of completeness and unfolding ontology).

1 See Star and Griesemer (1989).
2 See http://www.bewitched.com/manyeyes.html and https://matplotlib.org/.
3 See https://seaborn.pydata.org/ and https://pandas.pydata.org/.
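Matplotlib's MATLAB lineage remains visible in its pyplot interface, which mirrors MATLAB's plotting commands nearly one-to-one; the snippet below is a generic illustration of that state-machine style, not code from any project described in this dissertation.

# pyplot mimics MATLAB's command style: implicit figure state, one call per
# plotting operation. Purely illustrative data.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x))            # cf. MATLAB: plot(x, sin(x))
plt.title("A MATLAB-style plot via pyplot")
plt.xlabel("x")
plt.ylabel("sin(x)")
plt.show()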

Epistemic Technologies

The power of the unaided mind is highly overrated. Without external aids, memory, thought, and reasoning are all constrained. But human intelligence is highly flexible and adaptive, superb at inventing procedures and objects that overcome its own limits. The real powers come from devising external aids that enhance cognitive abilities. Donald Norman (2014, p. 43)

Infovis is, by default, a technology of sensemaking. Epistemic technologies are computational technologies that amplify the knowledge-making capacities of humans, supporting and facilitating the creation, legitimation, and critique of truth claims (Hooker, 1987; Ratto, 2011a). "Critique by redesign," which will be described later on, is an example of an infovis activity in which the creation, legitimation, and critique of truth claims is encouraged and reinforced. Even when a grammar or formalist logic is baked into them, epistemic technologies display many of the characteristics that define epistemic objects, including an unfolding ontology and the capacity to generate new questions. They can also be read as sociotechnical assemblages in that they comprise processes, technological artifacts, and specific domains or contexts of activity. In many ways, epistemic technologies suspend their users somewhere between what Daston and Galison (2007) refer to as the epistemic conditions of "mechanical objectivity" (in which the technology is expected to help nature reveal itself and protect against "subjective projections") and "trained judgment" (in which users are expected to select and curate appropriate evidence to support their truth claims).4

One key area where the ongoing, iterative, unfolding ontology of epistemic technologies manifests is at points of the data analysis pipeline where sketching and prototyping are the primary activities. Sketching and prototyping, which are typically conflated despite being quite distinct,5 can be important stages of exploratory data analysis (EDA), especially when a publication output is intended. Exploratory design has been a hallmark of infovis since the introduction of computers. End-to-end tools like notebooks and interactive dashboards - which will be described further on - increasingly support the capacity to combine EDA and iterative prototyping while working toward a publishable visualization. Sketching, of course, also entails paper-based work. Many visualization professionals employ crude paper sketching as both an EDA and data-shaping process (i.e. sketching graphs to figure out what stories the data might tell) and as a design process (i.e. trying out different colour palettes).6 Because visualization is typically a multi-stage process, initial "material" sketching and prototyping affords the possibility of denaturalizing assumptions even before the actual data is brought into the picture and digital sketching/prototyping begins.

According to Giorgia Lupi, Creative Director at Accurat, a global data-driven research and design firm that has been involved in data journalism work for a number of years, sketching, for many infovis designers, is "the principal method to uncover meaning in the things they analyze visually."7 She breaks her own data sketching activities into three distinct stages. In the first, she focuses on "the overall organization of the information, in the macro categories of the data we are analyzing." In this stage, the "actual" data does not need to be invoked, and can be replaced by an analogue. Lupi's second stage focuses on design themes: figuring out shapes, colours, and features that need to be adopted or invented to suitably represent the data. In these first two stages, according to Lupi, "drawing with data can also help raising new questions about the data itself." This is an opportunity to reveal "analysis you couldn't envision by only looking at the numbers." This is also a moment where denaturalization can be an important strategy, if designers are willing to ask critical questions and deploy unconventional means to represent their data. Lupi's third sketching stage entails prototyping using paper or digital tools - a practice that "facilitates the communication processes among designers, team-mates and clients." Again, this is an opportunity for denaturalization, as the various stakeholders might be forced to acknowledge and account for their own epistemic biases.

4 See also Christin (2016) for an analysis of mechanical objectivity and trained judgment in the space of contemporary data practices.
5 Bill Buxton (2010, pp. 139-142) gives the most detailed description of this difference.
6 Heller and Landers (2014) provide an excellent overview of this phenomenon.
7 See https://medium.com/accurat-studio/sketching-with-data-opens-the-mind-s-eye-92d78554565 as well as Lupi's collaborations with Stefanie Posavec: http://www.dear-data.com/.

Even when paper-based, these three stages of sketching and prototyping are bolstered by the idea that visualization design tools, as epistemic technologies, support the creation, legitimation, and critique of truth claims. Lupi describes sketching-based exploration, or "drawing with data," as a "continuous state of becoming" and as "the tentative manifestation of an insurgent if." There are parallels here to the writing and editing process employed in the communication of scientific knowledge. Unfortunately, this process can be limited by the tendency to "start from what the tools you have can easily create, and maybe also from what we - as designers - feel more comfortable in doing with these tools," according to Lupi. Sketching and prototyping are core features of the data sensemaking process, and sketches and prototypes are necessary components of visualization's capacity to act as an epistemic technology. This is further evidenced by the growing adoption of computational notebooks (e.g. Jupyter Notebook), which will be described later in this chapter. These relatively new technologies make iterative visualization prototyping a core feature of exploratory design, and cement the idea that visualization is rarely a means to a fixed end.

Prominent software engineer and designer Bret Victor, who has been an influential voice in the discussion around “active documents” and interactive visualization tools, suggests that there are three primary motivations for why a person might turn to software: to learn, to create, and to communicate.8 He proposes that software applications be clas- sified into the following categories: information software, which serves the capacity to learn; manipulation software, which serves the human urge to create; and communication software, which serves the human urge to communicate. For visualization software to be classified as epistemic technology, it must possess the capacity to do all three of these things. Some would suggest that these separate capacities can be broken down into a core set of functionalities - essential ingredients for every visualization technology. The following section will attempt to illuminate what these characteristics might be, as well as trouble how they’ve come to be.

What is Infovis? Where does it Come From? What does it do?

Up to this point, I have not provided a detailed description of what infovis actually is and does. While there are numerous textbooks, histories, and manuals that each contribute to this story, I will sketch out a protracted history of select figures and examples in order to help us understand what information visualization actually is, where its roots trace from, and how it currently operates as an epistemic technology. In doing so, I will provide a foundation for thinking about how infovis works in non-traditional interaction contexts, such as the ones introduced in the following chapters (e.g. virtual reality).

8 See Victor (2015).

This is not a comprehensive history of visual thinking or visual explanation, but an opportunity to both highlight pivotal moments in visualization history and call attention to some omissions and inaccuracies in the orthodox narrative. Additionally, the following sections will try to illuminate how infovis is, in fact, a somewhat new and divergent field (that happens to have very deep roots). In doing so, these sections will provide some answers to the question of how infovis structures knowledge.

Historical Roots

One important reason for re-examining the history of data visualization is to resist the reductive and oftentimes inaccurate narratives that are frequently peddled on infovis blogs. Another is to call attention to the fact that these graphical techniques did not emerge singularly. Many, for example, are tightly integrated with cartographic techniques that have a much longer epistemological trajectory. That said, there are (at least) three distinct eras of infovis. The pre-history of visualization, arguably the most relevant to this dissertation's interest in tangible/tactile objects, includes numerous examples that can be loosely categorized as "data objects." These include well-described examples like Inca quipus (coloured strings that stored numeric values encoded in knots, in use up to the 17th century) and Polynesian stick charts (navigational aids that represented ocean swell patterns). While there is no true genesis for the flat, paper-based data graphic, some historians have suggested that going back further than the 1600s is futile.9 The seeds for many contemporary practices appear to have been put in place during the mid-to-late 18th century. For example, the modern tendency to represent by dot - to abstract the human - might be traced to this period. Notably, pioneering mechanical displays introduced various interactive analog reading techniques that portend today's screen-based interactions.10

9 We might start in 1628 with van Langren's "degrees of longitude," which is considered by some to be the first known data graph despite not being published until 1644. Alternatively, Christoph Scheiner's 1630 sunspot small multiples have been a starting point for some. We could go back even further, however, to Pliny in the 9th century. Various interesting projects trace different histories. Among them, Howard Wainer (1990) and Michael Friendly (2006) have provided the most interesting and sustained work in outlining a history of infovis. Friendly's various articles and crucial "Milestones Project" stand out: http://datavis.ca/milestones/. RJ Andrews' recent interactive timeline is another excellent project: http://infowetrust.com/scroll/.

If we are to trace any kind of genesis moment, however, we should probably start in the late 18th century with William Playfair. Playfair is said to have invented many of the visualization tropes that we use today, including bar, pie, and line charts. Edward Tufte, Leland Wilkinson, and Howard Wainer, three of the most respected voices in the infovis community whom we'll encounter throughout this chapter, have written extensively about him. All hold him as a paragon of statistical excellence for his graphical innovations (Tufte, 1983; Wainer, 1996; Wainer and Velleman, 2001; Wilkinson, 2007). They, among many others, have contributed to the legend of Playfair - an almost orthodox belief system that claims his innovative techniques forever inscribed how visual graphics should reveal the truth in data, and that deviating from their graphical conventions is a recipe for disaster. Despite Playfair's brilliance, there is a counter-narrative that is rarely mentioned. He was an active counterfeiter and propagandist and, in the words of James Watt, a "blunderer."11 These facets of his personality do not discount his innovative contributions, but they are worth mentioning because they have been conveniently left absent from the myth of Playfair's objectivity that is re-told every so often in the infovis literature.

According to contemporary infovis lore, the first organized era of statistical graphics - its "golden age" - ran from approximately 1850 to 1900 (Friendly, 2008). Various luminaries feature in this apocryphal 19th-century narrative, the most prominent among them being John Snow, Florence Nightingale, and a visionary Frenchman named Charles Joseph Minard.12 Like Nightingale and Snow, Minard not only invented pioneering graphical techniques, but engaged in what might now be referred to as "applied statistics." He cut his teeth working as a civil engineer on bridges, dams, and canals. His most famous graphic, a flow diagram depicting the diminishing size of Napoleon's army during the exhausting Russian campaign of 1812, is held by Tufte and others to be one of the greatest statistical graphics ever produced. Minard's output was limited, however: he produced only 51 thematic maps during his career. Many of them mixed visual tropes - cartograms overlaid with pie charts, for example - something that the defenders of graphical simplicity typically frown upon today.

10 An interesting example of this is Jacques Barbeu-Dubourg's machine chronologique, which is profiled here: http://dataphys.org/list/barbeu-dubourgs-machine-chronologique/.
11 See https://www.wsj.com/articles/review-playfair-plotted-for-england-1515790469, as well as Costigan-Eaves and Macdonald-Ross (1990). Bruce Berkowitz (2018) does a fantastic job of highlighting some of the less-known aspects of Playfair's life in his recent book.

Figure 3.1: Top: A series of pie and slope charts from Playfair's "Statistical Breviary" depicting the extent, population, and revenue of the principal nations of Europe in 1804. Bottom: A series of flow maps from Minard depicting European raw cotton imports in 1858, 1864, and 1865.

Russian campaign of 1812 that exhausted it, is held by Tufte and others to be one of the greatest statistical graphics ever produced. Minard’s output was limited, however, as he only produced 51 thematic maps during his career. Many of them mixed visual tropes - cartograms overlaid with pie charts, for example - something that the defenders of graphical simplicity typically frown upon today.

John Snow is often written of as a founder of epidemiology and public health. His innovative methods retain influence in fields ranging from geography to data science. Snow's legend derives from his pioneering work in discovering and representing the source of a cholera epidemic in 1850s London. Snow's archetypal cholera map is now a pillar of data visualization pedagogy, and tutorials for reconstructing it are regularly featured in visualization texts, online instructionals, etc. As the story goes, Snow mapped cholera deaths during the outbreak, eventually tracing their source to a contaminated well in Soho. After the Broad Street pump handle was removed, the outbreak supposedly ceased immediately. In reality, graphing the actual data using modern techniques reveals that removal of the pump's handle didn't actually end the outbreak: its peak had already passed, and it was in decline by the time Snow made his discovery.13 Alberto Cairo, who has been among the most vocal critics of the mythmaking hagiographies prevalent in infovis, has described Henry W. Acland's work to present alternative hypotheses in the Broad Street case: "we immortalize the Snows of this world and condemn the Aclands to oblivion," Cairo laments.14

12 It's also worth acknowledging the crucial role of Karl Pearson in the field of statistics, but that is a separate history.
13 This tutorial, which recreates and analyzes Snow's work using modern Python-based tools, makes this evident: https://www.datacamp.com/projects/132.

Figure 3.2: Top: Snow's famous map of the 1854 Broad Street cholera outbreak. Bottom: Nightingale's "coxcomb" diagrams depicting causes of British mortalities in the Crimean War in 1855 and 1856.

Florence Nightingale, the "lady with the lamp," is famous as the founder of modern nursing. Outside of the visualization community, it is not particularly well known that she was an innovative and pioneering statistician who used data visualization to promote what might now be termed social justice imperatives. The radial "coxcomb" diagrams she invented15 to depict mortality resulting from unsanitary conditions during the Crimean War are frequently held up by graphic designers as exemplary aesthetic contributions to the field. This despite the fact that circular charts such as hers are now frequently pilloried in the infovis community as confusing and often misrepresentative due to perceptual issues. Infovis historians love to reference Nightingale, but don't seem to find much use for the innovations she came up with.

In 1860, the physician and essayist Oliver Wendell Holmes Sr. (father of the famed jurist of the same name) wrote: "The two dominant words of our time are law and average, both pointing to the uniformity of the order of being in which we live. Statistics have tabulated everything - population, growth, wealth, crime, disease. ... The Positive Philosophy of Comte has only given expression to the observing and computing mind of the nineteenth century" (Holmes, 1892, p. 180). Near the end of this first great age of statistical graphics, there was a sense that a newly engaged citizenry was taking shape - one that was capable of reading and parsing tables and statistical graphics, of "thinking statistically."16 This moment might be thought of as the intellectual precursor to today's age of data-driven civic engagement.

14 http://www.peachpit.com/articles/article.aspx?p=2048358
15 Which were, in fact, predated by André-Michel Guerry's lesser-known polar charts.
16 https://www.smithsonianmag.com/history/surprising-history-infographic-180959563/

There are numerous milestones from the early 20th century that might be included in our history, but the advent of Cold War-era computing (and the simultaneous birth of interactive computing via the likes of Ivan Sutherland at MIT) paved the way for the era of computational data analysis and visualization, which continues today. Three figures in particular stand out from the pack: John Tukey, Jacques Bertin, and William Cleveland. Tukey, a statistician at Princeton, effectively created the field of computational data visualization and, with it, exploratory data analysis. The importance of EDA cannot be overstated: while interactive visualization supports communication, validation, and replication of findings, exploratory data analysis is its bedrock. Notably, Tukey's many contributions were not only technical. He was also a mentor to Tufte, whose "Visual Display of Quantitative Information" became one of the most popular texts in the field to cross over to the general public.

Bertin’s work was relatively unknown in North America until Howard Wainer commis- sioned the English translation of his seminal text, “Semiology of Graphics.” Bertin was active between the 1950s and 1980s, and spent much of his career working as a cartog- rapher. He is crucial to this history, as he synthesized a formalist theory of graphics that had a profound influence on Tufte, and later Wilkinson, two visualization theorists whose work is deeply felt today. Bertin’s unique approach, to classify “aspects of signi-

fiers - size, shape, color, and so on,” helped produce a rational and systematic way for writing algorithms that would ascribe meaning to them in graphical contexts. According to Wilkinson (2007), prior to Bertin’s innovative work “charts were seen as miscellaneous collections of widgets - tick marks, points, lines, labels, legends” that didn’t have any kind of structured, geometrical order. Bertin’s innovations made them computable and, in effect, scalable. Bertin was also arguably the first to introduce visual perception stud- ies as the justification for his graphical choices. Wilkinson notes that Bertin’s work was governed by the idea that “visual perception operated according to rules that could be followed to express information visually in ways that represented it intuitively, clearly, accurately, and efficiently.” The emphasis on clarity and accuracy, based on claims made Chapter 3. Visualization: Epistemic Technology 85 according to experimental findings from the study of visual perception, is perhaps his greatest legacy (and one that we will take up later in this chapter).

Cleveland, a statistician at Bell Labs, "has done the most to interest practitioners in perceptual issues," according to Wilkinson (2007), who writes that he "adopted a narrowly psychophysical approach and has always been suspicious of cognitive theories of graph reading. His perceptual hierarchy of graphic elements (position, length, angle, area, volume, color) is taken as gospel by those designing graphics. The legacy of Cleveland's research can be found in statistical graphics packages such as S, R, Stata, and SYSTAT." This influence, like Tukey's introduction of EDA and Bertin's graphical formalism, cannot be overstated. Wilkinson, the author of the highly influential text "The Grammar of Graphics" (which we'll read about further on), acknowledges throughout his book how Cleveland's ideas are now embedded in the technologies of contemporary visualization.

Modern Era

Visualization began to separate from statistics and emerge as a distinct academic concern (if not an explicit discipline) at the end of the 1990s. Card, Mackinlay, and Shneiderman's book "Readings in Information Visualization: Using Vision to Think" offered arguably the first proper outline for the field (Card, Mackinlay, and Shneiderman, 1999).

Since then, a number of distinctive trends have taken shape. For example, creative data artists who took visualization as a unique and flexible medium began to emerge by the mid-2000s. They included the likes of Martin Wattenberg and Fernanda Viégas,17 Jer Thorp, and Ben Fry. Moritz Stefaner, a prominent visualization professional and vocal advocate for straddling both worlds - the artistic and the instrumental - suggests that this was the moment when designers started "getting creative with data - beyond just making nicer charts."18 In academia, Shneiderman's influence was felt across the field of HCI, where the likes of Sheelagh Carpendale, Jeffrey Heer, and Jean-Daniel Fekete pushed the bounds of what data interaction research could look like. At the intersection of academia and public engagement, figures like Edward Tufte and Alberto Cairo pursued research while maintaining active speaking and teaching portfolios. Since the early 2010s, the emergence of both big data and the nascent field of data science, which can be traced directly to Tukey's work in the 1960s, has prompted new visualization practices that rely on ephemeral, on-the-fly data and place an emphasis on multivariate analysis.

17 Who list 2008 as "the tipping point of a field that used to be locked away in its academic vault" (Viégas and Wattenberg, 2008).
18 https://medium.com/visualizing-the-field/there-be-dragons-dataviz-in-the-industry-652e712394a0

Despite a recent swell of interest in the field, possibly due to the emergence of data science as a sought-after career, infovis is still relatively new as a scholarly discipline and professional concern. A few key characteristics mark this modern era. According to Viégas and Wattenberg, who have recently assumed leadership of Google's data visualization research group, visualization has "become an essential medium for journalists, scientists, and anyone else who needs to understand data" despite being "far from understood."19 Giorgia Lupi has noted this as an opportunity to inject complexity, critique, and humanistic values into visualization practice. She has been highly critical of one of the most persistent themes of the last twenty years of infovis development: the tendency to claim that information graphics have an innate power to "simplify complexity." She argues, instead, that visualization should illuminate complexity:20 "The phenomena that rule our world are by definition complex, multifaceted and mostly difficult to grasp, so why would anyone want to dumb them down to make crucial decisions or deliver important messages?" Lupi suggests that we are at a time when more "meaningful and thoughtful visualization" can be pursued. She claims this will be a visualization revolution in which personalization, intimacy, and context are the predominant design values.

19 See https://medium.com/@hint_fm/design-and-redesign-4ab77206cf9 and Viégas and Wattenberg (2015).
20 https://medium.com/@giorgialupi/data-humanism-the-revolution-will-be-visualized-31486a30dbfb

What this next stage will be framed by, I suggest throughout the remainder of this chapter, is an epistemic disjuncture between the ardent positivism of the statistical graphics community and the personalized, subjective creativity of the growing artistic data community (that has only picked up steam since 2005). This sort of disjuncture is what results with epistemic technologies when new patterns of interaction are forced to fit alongside more established ones. It's not quite a Kuhnian paradigm shift, but there is certainly a palpable friction in the visualization community as truth and beauty are pursued in parallel.

What is a Graph?

The simple graph has brought more information to the data analyst’s mind than any other device. John Tukey (1962, p. 49)

This friction stems partly from indeterminacy over what a graph actually is and does. The term graph comes from the Greek graphein, which means “to write” or “to record.” But is a graph the same thing as a chart? These days, the terms are mostly interchangeable, but important figures in the infovis world have strong opinions about this semantic blurring.

Leland Wilkinson, in “The Grammar of Graphics,” makes the argument that charts are artistic structures - that when we talk about charts, what we are really talking about is a typology of chart objects. Graphs, on the other hand, are about the underlying data structure. “A graph is an abstract chart,” Wilkinson suggests (Wilkinson, 2007, p. 111).

It is, he claims, a mathematical construct, while a chart is merely a representational construct (Wilkinson, 2006, p. 2).21

This sort of language is frequently echoed on the technical side of the infovis spectrum. Mike Bostock, the author of D3, the ubiquitous JavaScript library that runs most contemporary web-based visualizations, and a former graphics editor at the New York Times who helped herald the current age of data journalism, has proclaimed that "D3 espouses abstractions that are useful for any visualization application and rejects the tyranny of charts."22 When Bostock speaks of abstractions, he means the same thing that Wilkinson does when he speaks of graphs. He even refers to Wilkinson directly in making his case for D3 as the standard library of web-based data visualization. Bostock's self-authored bio notably describes him as "building a better computational medium."

Tufte frequently uses the term "analytical graphics" and sometimes blurs this with graphical evidence, visual explanations, etc. A handful of other terms are occasionally swapped for graph or chart. Among them, plot is the closest, although it has a more direct, literal meaning. When used, it generally refers to the output from plotting statistical data into a diagram - a representation of data points - as well as the frame of dimensions in which these points are arranged. Less common - or at least waning in popularity - is diagram, statistical diagram, or diagrammatic representation (which comes from Herbert Simon).23 "Diagram" refers to a much wider category of objects that includes process models, architectural representations, engineering sketches, and similar graphics that would not fall under the definition of chart, plot, or graph - even if they describe phenomena that come from similar worlds.

This semantic indeterminacy may partly derive from the "different goals" of the infovis and statistical graphics communities, according to statistician Andrew Gelman (2011, pp. 4-5). Gelman claims that statistics is about model fitting, while graphics is occasionally about EDA, but more often about publishing - about "direct representations of data."

21 An insightful discussion about the difference between these terms can be found here: https://english.stackexchange.com/questions/43027/whats-the-difference-between-a-graph-a-chart-and-a-plot.
22 https://medium.com/@mbostock/introducing-d3-scale-61980c51545f
23 See Larkin and Simon (1987).

Gelman and Unwin (2013, p. 25) have given the most thorough treatment of the different epistemic commitments between these communities:

One key difference between the two approaches is that Infovis prizes unique, distinctive displays, while statisticians are always trying to develop generic methods that have a similar look and feel across a wide range of applications... Another important difference is in the expected audience. Statisticians assume that their viewers are already interested and want to provide structured information, often a carefully prepared argument. For statisticians, graphics are part of an explanation. Even exploratory analysis typically has a clear structure. In contrast, Infovis designers want to draw attention to their graphics and thus to the subject matter. For them, graphics are more of a door opener. This is reflected in how both groups use interactivity. Infovis graphics often have video or animation, which adds to the attraction and engages viewers, who can control the animation and perhaps change colors or shapes, allowing different perspectives on the images. Statisticians, when they use interactivity, use it to link to other graphics or to models, allowing viewers to explore the argument.

This passage speaks to territorial contests that have played out in the statistics and infovis literature over the past decade. In many ways, this territorial boundary keeping has been a way for statisticians to retain a sense of domain authority, as they have been the first to decry the public (and, some would say, democratic) aims of many in the infovis community who use artistic and interactive approaches to promote greater engagement. Notably, Wilkinson is a statistician first (his PhD is in psychology), despite having spent much of his career developing statistical software packages. Bostock is a computer scientist (his PhD was done under the supervision of Jeff Heer at Stanford). Tufte's background is in statistics and political science. Gelman is a statistician, and Unwin is a statistician and graphics researcher. Among them, Tufte is the only one who ventures frequently into the artistic domain.

There is also a need to clarify the visualization types used at different stages. Sketches may beget diagrams and/or prototypes; schematics, which entail multiple diagrams stitched together in sequence; iterations; and even multimodal documents like information graphics (infographics). Infographics often include multiple visualizations alongside a persuasive textual description or story. Text is frequently as important as graphics in this medium, where interaction and dynamism generally take a back seat to effective graphic design. Infographics are often constructed using design tools (e.g. Photoshop or Illustrator), rather than programmatically. They may tell a data story, but they lose their power when the dataset is swapped (whereas many visualization tools are capable of swapping datasets to revise and update graphics).24

An entirely separate category that would require substantially more attention to address is the use of geographic visualization alongside, or as a component of, information visualization. Increasingly, the line between these two distinct fields is blurred, and many visualization tools and media, from infographics to dashboards, now include mapping tools. Even though GIS deals similarly with questions about data representation, it would be impossible to survey the field of geographic visualization here. Needless to say, clarifying the language used to define and describe these related fields is important, if only to say that visualization is a multi-faceted process that includes placing data in tables, putting them into relation, teasing out a story, etc.

24 The "dataisbeautiful" subreddit on reddit.com, a popular forum for sharing custom visualization work, taking part in visualization challenges, and debating the merits of popular infovis work, has a rubric for helping its users determine whether their submission is an infographic or a visualization. It can be found here: https://www.reddit.com/r/dataisbeautiful/wiki/index. Susie Lu, a Senior Data Visualization Engineer at Netflix, has commented on the various stages of design that determine whether something is an infographic or a visualization. See: http://www.susielu.com/data-viz/d3-annotation-design-and-modes.

What Do Graphs Do?

Diagrams are of great utility for illustrating certain quantities of vital statistics by conveying ideas on the subject through the eye, which cannot be so readily grasped when contained in figures. Florence Nightingale25

Behind the flower, the snowflake, the solar magnetogram stand not only the scientist who sees and the artist who depicts, but also a certain collective way of knowing. Daston and Galison (2007, p. 53)

Visualization is much more than just representation. It is a single part of an extensive chain of interpretation that renders data sensible to human interpreters. It is not just about turning data into graphical forms, but about seeing the world as representable. Visualization structures knowledge "without the mediation of language."26 It is a scaffold for interpretation and meaning making. It is used to support evidence and argument, but is bounded by the modes of interaction its designers choose to employ. Many of the greatest theories in science have relied on visual representation to build their evidence claims, from Feynman's use of a novel diagrammatic language to make sense of and disseminate his insights into the quantum world, to Darwin's use of cladogram trees to sketch out his theory of evolutionary change. What do graphs do, how do they work... how do they structure knowledge?

25 https://archive.org/details/mortalityofbriti00lond
26 https://medium.com/@giorgialupi/data-humanism-the-revolution-will-be-visualized-31486a30dbfb

We might break the visualization process into various stages. The first would entail exploration and familiarization. Visualization, as a principal element of EDA, facilitates this. It enables the analyst to ask where the data came from, to understand whether there are gaps in it, to determine whether it is trustworthy, and to find threads within it that might help construct an argument. Exploration may be a core part of the infovis pipeline, but visualization is not tantamount to analysis - it is a tool for analysis, and can help an analyst/interpreter both find and tell a story with data, the latter through the use of graphic design techniques. How does visualization work with EDA? While tabular interfaces enable accurate, granular searching, the cognitive load required to interpret them grows quickly as datasets get larger. Visualization facilitates the capacity to search for trends - especially macro trends.

Following the exploration stage, pruning, selection, refinement, and model development normally take place. These actions typically fall in the domain of statistics and, while they sometimes make use of graphical tools, and are subject to increasing forms of automation, they are out of my purview here. Together, though, they prepare data for narration, storytelling, animation, and performance. Bruno Latour (1983, p. 3) suggests that "the most powerful explanations, that is those that generate the most out of the least, are the ones that take writing and imaging craftsmanship into account." At this stage, visualization can present signposts adjacent to descriptive text that may help the reader/audience stay anchored to the underlying data. Advocates of data storytelling will argue that visualization is simply one piece of a UI strategy that emphasizes piece-by-piece revelation. "Scrollytelling" interfaces and animated data graphics are both examples of this. In this sense, visualization can be described as storytelling media, where helping reveal the data's story is a crucial function of what graphs do. The data designer has agency here, as the story does not "reveal itself," and the graphic hardly ever "speaks for itself," contrary to positivist myth. It is revealed through very deliberate design, discipline, and epistemic choices.27

Storytelling is the intermediary between exploration and publication. This speaks to visualization's communicative purpose, which Stephen Few distinguishes from its sensemaking (or analytical) purpose.28 Few argues that data is formless, and that visualization is merely a way to "give form to that which has none." He likens this to a process of "translation of the abstract into physical attributes of vision (length, position, size, shape, and color, to name a few)." This is the stage at which data stories are translated to intended audiences. Few's reductive approach is not without its critics. By the end of this dissertation, it should be clear that data is never "formless" when it is in the wild, and treating it as such is dangerous.

The question of how infovis structures knowledge is at least partly determined by questions of graphical perception. A (visual) graph/chart/plot/visualization relies on the graphical perception modes available to its interpreter. This means it requires experience interpreting geometric shapes like rectangles and circles, as well as the capacity to infer meaning from their size, area, volume, and orientation. As such, arguments about what is truthful are generally associated with claims that rely on support from perceptual studies. Few claims that "we must follow design principles that are derived from an understanding of human perception."29 What he really means is that, in most cases, we must follow design principles based on our understanding of visual perception.

27 See https://www.propublica.org/article/when-the-designer-shows-up-in-the-design.
28 See his definition of "data visualization" from the 2nd edition of the Encyclopedia of Human-Computer Interaction: https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/data-visualization-for-human-perception.
29 https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/data-visualization-for-human-perception

The authoritative study in infovis is Cleveland and McGill (1984, p. 532), which ranks the elementary perceptual tasks30 we use to make sense of quantitative information from graphs. While the field of graphical perception has made significant advancements since their writing, the basic perceptual operations that Cleveland and McGill describe remain accurate today, even if the perception studies they are derived from are not epistemologically neutral.31

30 These include inferring meaning from position (in both common and non-aligned scales); length; direction; angle; area; volume; curvature; and shading.
31 See Colin Ware's work in this area, especially Ware (2012).
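A concrete way to see why this hierarchy matters: position along a common scale ranks above angle, which is one reason near-equal values are easier to compare in a bar chart than in a pie chart. The following is a minimal sketch of this comparison in Python with Matplotlib (the values are invented for illustration, not drawn from Cleveland and McGill's stimuli):

```python
import matplotlib.pyplot as plt

# Invented shares for five categories; the values are deliberately close.
labels = ["A", "B", "C", "D", "E"]
values = [23, 21, 20, 19, 17]

fig, (bar_ax, pie_ax) = plt.subplots(1, 2, figsize=(9, 4))

# Position along a common scale: small differences are easy to rank.
bar_ax.bar(labels, values)
bar_ax.set_title("Position encoding (bar chart)")

# Angle/area encoding: the same near-equal values are much harder to rank.
pie_ax.pie(values, labels=labels)
pie_ax.set_title("Angle encoding (pie chart)")

plt.tight_layout()
plt.show()
```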

While the previous paragraphs have demonstrated what looks to be a kind of stepwise order - capture to EDA to sketching and prototyping to storytelling to publication - there isn't always a sequential logic to how these specific visualization steps take place. Visualization professionals from different fields have different design processes and workflows that they follow. Some fields (e.g. experimental physics) place an emphasis on publishing visualizations in a pre-publication stage (e.g. on arXiv.org) and solicit feedback to iterate on the process. While the order of sequences isn't always fixed, different disciplinary visualization practices nearly always share the same specific, concrete goal: the stabilization of truth claims. "Those who discover an explanation are often those who construct its representation," Tufte (1997, p. 9) writes. The most important function of what visualization does lies in its deep connection to the act of evidence construction. In doing so, it both reifies epistemic values and enforces participation in distinct epistemic communities.

Naturalization: Aesthetics vs Truth

In short: There are no rules. And here they are. Scott McCloud (2006)

Statistics, as a discipline, is grounded in a rigid positivist orientation. Its history is riddled with influential figures who champion a kind of determinism in which everything is measurable and, as a consequence, capable of being optimized. Adolphe Quetelet, for example, who developed a theoretical approach he termed "social physics" that posited an "average man" could be calculated by plotting the mean values of measured biometric variables, believed that all data, especially biometric data, should fit the normal distribution. A history of measurement is not the task here, but it is worth noting that virtually all data collection and measurement methodologies have been developed by individuals, like Quetelet, whose inherent biases are apparent.32 Why would we think that data visualization methodologies wouldn't be?

While screen-based visualization naturalizes epistemic conditions like ocularcentric interaction, it also obscures an underlying formalism on which its software interfaces are scaffolded. In the following sections, I will discuss this formalism, as well as an idea linked to it: that there is a grammar of graphics to which information designers must adhere, or from which they diverge at their peril. To explain this mechanism, I will first describe its reliance on an epistemology that Johanna Drucker has described as "mathesis." Finally, I will discuss how it contributes to a naturalized epistemic belief that aesthetics and truth are incommensurable in the infovis domain.

Mathesis and Graphesis

Images have a history, but so do concepts of vision and these are embedded in the attitudes of their times and cultures as assumptions guiding the production and use of images for scientific or humanistic knowledge. Johanna Drucker (2014, p. 19)

32 Jen Christiansen, Senior Graphics Editor at Scientific American, discusses this issue in great detail in the following presentation: https://blogs.scientificamerican.com/sa-visual/visualizing-science-illustration-and-beyond/

Graphical formalism relies on the existence of an underlying epistemological condition, one that Johanna Drucker describes in her writings on the relationship between what she terms mathesis and graphesis. Mathesis, according to Drucker, is "knowledge represented in mathematical form, with the assumption that it is an unambiguous representation of thought." It implies a condition in which knowledge and its representation are to be taken as a "perfect, symbolic, logical mathematical form." It is the underlying logic for the contemporary condition in which "trust in numbers" has become the ethos of data-driven life (Drucker, 2001, p. 141). This definition entails more than a positivist epistemology, although it is deeply tied to such a worldview - and to the "objective view" that 19th century atlas makers cultivated when they "invited nature to paint its own self-portrait" (Daston and Galison, 2007, p. 113). N. Katherine Hayles, building on a feminist critique of the history of logic by Andrea Nye, suggests that the social implications of such an epistemology are often "masked by being presented as preexisting laws of nature" (Hayles, 1999, p. 60).

Graphesis, on the other hand, is "knowledge manifest in visual and graphic form," in which knowledge and its representation are encountered as replete, instantiated, discrete, particular, and embodied forms of information. It is, Drucker writes, "premised on the irreducibility of material to code as a system of exchange; it is always a system in which there is loss and gain in any transformation that occurs as a part of the processing of information" (Drucker, 2001, p. 145). Unlike human language, Drucker (2014, pp. 24–26) notes, "which has a grammar, or mathematics, which operates on explicit protocols, visual images are not governed by principles in which a finite set of components is combined in accord with stable, fixed, and finite rules." Despite this, she acknowledges that our systematic use of images, visualization forms, icons, etc. "have created standards and consensus across a wide variety of disciplines." These standards become an architecture of communication as they stabilize. They include standard styles, texture and colour conventions, and the use of popular materials. Drucker notes how, when taken together, they align "with semantic elements." Relations, composition, sequence, and narrative, on the other hand, perform syntactic functions. Tracing this to the "rational ordering principles" of Enlightenment thought, "and the articulation of universal formal principles to modernists trying to find a scientific basis for visual work," Drucker explains how "the gap between the use of visual images to communicate knowledge and the development of the concept of a 'language of graphics' was only closed in the twentieth century when formalized rules of visual communication were articulated in very deliberate terms." What do these formalized rules look like when they are embedded in graphical technologies?

Grammar of Graphics

Graphesis and mathesis are in a perpetual state of entangled tension, nowhere more so than in the space where data interaction and interface design meet. A "grammar of graphics" is not a software-specific phenomenon. As pointed out earlier in the discussion of Bertin, there are semiotic conventions that are naturalized in our modern sensemaking practices. "The analysis of graphics as a system, one that could be governed by predictable rules, explicitly articulated, arose within the visual arts. Specifically, these systems of rules arose in the arena of applied drawing useful for industry and engineering. In these realms drawing was more linked to surface organization of elements that provided plans and patterns for production than to the creation of pictorial illusion" (Drucker, 2014, p. 27). Drucker traces graphical formalism from Victorian-era ornamental design to typographic developments of the interwar years to modern user interface development. It is in this latter development that we find an intriguing set of structuring principles that go unrecognized by most infovis users.

According to Leland Wilkinson (2006), a grammar of graphics is, simply, a formalist approach to the development of data visualization technologies. It is an attempt to formalize the representation of statistical evidence. Wilkinson's paramount concern is to operationalize graphical clarity, which, according to Kostelnick (2008), cannot be divorced from the rhetoric of scientific practice, and is a much more multifaceted (and ambiguous) process than most software designers are willing to admit. It includes aesthetics, perception, cognition, the "socializing influence of learning and experience," and "the exigencies of the rhetorical situation."

The importance and influence of Wilkinson's text cannot be overstated, as it is frequently referenced as a sourcebook for designers, as well as embedded in some of the most common infovis technologies available today. There are two different ways we need to consider what a grammar of graphics is and what it does: semiotic and technical.

What is a grammar of graphics in the first, semiotic sense? Jacques Bertin's "Semiology of Graphics" is often regarded as the hallmark text for understanding how meaning is made from graphical information symbols. Bertin's semiology is a kind of grammar - a logical, structuralist foundation, based on perceptual studies, for how we interpret the signs of information graphics. Manifest in visualization practice, this is an imposition of form that enforces epistemic rules for the visualization community. These might be easy to denaturalize if they weren't so pervasive.33

In the second, technical sense of what a grammar of graphics is, we must consider how it functions in today's software. Statistician Hadley Wickham, the outspoken architect of numerous widely-used data processing, analysis, and visualization tools for the R language (including ggplot2 and tidyr), writes: a "grammar of graphics is a tool that enables us to concisely describe the components of a graphic" (Wickham, 2010). In a sense, it is both a structural architecture and a template for design. Wickham, more than anyone else, has been responsible for implementing such a system in a suite of tools that form a concrete pillar at the foundation of the emerging field of data science. Wickham's suite of "tidy" data tools, for example, includes methods for implementing Wilkinson's grammar. "Tidy visualisation tools only need to be input-tidy as their output is visual. Domain specific languages work particularly well for the visualisation of tidy datasets because they can describe a visualisation as a mapping between variables and aesthetic properties of the graph (e.g., position, size, shape and colour). This is the idea behind the grammar of graphics (Wilkinson, 2006), and the layered grammar of graphics (Wickham, 2010), an extension tailored specifically for R" (Wickham et al., 2014). Wickham's tools have been described as providing a "consistent API with sane defaults. The consistent interface makes it easier to iterate rapidly with low cognitive overhead. The sane defaults makes it easy to drop plots right into an email or presentation."34 Again, it might be easy to dismiss Wickham's universal design rules if they weren't woven into many of the most popular visualization tools. Even Matplotlib, which isn't an instantiation of Wilkinson's ideas as ggplot2 is, has its own formal structure, resulting in its widespread use as the backend for most Python-based visualization libraries.

33 See various visualization selection tools: https://datavizcatalogue.com/; https://datavizproject.com/; https://www.data-to-viz.com/caveats.html; http://www.vizwiz.com/2018/07/visual-vocabulary.html.
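As a minimal sketch of what this variable-to-aesthetic mapping looks like in practice - here using plotnine, a Python port of Wickham's ggplot2, for consistency with the other Python examples in this chapter (the dataset and column names are invented for illustration):

```python
import pandas as pd
from plotnine import ggplot, aes, geom_point

# Invented "tidy" data: one row per observation, one column per variable.
df = pd.DataFrame({
    "gdp_per_capita": [12.0, 25.3, 8.1, 31.4, 18.9],
    "life_expectancy": [71, 78, 65, 81, 74],
    "population_m": [5, 20, 8, 60, 33],
    "region": ["A", "B", "A", "C", "B"],
})

# The grammar at work: declare mappings from variables to aesthetic
# properties (position, size, colour), then add a geometric layer.
plot = (
    ggplot(df, aes(x="gdp_per_capita", y="life_expectancy",
                   size="population_m", color="region"))
    + geom_point()
)
plot.save("grammar_of_graphics_sketch.png")
```

The point is not the resulting chart but its declarative form: the designer never places a mark directly; the grammar derives the marks from the declared variable-to-aesthetic mappings.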

To conclude, a grammar of graphics is not just a system of design principles. It is the embedding of that system, its logic, and its designer's epistemology into future technologies. Furthermore, it provides a means for validating truth claims by measuring how well representations adhere to its rules. Importantly, it becomes the basis for visualization pedagogy because it makes visualization easier to teach. Why might such a thing be problematic? Because it also forecloses the ability to critique knowledge claims from a position outside of this normative one, and this runs counter to what epistemic technologies should be capable of. It might be that we need a more encompassing term to describe what is going on here, as grammar - in both the semiotic and technical senses - fails to define these various phenomena, which include forms, structure, embedded logic, naturalization processes, epistemic boundary keeping, and, lest we forget, moments of resistance and creative pressure against the boundaries of this structuring mechanism.

34 http://pythonplot.com/

Figure 3.3: A diagram of the graphical structure of a figure generated by Matplotlib. The source for this diagram can be found here: https://matplotlib.org/gallery/showcase/anatomy.html
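To make Matplotlib's embedded formal structure concrete, here is a minimal sketch that inspects the object hierarchy behind a trivial figure (the printed class names may vary slightly across Matplotlib versions):

```python
import matplotlib.pyplot as plt

# A Figure contains Axes; each Axes owns the artists drawn inside it
# (lines, ticks, labels) - the "anatomy" referenced in Figure 3.3.
fig, ax = plt.subplots()
line, = ax.plot([0, 1, 2], [1, 3, 2])
ax.set_xlabel("x")
ax.set_ylabel("y")

print(type(fig).__name__)   # Figure
print(type(ax).__name__)    # Axes (AxesSubplot in older versions)
print(type(line).__name__)  # Line2D
# Even tick labels are first-class objects in the hierarchy:
print(len(ax.xaxis.get_ticklabels()))
```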

Aesthetics|Truth Entanglement

With a structural grammar of graphics underwritten by the condition of mathesis, the resulting media environment finds aesthetics and truth caught in a kind of epistemic entanglement. Truth in this equation is an umbrella term for trust and evidence. How do infovis designers, professionals, and researchers express their commitments to these concepts? How are discipline-specific epistemologies tied to them?

In chapter 4, I will go into greater detail about a flawed idea: that data visualization and art can never be the same. Moritz Stefaner, arguing against the idea that aesthetics and information are incommensurable, suggests that "aesthetic value goes way beyond pure pleasure or decoration, and arises from a couple of other factors, some of which are novelty, allusions, cultural references, unfolding of perceptual experiences." He notes that he has "seen literally any combination of aesthetics and informativeness" in his time as a prominent data designer, asking "what if... these two are maybe actually independent dimensions, not arch enemies?"35 He is, however, one of a new breed of data designers willing to push against Tuftean minimalism and statistical graphics dogma. Among his peers, Fernanda Viégas and Martin Wattenberg have made a career out of bridging these domains. In tracing the rise of artistic visualization, they lament the fact that "traditional analytic visualization tools have sought to minimize distortions, since these may interfere with dispassionate analysis" and suggest that we "embrace the fact that visualizations can be used to persuade as well as analyze" (Viégas and Wattenberg, 2007). Their prominent role leading a team at Google trying to make machine learning less opaque to the public is a testament to this. The fact that they undertake this mission by producing highly interactive, expressive, and - dare I say - beautiful visualization work is a demonstration of their claims, and a refutation of those who argue that aesthetic features have little value in information graphics.36

35 http://well-formed-data.net/archives/1210/little-boxes

But it’s also possible that many in the visualization community practice a form of self- surveillance, not unlike that described by Daston and Galison (2007, p. 174), in order to maintain the appearance of legitimacy within their professional community. Others, yet, risk ostracization from the “serious” side of the field, even while their public reach is considerable (Lupi, for example, who produces unique, idiosyncratic, and well-received visualization projects from both quantitative and qualitative datasets37). Coming back to the earlier question about whether data visualization is about reducing complexity and increasing clarity, various recent projects have pointed out that interactivity makes it possible to let the user grapple with ambiguity as part of the sensemaking process.

This isn’t just about aesthetics vs truth. It’s about what constitutes knowledge making in scientific domains, as if knowledge somehow means numerical, neutral, and truthful, but not qualitative, humanistic, or artistic.

Case Study: Prototyping Multimodal Visualization Interfaces

Nowhere are these entangled tensions more evident than when visualization interfaces or experiences mix different modes of representation and interaction. The term multimodal generally has one of three meanings. The first, what we might call multimodal representation, describes the mixing of different visualization tropes (e.g. a histogram alongside a scatterplot, bordered by an interactive map) in a single screen environment. Contemporary dashboards are examples of this. The second, what we might call multimodal interaction, mixes different interaction modalities (e.g. gesture and voice input) in a single experience. The third use of the term indicates multiple sensory modes engaged at a single time. For this, I will use the term multisensory rather than multimodal.38

36 See http://hint.fm/.
37 See Lupi's "bruises" piece: https://medium.com/@giorgialupi/bruises-the-data-we-dont-see-1fdec00d0036.

In the following sections, I will describe my experiences prototyping multimodal data interfaces for two different data projects. In each case, I drew inspiration from the user experience of what has become the de facto interface of data-driven engagement: the dashboard. This is an especially interesting epistemic technology, as it has no single normative user, is found in areas ranging from finance to self-tracking to civic experience, and is still in its relative infancy (and, as such, is not bound by the same structural conventions that individual data graphics are). In the cases that I will describe, I was forced to reconcile with my own epistemic biases about dashboard design. When I realized that this interface metaphor was insufficient for the kinds of data presentation I was interested in, it surfaced questions of why and how the dashboard had been naturalized as the contemporary interface to data in the first place.

Visualizing Craft Expertise

The first data project focused on combining multiple views on a dataset created to capture "craft expertise." In 2015, I was contacted by Arno Verhoeven, a researcher in the University of Edinburgh's College of Art. Verhoeven had been involved in an international collaboration to understand the craft practices of expert artisans in Canada and Scotland. These experts were taking part in a two-year residency between Aberdeenshire in Scotland and the Art Gallery of Burlington in Canada. Through this international collaboration, known as the Naked Craft Network, Verhoeven was developing a repository of craft expertise by collecting gestural feedback from hand-worn inertial measurement unit (IMU) sensors and mapping it to video from head-worn cameras.39

38 See Morana Alač (2011), whose work on multimodal semiotic interaction is among the few projects that really bridge these three separate meanings.

During the group’s time in Canada, Verhoeven and I experimented with various ways of visualizing (and “physicalizing”) gestural data collected from the hand-worn IMUs. We sought to both digitize and materialize gestures using a combination of Python visu- alization and 3D design tools. To make sense of these disparate pieces, I attempted to prototype a dashboard-like UI that would feature an animation of an expert’s hand move- ments through space, a 3D scatterplot of both hands in action, some descriptive text, and video from the head-mounted cameras. At the time, mature dashboard libraries

(e.g. Plotly’s Dash) were not yet available, so my interface prototypes were written from scratch in html. Our intention was to be able to use the various connected visual pieces to describe moments of expertise, with the assistance of the experts, during an exhibit opening at the Art Gallery of Burlington. For this, Verhoeven and I focused on a single example of data captured from an expert linocut artist, and I prepared a Python-based

3D scatterplot that could be manipulated for different views, as well as a 3D-printed data sculpture modelled on the scatterplot.
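The original prototype code is not reproduced here; the following is a minimal sketch of the general approach, with synthetic points standing in for positions derived from the IMU stream. Colouring points by time is one simple way to make start and end points legible, which speaks to the implied-causality problem discussed below:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for a hand trajectory (a noisy rising spiral);
# the real data came from hand-worn IMU sensors.
rng = np.random.default_rng(42)
t = np.linspace(0, 4 * np.pi, 500)
x = np.cos(t) + 0.05 * rng.standard_normal(t.size)
y = np.sin(t) + 0.05 * rng.standard_normal(t.size)
z = t / t.max() + 0.05 * rng.standard_normal(t.size)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Colour encodes time, so an interpreter can see where the gesture
# began and ended rather than inferring a causal order from clusters.
points = ax.scatter(x, y, z, c=t, cmap="viridis", s=8)
fig.colorbar(points, label="time")
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()
```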

An immediate issue had to do with a sense of implied causality when looking at the 3D scatterplot, even before knowing where the start and end points were.40 Recognizing a large cluster of points on one hand, the artist was asked to describe her hand motions. We assumed that she had slowed down at that physical location, indicating some measure of precise gesture. In fact, it was indicative of her hand resting. This could be partly corroborated by watching the step-by-step animation of the hand's movement unfolding, although it was no substitute for the video. In this sense, reducing "expertise" to a cluster in a scatterplot - one that an interpreter of a dashboard would be predisposed to focus on - was especially problematic.

39 For this project, we used WAX9 streaming IMU data loggers from Axivity: https://axivity.com/. More information about the Naked Craft Network can be found here: https://www.eca.ed.ac.uk/research/naked-craft-network.
40 For a good survey of this problem, see Bergstrom and West (2018).

Figure 3.4: A 3D scatterplot spatial representation of an expert linoleum artist's hand movements.

Two additional issues came up through this work. The first had to do with a tension between creating an interface for EDA that would enable the crafters to interpret what their hands were doing, and an interface for publication and storytelling that could isolate moments of so-called expertise to demonstrate - with data visualization - what had been taking place; in effect, to find interesting stories and use the interface to reveal them. For EDA, a dashboard would be sufficient. For storytelling, we would need either a clear, linear narrative or an evocative object that could help articulate the story. We chose the latter, and I successfully modelled and printed "data sculptures" of select "moments of craft expertise."

The second and equally important issue had to do with the fact that there were, at the time, scant screen-based resources for combining 3D digitized objects with 2D visualizations. This carried with it the implication that any 3D objects created during our prototyping would be little more than artistic sculptures that would have to stand on their own. While this has changed somewhat in the time since, there remains a resistance in the visualization community to comparing 2D and 3D objects in the same visual space. 3D objects, for better or worse, are often treated as "art" objects.

Figure 3.5: A 1:1 scale 3D-printed data sculpture of the hand movements of an expert linoleum artist (represented by the cluster of orange points in the previous figure).

Self-Tracking Narratives

Following this work, I was contacted by a group at Intel Labs who were conducting a study on wearable technologies used to measure caregiver stress. This study was undertaken in a quantified self/self-tracking context, and my design work was part of an ongoing effort to build interface prototypes that would facilitate combining qualitative and quantitative data. For this project, a diverse range of biometric data was collected from caregivers, presenting a rich opportunity for exploring dynamic, multimodal approaches to visual analytics that would emphasize the caregiver's role in the act of data analysis and interpretation. Our aim was to prototype visualization interfaces that would open this dataset to modes of interaction that emphasize dynamic manipulation, with a goal of providing useful insights for the development of hardware and software of interest to the broad self-tracking community (and beyond).

My work was arranged thematically around an exploration of the temporal aspects of caregiver stress. Caregiver experiences are often underrepresented or misrepresented in the process of data interpretation. The challenge of offering understandings of stress that go beyond reductive physiological measures requires techniques that afford subjective, auto-phenomenological reporting of affective conditions by individual caregivers. Such techniques can provide a means to extend and augment external, objective reporting by examiners and data analysts. By preparing prototypes for an affective analytics interface that caregivers could use to make sense of their own biometric data (in ways that highlight temporal experience), my aim was to widen an ecology of methods and modes of visual experience emerging at the intersection of visual analytics, self-tracking, and wearable computing.

The scope of my work for this project included planning, design, and prototyping of a limited set of functional models for a multimodal affective analytics interface. These models would use existing self-tracking data collected from a wearable Empatica E4 wristband, which recorded electrodermal activity, heart rate, and blood volume pulse data (among other things). My additional goal was to prepare an interface that could scale to real-time, dynamically-generated data.

A single design choice stands out from this project. It came about when I realized that a dashboard metaphor would be insufficient for telling the caregiver's story. Because our goal was to prototype an interface that gave equal weight to more qualitative, subjective measures, like journal notes and images taken from a chest-worn camera, I had to balance the real estate needed for visualizing numerical data (using time-series charts that I wrote from scratch using the D3 JavaScript visualization library) against a chronological timeline for presenting images and text. Because of this, I elected to use a scrolling interface design paradigm.41 This enabled me to focus on key moments of the day, such as periods throughout the night when a caregiver was awakened by an alarm from her diabetic son's blood glucose meter.

Figure 3.6: A section of the narrative describing a research subject, followed by a brief description of the data capture device's biometric measures, and an interactive time-series interface for studying a day's worth of heart rate data.

Figure 3.7: An interactive visualization I designed for comparing biometric measures from the caregiver and the person they were caring for. Brushing and zooming for context made it possible to pinpoint specific points during the day when physiological indicators marked moments of stress, which could then be cross-referenced with image data captured from a chest-worn camera.

A dashboard would have enabled EDA to a far greater degree, but it would not have presented the same contextual insights into the story of a single research participant. Scrolling (and scrollytelling) interfaces are commonly used for publication. Although the data graphics I used were interactive, they were also intended to function as an "authoritative text" - to tell a story to potential funders or make a case for new products. While still an interpretive interface, as the user would be required to scroll, navigate, and ask new questions, this publication-focused design would bound the kind of knowledge that could be made from the dataset. Why is this an important consideration? Because scrolling data stories are frequently found in data journalism contexts, where a minimal amount of interaction obscures the fact that there is an underlying story that the authors and editors are trying to tell. And like the data "sculptures" I described in the previous case, they are frequently criticized for not being neutral, for celebrating subjective interpretation, or for focusing too much on aesthetics.

41 Since then, "scrollytelling" interfaces, which reveal interface elements as a user scrolls down a page, have become popular for online data stories.

Connections

A common thread connects these visualization projects. Despite a prevalent design bias that multimodal interfaces should resemble their Bloomberg terminal origins, or that scrolling interactive data stories should draw their data-graphical elements from the dashboard interface, there is no accepted interface grammar for what goes on (or in) the variety of new products that exemplify these multimodal trends. And yet there is a prevailing idea that there is a common interaction grammar, adopted directly from UX/UI principles for web interaction, that should govern their use. What makes such a grammar untenable in this context is that the worlds it seeks to unite are epistemically divergent: quantitative|qualitative (objective|subjective); 2D|3D; static|dynamic.

The failure to design an effective dashboard-like interface in the former case resulted in a need to create data sculptures to serve as narrative aids. In the latter case, electing to go with a storytelling interface made it possible to combine the different effects that were necessary to describe the case, but proved to conflict with the secondary goal of providing a suitable interface for EDA. In both cases, I was hesitant to move away from a dashboard metaphor for fear that the result could be criticized for subjective bias, for favouring storytelling over evidence revelation, or for emphasizing aesthetic concerns.

Method: Critical Visualization

The challenge for visualization is to simultaneously intensify the representationalism of its methods - call them to attention in a graphical, critical way, while undoing the belief system that representationalism supports - that a world can be known in some stable way. Johanna Drucker42

In the opening chapter, I wrote that the dissertation's guiding theme is that contemporary information visualization technologies are shaped by the predominant epistemic condition under which they arise: ocularcentric objectivity. The disparate fields that make up infovis require new methods and approaches in order to surface and denaturalize both this condition and the false dichotomy between aesthetics and truth that has been discussed throughout this chapter. The following two chapters will examine what happens when ocularcentrism is not the predominant guiding design principle - when the eye is de-centered and visualization is forced to "intensify the representationalism of its methods," so to speak. But we must also consider what happens when visualization remains on the screen, in its visual, ocularcentric milieu. Is it possible to reconcile the incommensurabilities and different visualization cultures - to pursue new methods that are both rigid and empirical while also critical and interpretive? To promote not only graphical criticism, but a critical orientation to visualization at the same time? To answer Drucker's call to build "critical languages for the graphics that predominate in the networked environment" (Drucker, 2014, p. 17)? The following short sections list a handful of tools and methods that might fit into a methodological toolbox for researchers and professionals interested in pursuing critical visualization practice.

42 http://empyre.library.cornell.edu/phpBB2/viewtopic.php?t=1258&sid=6519b1d356bda948077430f18d23bd37

Designing Against Bias: Lena Groeger, an investigative data journalist at ProPublica and prominent voice in the visualization world, writes: "Once we acknowledge that as a community we have imperfect assumptions, we can work to get better. One way to do that is to intentionally design against bias."43 Kate Crawford has proposed a short rubric for doing this: "With every big data set, we need to ask which people are excluded. Which places are less visible? What happens if you live in the shadow of big data sets?"44 She advocates that data scientists (and, by extension, visualization engineers) "take a page from social scientists" by asking where the data comes from, what methods were used to gather and analyze it, and what cognitive biases they might have. In other words, bring context-awareness, situatedness, and contingency to bear on visualization methods. Importantly, visualization designers should be willing to disclose these things in their final designs, as Dörk et al. (2013) advocate.

Designing Against Abstraction: This also includes resisting reductive abstraction. In 1830, Armand Joseph Frère de Montizon published the first "dot map," in which a single dot represented the equivalent of 10,000 people in each French province, producing a visual of population density. Snow's cholera map a quarter century later remains the benchmark for this technique of abstracting human lives or deaths. Drucker (2014, p. 136) asks: "who are those dots? Each individual has a profile, age, size, health, economic potential, family and social role. In short, each dot represents a life, and no life is identical." The Wee People typeface responds to this problem by "making it easy to build web graphics featuring little people instead of dots."45 Coming up with methods to counter the tyranny of dots is a first step toward trying to re-inscribe the human in the abstraction.

43 https://www.propublica.org/article/when-the-designer-shows-up-in-the-design
44 https://hbr.org/2013/04/the-hidden-biases-in-big-data

Critique by Redesign: Viégas and Wattenberg recently argued that, because visualization is still a new field, with plenty of room for improvement, we “need more criticism, and redesign is an essential part of visualization criticism.”46 They claim that perceptual psychology and related sciences can only provide so much guidance for the field. “Human judgment remains essential to the process.” They favour a respectful method of critique where designers submit their work to the broader community and iterate based on feedback. Elements of this are already at play in reddit’s “dataisbeautiful” subreddit, which has a “make it better monday” feature, as well as in weekly redesign collective projects like “Makeover Monday.”47 Those who advocate such an approach should be cautious, however, to avoid indoctrinating novices into specific epistemic cultures, or toward design ideals like the minimalism that is a hallmark of Tufte’s redesign projects. In a way, though, redesign can perform like replication ostensibly does in experimental science - with the goal of producing stronger evidence.

Conclusion

Giorgia Lupi, whom I discussed earlier in the chapter, has recently articulated a program for what she refers to as “data humanism.” Her work that best exemplifies data humanism is a piece visualizing the music of guitar player Kaki King.48 Lupi asks “Can we feel data? And can we see music?” and then creates a remarkably creative, nuanced, artistic, subjective representation that completely resists automation. She closes her description of the piece by stating: “This is a multimedia exploration of the role of data in our lives, and a call for Data Humanism, in which we reclaim subjectivity on how data is captured and collected, and embrace the subjectivity that invariably comes with how data is represented.”49

45https://github.com/propublica/weepeople. Jake Harris of the New York Times wrote a thoughtful piece on designing with empathy that touches on these themes: https://source.opennews.org/articles/connecting-dots/. This type of visualization, from an otherwise terrific project which depicts victims of conflict in Colombia, speaks to the need for considering this issue: https://centerforspatialresearch.github.io/colombia_site/applications/interactiveViz.html.
46https://medium.com/@hint_fm/design-and-redesign-4ab77206cf9
47See https://www.reddit.com/r/dataisbeautiful/ and https://www.makeovermonday.co.uk/.

Johanna Drucker (2014, pp. 178–196), who has been the most vocal proponent of a kind of critical, humanistic visualization, echoes many of Lupi’s concerns, but with a far more incisive critique. “When we finally have humanist computer languages, interpretative interfaces, and information systems that can tolerate inconsistency among types of knowledge representation, classification, fluid ontologies, and navigation, then the humanist dialogue with digital environments will have at the very least advanced beyond complete submission to the terms set by disciplines whose fundamental beliefs are antithetical to interpretation.” But she adds a rejoinder to this, stating that “the interpretative and the empirical need not exclude each other,” and calls for a “graphic grammar” for an “emerging visual system inclined to present the embodied, situated, circumstantial, and fragmentary quality of knowledge.”

Outlining the elements of such a grammar here would be foolhardy. It will take the work of an emergent community of critical visualization researchers, practitioners, theorists, and professionals, coming from diverse domains ranging from computer science to statistics to history. As a community, we must become familiar with and push against the teleological history of infovis, and we must be willing to challenge the structural formalisms that shape it. Together, we must be willing to call into question the orthodoxy of a field that relies primarily on perceptual studies (especially when psychology is currently facing a crisis of reproducibility for many of its key experimental findings50). We must be willing to pursue infovis in unfamiliar domains, such as qualitative research or humanistic inquiry.

48http://giorgialupi.com/a-dialogue-between-four-hands-my-ongoing-collaboration-with-kaki-king/
49Also see her pieces here: https://medium.com/@giorgialupi/data-humanism-the-revolution-will-be-visualized-31486a30dbfb and here: http://www.printmag.com/information-design/data-humanism-future-of-data-visualization/

We must be willing to challenge its normative and reified interaction conventions (which I will discuss further in the following two chapters). Finally, we must put to bed the tired fallacy that truth and beauty are somehow incapable of guiding design equally.

This chapter asked how we can denaturalize the “increasingly familiar” interface of visualization. To this question, I have demonstrated that various methods of reflexive inquiry will be necessary. Through case studies of my own prototyping process, I have described some of these methods. The chapter also asked what the epistemic commitments of the infovis community are, and traced a brief history of the field to outline some of them. We might be told that the primary epistemic commitment is to reducing cognitive load or producing technologies that reduce complexity; I have given examples that show this isn’t necessarily true. We might discover that many researchers and professionals will claim it is the use of graphical techniques to pursue truth; this, as well, has been demonstrated to be untrue in many cases. Aesthetically, graphical minimalism is as prevalent as UI fireworks, so there is no uniformity there either. Perhaps the reason that Manuel Lima’s important 2009 manifesto, which sought to outline the field’s epistemology, was so contentious is that the epistemic commitments that infovis practitioners and researchers bring to the table are actually incredibly diverse.51

Information visualization has taken shape over (mostly) the past century as an ocularcentric enterprise. The previous chapter highlighted some problems with the ocularcentric legacy of representational practices. The stakes are high for developing visualization practices that are both accessible and valid from a usability perspective and, in many ways, these are things that a grammar of graphics purports to enact. But there are many potential drawbacks to relying on strictly formalist tools. First and foremost, they re-state a rational positivist vision that the world is reducible and that human subjectivity is ultimately capable of being characterized as simple rules. Taken to its conclusion, this argument facilitates an automated approach to visualization based on a belief that reasoning can be reduced to binary conditional statements and decision trees. In this epistemology, contingency is invalidated by design. While there are surely many who wouldn’t see this as a problem, there is a chorus of influential voices in infovis expressing concern about automated visualization tools. This is a very real professional issue for an industry that faces potential future labour challenges caused by automation.

50See https://www.theatlantic.com/science/archive/2018/11/psychologys-replication-crisis-real/576223/.
51http://www.visualcomplexity.com/vc/blog/?p=644

In a more public sense, this formalist epistemology and its embeddedness in tools also foster a culture in which reductive “in one chart” explanations proliferate.52 What do we lose if all it takes to understand the size of the economy, or the distribution of voters in a region, is the insertion of a .csv into a GUI outfitted with a button to visualize the data according to some pre-defined heuristic? The best data theorists, journalists, and storytellers know that this approach to reducing complexity is flawed, which is why there is a thirst for detailed, engaging information stories that weave rich narratives and data graphics together. In the following chapter, I will continue this line of questioning, but with a markedly different medium: virtual reality.

52See, for example, http://www.visualcapitalist.com/extraordinary-size-amazon-one-chart/.

Chapter 4

Visualizing The Embodied Data Sublime

Summary: This chapter introduces the concept of the embodied data sublime. It presents a case study of an immersive virtual reality-based visualization environment and considers how scale relations that are naturalized in 2D screen environments are disrupted in VR. The chapter asks what happens when the body is reinscribed in the interface of data interpretation. It closes by suggesting that using “estrangement” as a methodological approach has the potential to provoke intrigue, curiosity, and wonder in users, analysts, and data interpreters.

Observation has been over-rated... Don’t just peer: interfere.

Hacking (1983)

Introduction: The Data Sublime

If Cartesian ocularcentrism has resulted in the removal of the body from the interface of visualization, as I claim in chapter 2, what happens when it is reinscribed back in? In that chapter, I examined entangled phenomena at the root of the epistemic condition of ocularcentrism, using a case study of museum tactile representation to demonstrate how naturalized perspectival arrangements get ported into various interaction contexts. Chapter 3 built on the claims made in chapter 2 while examining the tension between truth and aesthetics that marks contemporary infovis. In this chapter, I synthesize these arguments and introduce the concept of the embodied data sublime, a theoretical and methodological construct that responds to these issues by deliberately unsettling naturalized perspectival arrangements and exacerbating the tension between truth and aesthetics. The embodied data sublime produces a space of interaction in which the full sensory order of the body is brought to bear on the practice of data interpretation. I describe this concept through the case study of an immersive experience that grounds the chapter.

Responding to the dissertation’s third research question,1 this case study demonstrates how critical making-inspired methodological experimentation can inform the development of alternative visualization methods. The embodied data sublime is instantiated at the intersection of data visualization and virtual reality and, when fully realized, invokes additional questions about naturalized scale relations. These include: What happens when the naturalized scale conventions (e.g. God’s eye perspective) pervasive in infovis are disrupted? What happens when the body is inserted back into the site of data interpretation? Is data visualization really an “anti-sublime” medium, as some claim?

1How can critical making (Ratto, 2011b) inform the development of alternative visualization methods?

The embodied data sublime extends the idea of the data sublime, a concept that has been invoked recently by a handful of scholars, each ascribing a somewhat different meaning. A common thread between their uses suggests opacity, inconceivability, and requisite abstraction. Media and design scholar Claude Fortin (2016), for example, has written of the data sublime as “abstract manifestations of information technology in the everyday” - “data as a form that is virtual, transcendent, and beyond the reach of the sense apparatus.” Political economist William Davies uses the concept to comment on the “unknowability” of big data, noting that “awe is not the Data Sublime’s only approach” as it “redeems bureaucracy ... as a kind of nostalgia,” ultimately producing “an aesthetic which promises a higher order form of autonomy than that which is available through liberal appeals to consumer rationality.”2 These understandings build on earlier uses of the term, mostly coming out of visual studies and art history, which focus on the data sublime as chaotic spectacle. Davies, for example, draws on art historian Julian Stallabrass (2007, p. 82), who describes data exhibits, including Ben Rubin and Mark Hansen’s immersive 2002 Whitney piece, Listening Post,3 that provide “the viewer with the impression and spectacle of a chaotically complex and immensely large configuration of data,” acting “much as renditions of mountain scenes and stormy seas did on nineteenth-century urban viewers.”

Alexandra Supper (2014) has written of the “auditory sublime,” a parallel condition in which sonified data representations produce an immersive and emotional digital medium based solely on audio interaction, in contrast to the “supposedly more detached sense of vision.” Supper builds on a claim made by Jonathan Sterne (2003, p. 15) that “hearing is a sense that immerses us in the world” while “vision is a sense that removes us from it.” The data sublime, as I employ it in this chapter, melds these understandings. It is not beyond the reach of the sense apparatus (as Fortin claims) at all, but it does rely on the data being encountered as transcendent. It aligns with Supper’s auditory sublime, even as it does not rely on purely auditory processing, in that it is both immersive and affective. It is the product of placing visual data in a multisensory context and deliberately crafting experiences that alienate the user from the content. To begin to grasp how invoking the sublime as a data design strategy can operate as a component of denaturalization, we must first come to grips with its long-established use as an important theoretical apparatus in art history, philosophy, and the history of technology.

2https://thenewinquiry.com/the-data-sublime/
3See https://www.youtube.com/watch?v=dD36IajCz6A

The Sublime

The sublime is a concept in aesthetics that can be roughly taken to mean an immeasurable experience or feeling of greatness. Attempts to fully circumscribe this vast concept would be pointless, as it would take us down an ontological rabbit hole from Kant and Hegel to Lyotard,4 but any definition that emphasizes aesthetic grandeur and the spectacle of immense power will be a useful starting point. A Kantian reading initially situates the sublime in the natural environment (Kant, 1960). Here, the witness, in beholding nature, is made awe-struck, mired in a state of suspension by the dialectical relationship between the sublime and human limitation (Weiskel, 1976). The Grand Canyon, as an exemplar of the natural sublime, invokes the paralyzing affect of wonder thanks to a scale that, while possible to represent in photographs, is impossible to fully capture. It needs to be experienced, despite being nearly impossible to comprehend phenomenologically. This is largely due to its vast scale, both spatial (it is 277 miles long and attains a depth of over a mile) and temporal (it was formed over millions of years), a consequence of which is a kind of alienation of the human from the natural world. While the natural sublime can be found in smaller-scale phenomena, it is typified by monuments of great physical immensity and scale. Its depiction in the paintings of Turner, for example, pales in comparison to the awesome power and surrender evoked by embodied presence. As such, the natural sublime exemplifies the “ultimate failure of representation” according to Nye (1994, p. 287).

4See Lyotard (1994) and Pillow (2000).

Figure 4.1: Turner’s “Snow Storm - Steam-Boat off a Harbour’s Mouth,” which is recognized as an exemplary depiction of the sublime in which “nature and human culture work across and against each other,” according to Sarah Monks (2010).

In contrast to the natural sublime, the American technological sublime concerns a brand of technological majesty that captivated early nineteenth century America – residue of which continues to influence our modern relationship with new technologies. Initially articulated by the historian Perry Miller in his study “The Life of The Mind in America,” the technological sublime yoked an almost religious veneration of the experience of new technologies to the sublimity and moral law within the character of the American people (Miller, 1965). Historian David Nye extended Miller’s concept by connecting the religious sense of “awe” directed at new technologies to a kind of rationalization (Nye, 1994).

Whereas the natural sublime is constrained by the limits of humanity, the technological sublime is manifested in spectacular technologies that invoke human authority and reason to order a chaotic world. Built into Nye’s conception of the technological sublime is not only a veneration of technological objects, but a reverence toward the engineers and inventors who design and build them. These figures shape the awesome world around them, demonstrating the “limitless” potential of humanity. Unlike the natural sublime, which is constrained by the dialectical relationship between the sublime and human limitation, “the technological sublime does not endorse human limitations; rather, it manifests a split between those who understand and control machines and those who do not” (Nye, 1994, p. 60).

An additional aspect of the technological sublime highlighted by both Miller and Nye is its capacity to trigger the formation of publics. Whereas Miller connected the technological to the religious, noting the similarities between what groups experienced as part of their religious lives and the experience of encountering a new technology for the first time, Nye argued that the technological sublime connects people through a sense of community formed around a single, almost divine, purpose. Using the Erie Canal at Lockport, NY, as an example, Nye illustrated how its construction was understood as a “product of democracy” that surpassed “in benefits to the human race the most splendid moments of ancient or modern history.” To see the canal, then, was not only to witness an act of magnificent engineering and innovation that physically ordered the world around it, but to participate in a shared democratic experience (Nye, 1994, pp. 35–36).

Yet, the technological sublime is not limited to the creation of political publics. Another example of the technological sublime leveraged by Nye is General Motors’ Futurama, an exhibit at the 1939 New York World’s Fair. Exhibit visitors were ushered through an experience that sought to demonstrate a vision of future transportation. Upon completion of the ride, visitors were given a pin that read “I have seen the future.” Here, the technological sublime contributed to the formation of a different kind of public, one not based on shared political or religious beliefs, but constructed around the ideals of shared participation in (or exclusion from) the ordering of the world through technological progress. In contrast to the natural sublime (in the Kantian sense), which symbolizes a kind of failure of representation, the technological sublime embodies man’s ability to construct an infinite and perfect world (Nye, 1994, p. 287). This is the grand dream of big data visualization, manifested in examples like the viral “Obama | One People - The World” video, which represented the variation in phone call activity among U.S. states and foreign countries on Obama’s inauguration day as flows of people; Facebook’s vision of an almost perfectly connected planet, which claimed “reaffirmation of the impact we have in connecting people”; or Uber’s numerous “data-dense” visualizations of the billions of GPS points handled by its platform each day.5

5See http://senseable.mit.edu/obama/the_world.html, https://www.facebook.com/note.?note_id=469716398919, and https://eng.uber.com/data-viz-intel/.

The Anti-Sublime

Rob Kosara, a prominent voice in the infovis community and Senior Research Scientist at Tableau Software, has claimed that visualization belongs squarely in the domain of the anti-sublime: “One criterion, which I believe to be suited especially well for visualization, is the sublime. Art is sublime, visualization is not. Hence, visualization is not art.” He elaborates, arguing that “not only is visualization not art, it is actually its opposite.” As if to drive his point home completely, he closes with the following passage:

“Given this criterion of the sublime for art, the natural thing to do is to look for things that are not sublime, and see which of these two areas visualization falls into. As it turns out, visualization in the technical sense is entirely anti-sublime: there is no ambiguity, no uncanniness, and typically no sublime quality at all. A visualization reveals its data right away, that’s the whole point. We don’t want it to be difficult to get the data out of it, we design it to speak as clear a language as we can.”6

In other words, data should reveal its meaning right away... with the help of good software, of course!

Kosara has practically made a career out of positioning himself as a voice of reason in infovis UI design, and his important position as a lead researcher with one of the most popular infovis platforms means that his opinions carry a great deal of weight. But is it really true that visualization and ambiguity are incommensurable? Is visualization merely a “utilitarian” medium, as Warren Sack (2011) has claimed? Do visualization designers have to make a stark choice between pragmatic and artistic concerns, as Kosara suggests? As we saw in the previous chapter, this is not such a cut and dried argument.

Kosara’s arguments stem from what appears to be a shallow reading of a 2002 paper by media theorist Lev Manovich. Manovich describes the emerging (at the time) field of artistic data visualization - what we now see marketed in books and on countless blogs as “beautiful data” or “beautiful information” (Manovich, 2002a). In the paper, he notes that he is often moved emotionally by such work because it carries “the promise of rendering the phenomena that are beyond the scale of human senses into something that is within our reach, something visible and tangible.” This promise makes data mapping into the exact opposite of the depleted representationalism of Romantic art concerned with the sublime, Manovich claims, stating that data visualization art is deliberately concerned with the anti-sublime. According to Manovich’s argument, if Romantic artists thought of certain phenomena and effects as un-representable, as beyond the limits of human senses and reason, then data visualization artists aim at precisely the opposite: to map such phenomena into representations whose scale is comparable to the scales of human perception and cognition. Countless recent data visualization projects purport to do this, as visualization has become the communication medium of data culture.

6Refer to https://eagereyes.org/criticism/visualization-can-never-be-art. Also see Kosara (2007).

But Kosara appears to have failed to look beyond the few paragraphs that affirm his epistemic biases. Manovich ends the paper with a rejoinder: “rather than trying hard to pursue the anti-sublime ideal, data visualization artists should also not forget that art has the unique license to portray human subjectivity - including its fundamental new dimension of being ‘immersed in data.’” In the nearly two decades since Manovich wrote this, data visualization artists have become fairly good at the former task, but are only beginning to scratch the surface of portraying subjectivity.

Despite this, the use of statistical graphics to represent human subjectivity dates back to at least the 1930s (e.g. Jacob Moreno’s psychological geography research, which attempted to map the emotions of New Yorkers7). And while the term “virtual reality” also dates to the 1930s, originally proposed by Antonin Artaud in his description of the illusory role of actors in the theatre,8 contemporary technologies like immersive virtual reality, which have their cultural genesis in the cyber media of the 1980s, stake a claim to a different license for portraying human subjectivity. Today, we find ourselves at a juncture where interactive data visualization and immersive computing have seemingly converged, and where the artistry that Manovich described has begun to seep into the rational, “anti-sublime” space of infovis. Since at least the early 2000s, the field of artistic data visualization has broken new ground, often with practitioners who are willing to venture into the murky territory of visualizing subjectivity, affect, and embodied phenomena.

7See https://www.nytimes.com/1933/04/03/archives/emotions-mapped-by-new-geography-charts-seek-to-portray-the.html.
8See Artaud (1977).

Immersive Visualization

Virtual reality interaction breaks many of the conventions of infovis interface design. That said, it doesn’t automatically produce a sublime experience, despite rendering an alien, unfamiliar space of interaction with poorly mapped gestures replacing naturalized ones. A common critique of VR visualization environments is that, even when they’re visually spectacular, they are often simply unusable. In spite of this flaw, immersive visualization can be a powerful tool for denaturalizing infovis when the sublime is incorporated as a design inspiration.

There are numerous examples of immersive “caves” being used for scientific visualization and exploration. These include 3D representations of distant landscapes, architectural renderings, celestial objects, and assorted relata that belong to the research domains of archaeology, astronomy, and other natural and human sciences. Their inducement of awe and wonder has often been a byproduct of their novelty. Among the more interesting is a long-running project by Larry Smarr, the founder of the California Institute for Telecommunications and Information Technology (Calit2), a $400 million research collaboration between UCSD and UCI. Smarr has spent much of the past decade collecting data about nearly all of his physiological functions and then coming up with novel techniques for visualizing them. His work to create an up-to-date 3D image of his insides has been profiled regularly in the popular media.9 In this immersive experience, Smarr “can not only chart the changes taking place inside his body; he can actually see them. As a result, he arguably knows more about his own inner workings than anyone else ever has. His goal, as he puts it, is for each of us to become ‘the CEO of our own body.’”

Smarr’s recent work visualizing his microbiome and giving his personal physicians an opportunity to fly through and over his colon is a great example of how VR can be used to unsettle scale, especially as he has coupled it with an augmented reality view of digital renderings of his colon juxtaposed with high-definition colonoscopy images. Assuming, of course, that his physicians find ongoing value in the sort of omniscient and granular views that Smarr’s interactions facilitate. Despite over 20 years of development, however, immersive visualization environments remain confusing. In many cases, the inducement of wonder and alienation as a result of immersive experiences is sustained due to both deliberate and non-deliberate design choices. While Smarr and his ilk seek to produce a sensible digital reconstruction of an unfamiliar terrain, it can take considerable time for their audience to become acclimated to the technology.

A salient example of this is found in a relatively unknown early VR infovis project by the architecture firm Asymptote. In 1997, they were commissioned by the New York Stock Exchange (NYSE) and the Securities Industry Automation Corporation (SIAC) to build a Three Dimensional Trading Floor.10 The 3DTF was intended to be a “multidimensional real-time interactive virtual world designed to enable the NYSE operations group to access and navigate critical data and information.” While it contained volumetric graphics, floating time series charts, live video feeds, and real-time stock indices, it was loosely based on the physical space of the NYSE trading floor. As such, it has been described as an “architectural manifestation of a data ecosystem” and bridges the space between abstract and correspondence theory representation.11

Figure 4.2: A rendering of the 3DTF visualization platform.

9See https://www.theatlantic.com/magazine/archive/2012/07/the-measured-man/309018/ and https://www.theatlantic.com/magazine/archive/2018/03/larry-smarr-the-man-who-saw-inside-himself/550883/.
10A project overview can be found here: https://www.cca.qc.ca/en/search/details/collection/object/456079.

11https://www.e-flux.com/architecture/post-internet-cities/140714/learning-from-the-virtual/
12http://www.doclab.org/2015/lovr/
13https://eyewire.org/explore

Figure 4.3: A model of an internal network using what Smith described as a “Space Defense” metaphor. I would discover this only after constructing the VR sublime example that is profiled in this chapter’s case study. The aesthetic parallels between the two environments, separated by many years, are striking. Had I encountered it earlier, I would surely have taken this as a design inspiration.

Among the more abstract VR interactions that verge on sublime territory, Aaron Bradbury’s LoVR, an immersive visualization of chemical reactions in the brain, stands out.12 Unlike EyeWire, neuroscientist Sebastian Seung’s massive multiplayer game to crowd-generate a 3D map of the brain’s neural circuitry,13 LoVR resembles the 3D semiotic graphs of the early half of the 20th Century crossed with Russian Constructivist art in hypercolour. Steve Smith, President of Los Alamos Visualization Associates and a pioneer in immersive scientific visualization at Los Alamos National Laboratory, in making a case for abstract representation, wrote the following alongside a demonstration of a number of sublime early experiments:

“An important precept of our work is that the task of understanding abstract information demands even more of the human than scientific visualization and that one key to increasing our ability to handle the quantity, complexity and subtlety is to use varying techniques to create a sense of presence or Immersion. Firstly, the use of multiple senses, the use of large field-of-view visual presentations (Powerwalls, CAVEs, Head Mounted Displays, etc), real-time rendering and animation speed, 3D perspective, complex lighting models, texturing, etc. naturally affords us a much larger encoding space. Secondly, the use of higher-level metaphorical devices than simple geometry (colored points, regions, even polygons and symbols) offers a wider range of subtlety. And thirdly, the sense of presence seems to enhance the coupling of higher level perceptual functions and lower level cognitive functions.”14

14http://dav.lbl.gov/archive/Events/SC04/ImmersiveInfoVisSC04/index.html

Unlike immersive experiences that are meant to evoke real-world contexts, such as the architectural renderings and landscape representations alluded to earlier, the experiences I’ve described don’t have physical analogues that are accessible to human perception. Humans can’t physically inhabit a colon or a brain as they would a physical building like an Iroquois longhouse or a Roman temple.15 That said, the effects of immersion can still be pronounced. Slater and Wilbur (1997) describe immersion as the extent to which “a display system can deliver an inclusive, extensive, surrounding and vivid illusion of virtual environment to a participant.” They distinguish between immersion and presence, and enumerate a number of dimensions along which immersion can be measured. These include inclusiveness, or the degree to which physical reality is shut out; extensiveness, or the range of sensory modalities that are accommodated; surrounding, or the extent to which the VR environment is panoramic; vividness, or the resolution, fidelity, and variety of energy that gets simulated; matching, which describes the match between the participant’s proprioceptive feedback and the information displayed; and, finally, plot, which describes whether there is a specific storyline or dynamic, with unfolding events that are distinct from those in the “real world.”

15See, for example, William Michael Carter’s dissertation project on virtual longhouses at https://ir.lib.uwo.ca/etd/4902/. It should be noted that many reconstructions of archaeological sites are purely speculative, and that epistemic conflicts often arise during their design. See Ratto (2011a).

That said, most projects seek to leverage “intrinsic human pattern recognition” skills through the use of immersive VR in an infovis context (Donalek et al., 2014). The Google News Lab, which has developed a handful of VR data visualizations in recent months, describe a number of grounded design rules they’ve uncovered to guide their experiments. They suggest not inundating users with too much data in VR visualizations: “you’re putting users in an alternate reality and if that reality doesn’t behave in predictable ways, then it can make people want to vomit.” They advocate spreading information “around the viewer” to prevent disorientation. They also suggest prompting users to look down, rather than up, to trigger gestures or discover textual instructions.16 Despite the dimensions and design ideas that have been mentioned, there are few accepted parameters for guiding the development of effective immersive environments for visualization, and “interaction with visualizations in immersive environments is still an issue” (Butscher et al., 2018, p. 3). Should users rely on gestures, eye gaze, or peripherals to guide interactions and make selections? Should information be dynamic or static? Should screen-based interaction metaphors like brushing and zooming for context be adapted for immersive space? Should immersive visualization environments be collaborative? These are just a handful of the numerous open questions that designers in this space are faced with.

Naturalization: Disembodiment and Scale

In conventional statistical graphics, the scale divisions are equal units. In humanistic, interpretative graphics, they are not. Johanna Drucker (2014, p. 131)

The embodied data sublime presents an opportunity to consider how visualization naturalizes particular scale relations between user and interface that a) obscure an underlying ocularcentrism and b) complicate embodied, multisensory modes of interacting with data. This naturalization occurs as a consequence of the removal of the body from the interface of data interpretation, an idea that was introduced in chapter 2 and will also be taken up in chapter 5. What this naturalization produces is a condition in which certain scale relations (e.g. God’s eye perspective and its detached, omniscient overview) are treated as appropriate, or epistemically valid, while others (e.g. worm’s eye view and its immediate granularity) are not.

16For more on this ongoing project, see: https://medium.com/google-news-lab/how-we-made-a-vr-data-visualization-998d8dcfdad0.

In their proposal of the measuring body as a conceptual tool, Aud Sissel Hoel and Annamaria Carusi (2018, p. 17) write: “mediation is not so much about incorporation as it is about the way that the perceiving body participates in a distributed system that goes beyond the perceiving body, and that it cannot fully control.”17 N. Katherine Hayles (1999, p. 100) pinpoints the cybernetic revolution of the post-WWII period as a moment when the duality of bodies “both as material objects and as probability distributions,” which originated in the nineteenth century with the likes of Karl Pearson, began to pick up steam. “The emphasis on pattern constructed bodies as immaterial flows of information; the alternating emphasis on structure recognized that these ‘black boxes’ were heavy with materiality.” It is in this tension that the body is both naturalized as a statistical object and separated from media that seeks to control it.

17By ‘measuring body,’ the authors are posing a Merleau-Pontian notion “that arriving at a non-dualist ontology from the direction of his phenomenological grounding makes a difference and brings a distinctive contribution.” They formulate this through “a new conceptual tool, the ‘measuring body’, which brings bodies, symbolic systems and technologies into a new constellation that reconfigures agency and materiality.”

Jonathan Sterne (2006, pp. 836–837) notes how “virtuality is supposed to separate the subject from the body” and highlights the material and psychoacoustic properties of embodied mp3 listening (in contradistinction to virtual reality experience, which relies primarily on visual and haptic interaction). “Scale matters here: quantitatively speaking, the coordinated movements of mp3 sonics are so much smaller than the movements produced by a socialized body, that we may be talking about a qualitative difference between listening practice as a technique of the body and the mp3 as a concordance of signals.” Recall the immersive dimension of “matching,” described earlier, that Slater and Wilbur (1997) outline. Mapping bodily gestures to scale movements in immersive environments relies on a perceived match between the body’s proprioceptive feedback and the information displayed. What happens when a user has no frame of reference for how the information should be displayed? This occurs when (re)embodiment takes place and the user’s body is forced to navigate at the granular micro-level of individual data points. When the body is reinscribed in the data interface, new possibilities for creative and imaginative work arise. What would it look like, for example, to control movement through gestures that map to an avatar’s perceived physicality? This is beyond the scope of what I describe here, but future work in the space of interactive VR infovis will likely feature digital elements that correspond with a user’s body (e.g. a detached hand that can be used to swipe or grasp at data representations), but are controlled through more natural gesture interfaces rather than peripheral controllers (which typically map these interactions to movements on thumb pads and triggers).
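To make this speculation concrete, the following is a minimal Unity C# sketch of such a detached data hand - a hypothetical illustration rather than code from any project discussed here; the trackedHand source, the “DataPoint” tag, and the gesture callbacks are all assumptions:

    using UnityEngine;

    // Speculative sketch: a detached avatar hand that shadows a tracked hand
    // pose and can grasp nearby data-point objects. The tracking source and
    // gesture recognizer are assumed to exist elsewhere; names are hypothetical.
    public class DetachedDataHand : MonoBehaviour
    {
        [SerializeField] private Transform trackedHand;   // pose from a hand tracker
        [SerializeField] private float grabRadius = 0.1f; // grasp range, in metres
        private Transform held;

        void Update()
        {
            // The detached hand mirrors the user's physical hand pose.
            transform.SetPositionAndRotation(trackedHand.position, trackedHand.rotation);

            // A grasped data point travels with the hand.
            if (held != null)
                held.position = transform.position;
        }

        // Invoked by a gesture recognizer when a "grasp" gesture is detected.
        public void OnGrasp()
        {
            foreach (Collider c in Physics.OverlapSphere(transform.position, grabRadius))
            {
                if (c.CompareTag("DataPoint")) { held = c.transform; break; }
            }
        }

        // Invoked when the grasp gesture ends.
        public void OnRelease() => held = null;
    }

The design point is that the grasp is driven by a recognized bodily gesture rather than a thumb pad or trigger; the code itself is agnostic about where that gesture signal comes from.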

Standard visualization tools don’t just distance the subject body from the data object, however. They incapacitate the subject’s sensory apparatus and fix the scale at which data can be appreciated visually. This typically means the subject has access to either micro or macro views, but rarely both at the same time. While a growing number of data visualization tools enable scale manipulation, nearly all constrain the scales at which an interpreter can experience data. Scale variance, which defines the relationship between changing size and changing form, is a bounded operation between points of expansion and contraction. The common use of “drill down” as a metaphor for focusing on specific sections of a data visualization, or subsetting for deeper and more varied views, helps naturalize these scales at which one is expected to encounter data. Rarely, even in the most beautiful “artistic” data experiences, does one get to drill down to the atomic scale. (And even when this is possible, the experience remains typically ocularcentric.) This is despite the fact that scaling between the sub-atomic and the astronomical has been commonplace in scientific visualization for decades.18 Of course, I have focused on scale at the level of the interface. Scale has a very different meaning when we imagine visualizing planetary flows of data, mass migrations, or billions of GPS data points produced by automated vehicles. I will invoke this other big data-coloured meaning of scale again in the final chapter of the dissertation.

18Consider the groundbreaking 1977 film “Powers of Ten” by the Eames Office design firm - available to watch here http://www.eamesoffice.com/the-work/powers-of-ten/ - which “illustrates the universe as an arena of both continuity and change, of everyday picnics and cosmic mystery” in a single, continuous, 9-minute scale experiment.

Case Study: The Embodied Data Sublime

During the Enlightenment, as the age of wonder begat a deep rationalization of nature (partly described in chapter 2), the prevailing ethos ascribed, to nature, an “anti-marvelous aesthetic” in which “uniformity and regularity was equivalent with beauty” (Daston and Park, 1998, pp. 358–361).19 This was later offset by a counter-Enlightenment “wistful nostalgia for age of wonders” which had been “snuffed out by age of reason.” The data sublime is, in a way, a return to this wistful nostalgia. When noted data artist Herwig Scherabon elects to represent emissions data over a city like Los Angeles with an Anish Kapoor-esque translucent bubble, rather than with a cartogram or heatmap, he is operating with a foot in this domain.20

Because it is often hard to recognize the grandeur of the sublime in the scale relationships we are accustomed to working within, employing virtual reality as a medium in which relationships of scale become easier to traverse is one way to surface how they get naturalized in the first place. This kind of experimentation can purposefully help us understand the limits of the anti-sublime. What kind of gradual, calculated attention might embodiment afford when one inhabits the landscape of data in shifting temporal registers and across various scales? Hypothesizing its configuration as one that affords both embodied and disembodied subjectivities, and providing an alternative to concepts like “dataspace,” the case study that follows describes my experiments with immersive VR data visualization to probe the virtual (and material) landscape of data. While VR does not entirely dispense with the vanishing point that ocularcentric representation requires, its boundlessness and removal of a fixed home begin to open up a more expansive horizon, both literally and figuratively.

19See, also, Daston’s description of the age of wonder in this recent essay: https://thepointmag.com/2014/examined-life/wonder-ends-inquiry.
20https://scherabon.com/El-Coloso

Project Description

This case study extends the work I describe in the previous chapter’s case study on the visualizations and data materializations I produced for the Naked Craft project. The techniques I used included time-mapped 3D animations, 3D scatter plots that depicted clusters of gestural activity, and 3D-printed data sculptures depicting material records of craft “expertise.” While each of these provided a different window of insight, I realized later that, in designing them, I tended to gravitate toward one-to-one scale mappings, and I wanted to see how this could be unsettled. Around the same time, I attended a talk by Jeff Steward, Director of Emerging Technology at Harvard Art Museums, in which he demonstrated an immersive exploration tool for discovering his museum’s “hidden data.”21

Not content with merely translating the 3D scatter plots into a VR environment and positioning the interpreter with a macro-level perspective - a literal translation of what they would encounter on a screen - I took as a design prompt the question: what would happen if we disrupt the scale relations that conventional data visualization enforces by creating an alien landscape? This would be about estranging the interpreter from the data, deliberately making it difficult to “get data out of it,” in Kosara’s terms. I wanted to reclaim the “territory of interpretation” from the “ruling authority of certainty,” to quote Drucker (2014, p. 136).

21https://vimeo.com/66672944

Figure 4.4: A contour plot representing a linocut artist’s hand moving above a table.

Figure 4.4 is a representation of three-dimensional gesture data. It is depicted using a stylized visualization form known as a contour plot. Contour plots are often used to represent deformations to 3D surfaces and, in some cases, for identifying clusters of activity in 3D space. Taking the scatter plot of a linoleum artist’s hand movement (described in chapter 3) as my starting point, I wanted to identify concentrations of the artist’s hand at specific distances from the table on which it rested. A 2D contour plot sufficed for this. Notably, it allowed me to aestheticize the data presentation in a way that recalls abstract art.
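To make the underlying transformation concrete, here is a minimal C# sketch - with made-up sample values and thresholds, not the project’s actual pipeline - of how 3D gesture samples can be binned into the 2D density grid from which contour lines are drawn:

    using System;

    // A minimal sketch (not the project's actual pipeline): bin 3D gesture
    // samples (x, y, z) into a 2D grid of counts over the table plane. A
    // contour plot then amounts to drawing iso-lines over this grid.
    class GestureDensity
    {
        static void Main()
        {
            // Hypothetical gesture samples: hand position in metres,
            // with z as height above the table.
            var samples = new (float x, float y, float z)[]
            {
                (0.10f, 0.20f, 0.05f), (0.12f, 0.22f, 0.06f), (0.50f, 0.40f, 0.30f)
            };

            const int bins = 10;        // grid resolution
            const float extent = 1.0f;  // assume a 1m x 1m table, origin at a corner
            var grid = new int[bins, bins];

            foreach (var (x, y, z) in samples)
            {
                if (z > 0.15f) continue; // keep only samples in the chosen height band
                int i = Math.Min(bins - 1, Math.Max(0, (int)(x / extent * bins)));
                int j = Math.Min(bins - 1, Math.Max(0, (int)(y / extent * bins)));
                grid[i, j]++;            // density accumulates where the hand lingers
            }

            // Each cell now holds a count; contour lines connect cells of equal density.
            int max = 0;
            foreach (int v in grid) max = Math.Max(max, v);
            Console.WriteLine($"Peak cell count: {max}");
        }
    }

Filtering on z selects one height band; repeating the binning for several bands is one way to recover the concentrations at specific distances from the table described above.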

Figure 4.5: A God’s eye perspective of the 3D visualization.

But what would that same 2D transduction of 3D embodied data resemble if it was returned to 3D space, I wondered? Figure 4.5 displays the same visualization, but stripped of colour (which is used to mark depth in 2D space) and rendered in an immersive VR environment that I designed in Unity, a video game rendering engine. While I used two VR headsets to prototype this experience, an Oculus Rift DK2 and an HTC Vive, I settled on the latter as the platform of choice for this project. To provide interactivity, I scripted various navigational options in Unity using C#, eventually settling on providing the user with two navigational speeds: one that was deceptively slow and would force them to meander about the environment in a cautious way; and another that enabled them to zoom out for a wide perspective at a rapid - but disorienting - speed.
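The core of that two-speed scheme can be sketched in a few lines of Unity C#. This is a simplified stand-in rather than the project’s actual script: the speed values are hypothetical, and “WalkTrigger” and “FlyTrigger” are placeholder input axes standing in for the Vive’s trigger bindings:

    using UnityEngine;

    // A minimal sketch of the two-speed locomotion described above.
    // Speeds are illustrative; "WalkTrigger" and "FlyTrigger" are assumed
    // to be defined in Unity's Input Manager (standing in here for the
    // Vive controller triggers).
    public class TwoSpeedNavigator : MonoBehaviour
    {
        [SerializeField] private Transform head;          // the VR camera (gaze direction)
        [SerializeField] private float walkSpeed = 0.4f;  // deceptively slow meander
        [SerializeField] private float flySpeed = 12f;    // rapid, disorienting zoom out

        void Update()
        {
            Vector3 gaze = head.forward;

            // First trigger: creep across the terrain in the gaze direction.
            if (Input.GetButton("WalkTrigger"))
            {
                Vector3 flat = new Vector3(gaze.x, 0f, gaze.z).normalized;
                transform.position += flat * walkSpeed * Time.deltaTime;
            }

            // Second trigger: free 3D flight, fast enough to reach a God's
            // eye perspective - and to disorient.
            if (Input.GetButton("FlyTrigger"))
            {
                transform.position += gaze * flySpeed * Time.deltaTime;
            }
        }
    }

Constraining the slow mode to the horizontal plane while letting the fast mode follow the gaze vector is one plausible way to produce the walking/flying contrast described in the case study.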

One enters the VR world with a worm’s eye perspective, which can be seen in figure 4.6. The visitor is asked to survey the landscape visually. While parallels can be made between the digital world they find themselves in and our own - structures that resemble plains, valleys, and mountains can be made out, for example - the jagged, polygonal nature of the environment makes it clear that it is an alien space. After a moment in which to orient oneself - the weight of the headset and occasionally flickering digital image require some sensory adjustment - the scene operator delivers two controllers to the interpreter, who is given a simple instruction to press on a trigger and begin to navigate this stark environment, which is deliberately void of the semiotics of data or visualization. While somewhat familiar, the experience of walking by pressing triggers is an alien form of locomotion. Once the operator has a sense that the visitor is comfortable, they are asked to look up and out of the alien world. In the distance, a pattern, constructed from the same dataset but rendered using a different visual metaphor, emerges. This cloud of datapoints creates an initial pattern that the visitor might use to escape the confusing landscape. They must rely on it as a pole star.

Figure 4.6: Immersed in the landscape.

What follows is a slow track through the environment as the interpreter seeks to reach a sense of stability and order.22 Left to their own devices, the interpreter can deploy multiple strategies for navigation. If they travel “as the crow flies,” they could find themselves trapped in the depths of a valley or facing a wall of mountains that are too steep to climb. If they take a more conservative approach, and choose to only walk along plains, they could find themselves lost in an environment with the cloud of datapoints obscured by the terrain. No matter the strategy they adopt, they need to spend time becoming familiar with the terrain as, at some point, they may need to backtrack - in which case a sense of visual order will be essential.

22A short video illustrating this can be seen at https://vimeo.com/314866187.

Figure 4.7: My colleague, Daniel Southwick, trying to make sense of the landscape.

When the visitor finally makes it to the data cloud, they are prompted by the scene operator to push the trigger on the opposite controller. Whereas one controller simulates walking, the other simulates flying, a rapid adjustment of scale that triggers a sense of discontinuity. The interpreter soon discovers that the cloud represents the exact same dataset as the environment they have just navigated. Can this make sense? All around are clusters of dots, while below is a continuous topography which, for the first time, can be seen in its entirety. It is at this moment that the visitor is at two disjointed scales with respect to their relationship to data. They are both immersed in a scatter plot projection and have an omniscient perspective at the same time.

What is this experience like? It is intended to provoke both curiosity and frustration, but also occasionally provokes a sense of wonder. While the physics of the real world have been programmed into it, and its visual representation is vaguely familiar, it is never clear how one is meant to “read” this data visualization. Visitors are forced to ask questions that can reveal their own epistemic biases, particularly if they have familiarity with the visualization tropes that are employed.

Denaturalizing Scale

Navigating the sublime requires grappling with unfamiliar scale variance. In order to make sense of things at different scales, we need the ability to change our scales of interpretation. This case study brings the taken-for-grantedness of infovis scale to the fore, and encourages us to seek alternative paths to meaning making than the ones we might already be accustomed to. How does the entanglement between sublime and anti-sublime take place? It occurs because the navigator eventually begins to identify patterns.

This VR probe is an experiential and exploratory, rather than analytical, tool. It helps illuminate what happens when we bring our bodies to bear on engagements with data. In this sense, I use immersive data visualization as a kind of critical probe rather than a mode of formal critique. This is probing designed to open up new modes of perception, rather than make reason-based arguments, and is an attempt to wrestle with the non-linear and open-ended... the multivalent trajectories of data - to interleave and entangle multiple views at the same time. While it is an alternative to the ocularcentric representation of sorted, cleaned, classified, rationalized data that is increasingly reified and professionalized in the field of information, it is also a rejection of the wholesale translation of those forms into 3D space.

Contemporary data analytics frequently relies on engaging with standardized datasets, normative representational templates, and software applications spanning a wide spectrum of complexity and sophistication. Rarely does it encourage critical inquiry into assumptions made by data collectors, into the source of the data (i.e. what gets obscured or occluded), or into the means of representing and interpreting data. This is especially true as it pertains to resistance toward creative expression and reinterpretation through physical data sculpture, narrativization, etc. Engagement with most contemporary visualization media predisposes an interpreter to an ontology with a very particular set of scale relations. Whether these relations correspond with a human sensory order or the algorithmic capacity to shape the interpretive experience depends on the media in question. The physical act of inhabiting data, guided by a critical orientation toward both its sources and outcomes, is one potential avenue for denaturalizing the ontology of visualization.

Visually analyzing datasets that range in diversity from the local to the global, from the immediate to the vast, is a process in which varying scales of interpretation bias the relationship an interpreter has with the underlying data structure. For example, when an interpreter analyzes data on a screen, they relate to a data object at a 1:1 scale. In this case, the object might be a GUI view, the database itself, or the machine in or on which it resides.23 When an interpreter inhabits a data landscape, however, they operate at the scale of actual entries in the database (or, in the case of an object-relational model, the objects that are defined through the database’s ontology).

Clustering and aggregation, which draw an interpreter closer to the objects in the former instance, focusing a fixed gaze on relations at the supra-level, distance and alienate the interpreter from objects at a more granular level. This produces noise and complexity, compromising one’s ability to relate to data that might actually have personal, subjective value. What might it mean to be able to descend into a landscape of live, networked data, to find oneself as a data point within it? When we move from the omniscient view afforded by the graphical user interface, what do we lose - beyond a sense of perspective? The perception of order and stability? How do the visualization platforms du jour, such as Tableau or Lumira, control information and enforce pre-determined knowledge outcomes by fixing scale at a level that reduces perceptual anxiety by taming clusters? At the same time, what might we gain by immersing our bodies in big data? Can we recover the “human relations” rendered opaque through processes of aggregation and smoothing? These are just a few of the questions this case study raises for much larger data contexts.

23This conception of “object” only bears a passing resemblance to the understanding of object that turns up in an object-relational model of a database. It is deliberately encompassing.

Method: Estrangement

Taking Jasanoff’s alien subjectivity as a design prompt, the unnatural, sublime data world that I constructed from a formerly sensible contour plot can be described through a specific methodological approach: estrangement. Ruth Knechtel (2010), in describing digital estrangement, asks how we can “defamiliarize” objects “in order to perceive our data and our users in new ways.”24 She invokes this sense of defamiliarization in its art historical context, in which art’s purpose is to make objects unfamiliar and alien. This reflexive process has parallels with denaturalization, which occurs through unsettling conditions that are treated as natural, part of a grand teleology, and not a result of deliberate social, structural, or institutional design. Estrangement - deliberately creating an initial sense of alienation among users to provoke questioning, curiosity, and serendipity - brings together defamiliarization and denaturalization.

24See, also, Salter (2015, pp. 177–178). Salter describes a tension between “distance and absorption,” drawing on Brecht’s notion of defamiliarization as a method to create distance between a spectator and an event, and Artaud’s notion of a theater of cruelty, which aims for an attack on the spectator’s senses. Chapter 4. Visualizing The Embodied Data Sublime 143

By inserting the body into both the site of interpretation and the data itself, I had to first decenter the eye - to denaturalize the epistemic condition of ocularcentrism that data visualization presupposes. This revealed other naturalizations, including how, by erasing the body, visualization assumes generic, disembodied categories of user. But in a VR environment, with its 360-degree panorama, Cartesian axes are either no longer relevant or subject to abuse.25 Interpreting scale with 2D axes and frames is impossible, as the perceptual conventions (based on 2D screen-and-paper-based perception tests) are no longer valid. A VR designer can fix a visualization to a specific referent, but that doesn’t mean the user has any clear sense of reference (as noted by the Google News Lab team in the earlier section on immersive VR). Designing visualization without axes, however, opens up the possibility of manipulation, misrepresentation, and - worse, to the anti-sublime apostles - estrangement and alienation. If you don’t have axes, where are you supposed to put the data? Can the axes be inferred from the shape of the representation? If you don’t have a clear demarcation between interface and user, where is the body’s point of origin supposed to be placed? The embodied data sublime, as a construct that induces alienation through estrangement and provokes these questions, is a prime site for denaturalizing infovis.

How can estrangement be deployed systematically, then? What if all data environments started from a point of ambiguity rather than purported clarity? Mushon Zer-Aviv’s concept of reambiguation, as a strategy to recognize complexity and ambiguity, could be an important step toward estrangement as a data visualization method. Zer-Aviv writes: “Let’s not insist on packaging our lives in semantic machine-readable form, let’s practice humor, poetry and art - create signals that inspire humans and baffle machines. Rather than give in to Disambiguation, let’s celebrate Reambiguation!”26 His program has some parallels with the concept of seamful design27 in that it seeks to call attention to the parts of a data representation that lie in the seams and haven’t yet been smoothed. The embodied data sublime embraces this seamfulness and ambiguity, and is not intended to be immediately sensible. It is not designed for analysts accustomed to repetitive, formulaic charts. But it can give a user with or without data experience a unique opportunity to interact with data in creative ways, and to possibly provoke in them a sense of intrigue about what they are encountering.

25. This is troubled by the fact that VR typically blurs Cartesian and Euclidean perceptions of space. How a user experiences space in VR must be considered through what is made visible to them (and what is occluded), in addition to their embodied sense of location and origin in familiar Euclidean environments (versus the more detached, ocular sense of a landscape’s ongoing vastness courtesy of Cartesian conventions).

Conclusion

This chapter asked what happens when the naturalized scale conventions we find in screen-based infovis are disrupted. It asked what happens when the disembodied user is inscribed back into the interface of data interpretation. It wrestled with the question of whether data visualization really is - or should be - an “anti-sublime” medium, as influential figures like Rob Kosara claim. The case study I’ve drawn on to answer these questions provides an example that denaturalizes the specific, bounded scale relations that make embodied, immersive interaction such a difficult proposition for the infovis world. Claiming that embodied data sublime experiences will produce more truthful - or even meaningful - data experiences has not been my goal, however. What I have sought to do is show that, as in chapter 2, translation between contexts is insufficient. If we are to build engaging and, yes, valid immersive data interfaces, we must be prepared to envision new forms of interaction that are not determined by practices developed for 2D design. While my focus in this chapter has been to denaturalize scale relations, invoking the sublime also makes it possible to explore alternative temporalities, or to assemble configurations that afford both embodied and disembodied subjectivities simultaneously. I don’t claim that this makes for better or even more sensible data visualization, but that, as a meta-practice, it might help us think about what makes visualization useful in the first place (a core function of any denaturalization process). In purely functional terms, however, embodied data sublime experiences could be suitable contexts for open-ended exploratory data analysis (EDA), but I hesitate to make that claim without experimenting on a wide range of data models first.

26. Disambiguation, Zer-Aviv notes, describes processes to reduce complexity and ambiguity. See https://medium.com/@mushon/re-ambiguation-74cda587d609.
27. See Chalmers (2003).

Claude Fortin (2016, p. 3) has recently argued that “the logic of the digital sublime and its coeval offshoot, the data sublime, leaves us in a troubled state of pleasure mixed with anxiety. The precipice at the edge of which we are standing today,” she writes, “is our inability to grasp what data truly represents, what it reveals, where it hides, and what it can do” as it becomes a catalyst for our experiences. I could take the easy road and claim that probing alien data topographies might afford new methods of sensemaking, but that would be disingenuous. The VR experiments I describe ultimately point to possible tactics; inasmuch as they are successful, they have helped initiate my own critical inquiry into what Luciano Floridi (2012, p. 437) calls the “black art” of analytics by allowing me to wonder what it might feel like to descend into the maelstrom of big data, to move from the furious and frenetic outside into the potential safety of the middle.

Recent attention toward data literacy, which I will discuss in the dissertation’s final chapter, signals a desire to reinscribe the human at the center of data analysis as a way to resolve social, cultural, and ethical issues raised by big data. The figure of the analyst, a product of expanded data science curricula, is crucial; they must learn to read and evaluate the epistemic legitimacy of data, including its subjective and objective characteristics. But being able to validate the accuracy of data is insufficient if the analyst is also expected to interrupt contemporary trajectories of big data, in addition to local regimes of data power. They must be able to surface, and subsequently interpret, the claims that data is being leveraged to produce. Surfacing these claims requires gazing upon the sublime moon without focusing too much on the finger pointing at it.

Chapter 5

Beyond the Visual

Summary: This chapter introduces concepts related to tangible and multisensory data interaction. It examines how contemporary infovis risks destabilization by visualization approaches that break from 2D, screen-based representation. Specifically, it considers recent trends around multisensory visualization and 3D visualization. Through analysis of a study of 14 blind and visually impaired users interacting with tangible data graphics, it illuminates various challenges for designers working in the space of data materialization.

Trust the authority of the senses - there is a recreation of the world in the sensorial apparatus.

Thomas Aquinas1

This emphasis on sight is literate man’s mark and strength, but the other senses suffer correspondingly. If used at all, they are used like sight. All experience is translated into visual models. We say, “Let’s see what we can hear.” Ted Carpenter2

1. See Mailer (2008, p. 65).


Introduction: Tangible and Multisensory Visualization

What happens when visualization moves off of screens altogether - when data and its various forms of representation shift from digital to material registers? In this chapter, I consider how the body’s multifaceted sensory apparatus has been detached from the interface of visualization (a theme that follows from the previous chapter) and how it can be reconnected. How do the digital affordances of infovis differ from material ones? How does the material world resist representational practices that have been crafted for digital, screen-based interaction? How and when does the material push back? These are just a few of the specific questions that ground this chapter’s themes.3

I will argue throughout the chapter that, with respect to contemporary infovis practice, material and digital affordances are not at all equivalent, even when data is translated (or transduced) across different material scales. This is a consequence of a long-standing naturalized condition that treats material and digital as discrete, separate spheres of activity. An argument the chapter makes is that we lack sufficient methods to account for what happens when a digital, screen-based medium is forced to adapt to material, screen-less interaction. The consequences this claim has for the field of tangible infovis are significant. With the body removed from the site of interpretation, screen-based infovis has naturalized semiotic conventions that are considered appropriate for digital-visual formats and interaction modalities. This erasure reinforces the mathesis of conventional infovis logic that was described in chapter 3. By re-inscribing the body into the data interface, as demonstrated in chapter 4’s case study, modes of interpretation that are inclusive, multisensory, and complementary to visual reasoning can begin to take shape. Tangible infovis, as an embodied mode of data interaction, is an expression of this. It relies on screens in certain contexts, but not in many others, providing an opportunity to understand entanglements between material|digital interfaces, visual|tactile (as well as auditory) modes of interaction, and, finally, inclusion|exclusion.

2. http://www.anderbo.com/anderbo1/aessay-04.html
3. This chapter, more than the others, speaks to all four of the dissertation’s key research questions. To reiterate, they are: (RQ1) What epistemic conditions are “concealed under a guise of familiarity” (Drucker, 2014) in infovis? (RQ2) How can material engagement surface and reveal them? (RQ3) How can critical making (Ratto, 2011b) inform the development of alternative visualization methods? (RQ4) How might these alternative methods help shape critical approaches to visualization in emerging fields like critical data studies, as well as long-standing fields like HCI?

To elucidate these arguments, I will describe an empirical study of fourteen blind users that was undertaken to determine their capacity to interpret tactile 3D data graphics, and to provide insights for the design of new inclusive infovis objects. What this study reveals, among other things, is that a naturalized ideal claiming a separation of material and digital forms of interaction operates heavily in this space. This naturalization makes it possible for designers to assume that smooth translation of objects between sensory modalities is possible, a problem that was first addressed at the end of chapter 2. In fact, this seamless “translation” is a fallacy. Because visualization depends on semiotic conventions designed in - and for - visual space, designers of tangible, multisensory infovis are forced to adopt ocularcentric principles to the detriment of entire categories of users. As will be discussed in this chapter and the concluding chapter that follows, by excluding users with vision impairments from data sensemaking processes, it becomes possible to exclude various other categories of users as well, painting infovis as an exclusive and privileged domain of expertise. By recognizing multisensory visualization as its own unique design space with novel affordances and creative possibilities, influenced by but still distinct from visual representation, we have an opportunity to encourage the democratization of visualization practice - a key theme for the final chapter of the dissertation.

Multisensory Interaction

The division of the senses has been - and is - an ongoing process. We might trace it to the Cartesian split that was described in chapter 2, but it is worth recalling Karl Marx’s 1844 remark that “the forming of the five senses is a labor of the entire history of the world down to the present” (Marx, 1844). In this reading, the five primary senses are to be read as five separate senses - a hierarchy of visual (sight), tactile (touch), aural (hearing), olfactory (smell), and gustatory (taste) modes of interaction.4 Don Ihde (2017) describes a “visualist preference” in scientific practice,5 which is rooted in the ocularcentric naturalizations that were discussed in chapter 3. This bias extends well beyond scientific practice, though, into various research domains outside of the empirical sciences - including into the world of interaction design, where new multisensory processes like sonification face a specific hurdle according to Ihde: “simply getting researchers to try” them.6 Accounting for the Cartesian legacy of disembodiment in epistemic practices, beyond what others like Ihde have already done, is not my goal here. But this separation of the senses in interaction design stems from the earlier naturalization of disembodied interaction, and can only be countered by a richer, phenomenological understanding of engagement and experience.7

A counter-response to the so-called visualist bias has taken two forms in the development of anti-ocular and multisensory ideas and approaches. The terms and the theories associated with each of these trends should not be conflated, however. Multisensory - as in “multisensory interaction” - does not have to be anti-ocular or anti-visual. Rather, multisensory interaction can augment or enhance existing visual modalities, drawing on the “hybrid sensorium”8 as a fulcrum for meaning making. What might this more holistic and non-hierarchical grouping of the senses look like if it is going to be operationalized for interaction research? Ted Carpenter (1980), in writing against the visualist bias of his field, anthropology, discussed the cultivation of “native authors - voices from the inside” as a promising shift. “These intruders into the visually oriented profession of Anthropology are always writing about how things smell, taste, feel, sound: toes gripping roots along a slippery bank; peppery food burning the rectum; ‘he became aware of gentle heat playing on his right cheek, and a fine smoke teasing his nostrils; while on the left he heard an odd gurgling sound.’” Carpenter goes on to describe “insider reports, with their descriptions of sensory awareness” that counteract the visualist tendencies of observational research. Infusing such thick qualitative description into the space of interaction design would be a necessary point of departure.9

4. These so-called primary senses should not, of course, be divorced from related concepts that inform our understanding of how the orchestra of sensory perception operates, including kinesthesia, proprioception, balance, exteroception and interoception, chronoception, thermoception, and nociception. Notably, although these “secondary” senses generally act in concert, they too are typically separated.
5. See also Cowen (2015).
6. See Ihde (1999) for more on these themes.
7. See Dourish (2004, pp. 103–121) for an overview of what this would look like.

In recent years, interactive multimodal visualization has captivated a segment of the data design community interested in pushing the boundaries of the infovis medium. One site where there might be a suitable home for this is in the emergent space of multisensory interaction. Tatiana Tavares (2013), in sketching out parameters for what multisensory interaction design could look like, argues that it should consider user interface design as an intersection of three main aspects: pluralism (i.e. multiple users or devices), adaptability (i.e. the user interface should be capable of being transformed), and cognitive ability (i.e. designers need to take into account the users’ senses, perception capabilities, and emotions). In the context of data representation, Susanne Tak and Alexander Toet (2013) provide a brief outline of a design framework that considers how data might be mapped to tactile (e.g. surface roughness), auditory (e.g. pitch), olfactory (e.g. smell intensity), gustatory (e.g. sweetness), and vestibular parameters (e.g. vertigo). They refer to the outputs of these mappings as “sensifications.” In their outline, they lament “a lack of consistent rules and guidelines for the integrated design of multisensory displays (data sensifications) and their user interfaces” and go on to suggest that “rigorous user studies are required to derive guidelines that ensure the consistency between different information channels.” What they propose, in effect, is a kind of grammar of multisensory graphics.10 The lack of consistent rules and guidelines can likely be attributed to the fact that projects in haptic/tactile, olfactory, auditory, and immersive/embodied forms of interaction involve quite disparate, and in many cases unique, bespoke computational interfaces and modalities. Locative media, mixed reality, wearables, and natural user interfaces all feature prominently.

8. (Drucker, 2001)
9. See Classen (1997) for an overview of an “anthropology of the senses.” Some of the most exciting recent work in this domain that speaks to an interaction design and HCI audience has come out of the emerging field of sensory ethnography. For examples, see Pink (2009) and Pink et al. (2013).
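To make the notion of a sensification mapping concrete, the following is a minimal sketch in Python - my own illustration, not code from Tak and Toet - of how a single normalized data value might be redundantly encoded across several of the sensory channels they enumerate. The function and its parameter ranges are hypothetical placeholders, not validated perceptual guidelines; deriving such guidelines is precisely the gap they identify.

# Illustrative sensification mapping. The channel names and ranges below
# are arbitrary placeholders, not validated perceptual guidelines.
def sensify(value, lo, hi):
    """Encode one data value across tactile, auditory, and olfactory channels."""
    t = (value - lo) / (hi - lo)  # normalize the value to the 0..1 interval
    return {
        "tactile_roughness_mm": 0.1 + 1.9 * t,      # bump height on a printed surface
        "auditory_pitch_hz": 220 * (2 ** (2 * t)),  # sweep two octaves upward from A3
        "olfactory_intensity": t,                   # e.g. a scent-diffuser duty cycle
    }

print(sensify(value=75, lo=0, hi=100))

The point of such a sketch is less the specific numbers than the structural claim Tak and Toet make: consistency between channels has to be designed and empirically tested, not assumed.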

A number of important themes and questions are raised by new developments in this space. These include: the quantification of multisensory and affective characteristics as a technological concern; resistance to subjective, interpretive, and artistic modes of representation in traditional scientific 3D visualization; a claim that ocularcentric interaction is already inherently multisensory; and, finally, the question of how ocularcentric phenomena are extended and naturalized into other sensory realms. Because multisensory interaction is a broad, emerging domain with few boundaries, my focus in this chapter will be limited mostly to one sensory modality: touch and tactile interaction. While I acknowledge the extensive and thought-provoking work on sensory studies in fields ranging from anthropology to museum studies, I will mostly focus my attention on trends related to interaction design.

In interaction design, the kind of phenomenological, situated, multisensory approaches described by Tavares and the like most closely resemble work on affective computing, reflective design, and culturally embedded computing.11 Reviewing the current state of multisensory interaction research raises the question: which sensemaking acts do multisensory applications purportedly enhance? Data analytic tasks, as in the olfactory VR work that Niklas Elmqvist’s group at the University of Maryland has been working on recently (Patnaik, Batch, and Elmqvist, 2018)? Or the enhancement of geospatial understanding, as in Kate McLean’s “smellwalks” and complementary “sensory maps”?12 This is an important question. If the definition of an interface is becoming more fluid because of trends in multisensory interaction, leading to debate about what an interface is, and is for, then we end up with a more interesting problem that lies at the heart of this chapter: what really is a visualization interface?13 In the following sections, I will attempt to answer this.

10. For more on possibilities and directions in this emerging field, see Roberts et al. (2014), as well as the recent edited collection by McCormack et al. (2018), which features chapters on 3D visualization, data stories, and mixed reality, among various other concerns. This collection provides a thorough and comprehensive look at where multisensory visualization might go in the coming years.

Tactile Infovis and Data “Physicalization”

If a multisensory infovis sub-field has started to take shape, its status is complicated by the fact that physical data visualization has existed for many centuries (despite remaining out of the mainstream of infovis). In this sense, infovis has an exceptionally long analog history that pre-dates modern computing. Examples from this history include the Inca quipus and South-Pacific navigational stick-charts referred to in chapter 3, as well as more recent objects that resemble contemporary infovis tropes, including Don Stedman’s 3D periodic table, produced in the 1940s while he was with the National Research Council here in Canada; Fritz Winckel’s 3D spectrogram from 1960; and Jacques Bertin’s reorderable matrices, which he developed in 1968.14

11. See Boehner et al. (2005), Sengers et al. (2005), and Sengers et al. (2004).
12. See https://sensorymaps.com/.
13. This line of questions is inspired by Andy Clark (2007), whose paragraphs exploring the question “what is an interface?” came amid similar disruptive trends.

The most extensive project to recover this history has been undertaken by Yvonne Jansen and Pierre Dragicevic, along with their associates in the Visual Analytics Project at the French Institute for Research in Computer Science and Automation.15 While their work on “physicalization” (the term they prefer) has been exhaustive, it highlights a lack of serious, coordinated, sustained efforts to pursue physical data visualization as a research or design opportunity. Also absent from their otherwise fantastic collection are examples from a domain with a much longer history - the field of tactile art.16 While tactile art has not, until quite recently, been concerned with data, there is a growing tendency for techniques from this field to be incorporated into the space of physical data design. Many contemporary examples blur the boundary between tactile art and data representation. Notable among them are Nathalie Miebach’s weather data sculptures, which are as good an example of transduction as anything I’ve come across, and provide ample evidence that the entangled tension between aesthetics|truth does not have to be a barrier to data engagement.17

Evaluation of interaction with physical data objects is a unique challenge. Various experimental projects in HCI have developed and examined novel methods of interaction but, as with interactive graphics, there is not any kind of grammar of physical data graphics. Experimental measures like “two-handed interaction” and the capacity to obtain “quick overviews” from tactile graphics are nowhere close to the level of universal acceptance that screen-based usability heuristics (e.g. Nielsen’s “Error prevention” or “Aesthetic and minimalist design”) benefit from.18 Experimental findings from a study by Yvonne Jansen, Pierre Dragicevic, and Jean-Daniel Fekete (2013) suggest that “physical visualizations should be built to support direct touch and not enclose data.” They evaluated physical 3D bar charts and demonstrated that, in certain cases, they could outperform screen-based counterparts. The ability to physically manipulate charts, they noted, was comparatively less important. With this lack of clear design guides in mind, tactile data interaction offers a uniquely different mode of sensory interaction from vision and, as a consequence, requires new usability and perception tests to be envisioned. While these sorts of lab-based experiments will be necessary, qualitative methods designed for evaluating phenomenological interaction in epistemic sites like museums should also be considered.

14. See http://dataphys.org/list/3d-periodic-table/, http://dataphys.org/list/3d-spectrogram/, and http://dataphys.org/list/bertins-reorderable-matrices/.
15. For an overview of their work, consult Jansen (2014), Jansen et al. (2015), and the wiki they’ve assembled at http://dataphys.org/.
16. See Švankmajer (2014) for an excellent survey of this field.
17. See http://nathaliemiebach.com/portfolio.html.
18. See, for example, McGookin, Robertson, and Brewster (2010) and Taher et al. (2017). For Nielsen’s heuristics, see https://www.nngroup.com/articles/ten-usability-heuristics/.

Figure 5.1: Nathalie Miebach’s “The Halloween Grace,” a piece that translates weather and ocean data from the Perfect Storm of October 1991 and depicts the collision of two major weather fronts.

Naturalization: Firewall Between Material and Digital

While there are a number of naturalized epistemic conditions that exist in the design space of multisensory infovis, including the ocularcentrism described in chapter 2 and the false dichotomy between aesthetics and truth described in chapter 3, arguably the most important one - from a design perspective, at least - posits that digital and material are entirely distinct and separate spheres. This naturalized condition places a firewall between the two, demarcating what types of designs and interactions belong to the digital realm (which is, for the most part, disembodied, as claimed in the previous chapter) and the material one (where the body cannot be erased). This is problematic, as the specificity of semiotic conventions designed for what Tufte (2006) calls the “flatland” of screen and paper is often overlooked when graphical objects are transduced and end up in the jagged topology of the real world. While this naturalized material|digital dualism would seem to insist on a separate set of semiotic parameters for physical graphics, the metaphors and tropes that prevail are frequently lifted wholesale from the visual world.

Because the normative interface of data interaction is disembodied, it is a taken-for-granted assumption that no comparable interface actually exists in the physical world (certainly no computational interface). Physical data graphics trouble this, especially as they begin to mature beyond the static “model” stage and begin to resemble the interactive figures found in digital environments.

The problem with this firewall between material and digital has to do with the erroneous idea of translation, which suggests that digital objects can simply be ported into material contexts without undergoing substantive structural and semiotic transformations. This assumes, falsely, that interaction metaphors which are designed for the digital space are also portable to material space. In many cases, they are not. Common infovis interaction techniques - for example, tooltips, scrolling for scale changes, brushing for selection, panning, and coordinating multiple views - typically require novel methods in the physical world. These can include complex assemblies of electromechanical switches and servo motors, or the simple re-arrangement of objects. But the challenge of accounting for perceptions of scale associated with 3D volume and area means that physical data objects still need to factor in their surrounding physical context.

And there is an additional operation that needs to be addressed. As scholars in media archaeology have demonstrated, the digital always has a material substrate.19 The “myth of an immateriality of information” (Parikka, 2015) leads to conditions in which physical infovis projects fail to account for the residue of their previous (or concomitant) digital instantiation. In a similar sense to how researchers engaged in digitizing physical objects have to account for their materiality, designers prototyping tangible infovis objects must account for their digitality. Alexander Galloway (2014, p. xxxiv) notes that “digitality is much more capacious than the computer,” hinting at the logical foundation at the heart of the ontology of digital technologies. This foundation, deeply connected to the mathesis described in the previous chapter, is carried wherever the output of digital technologies goes. Ultimately, objects like tangible data models need to be considered as material|digital entanglements, and not as belonging to one realm separate from another. Even in their material state, they are coloured by the politics, histories, and ontology of digitality.

19. See Kirschenbaum (2004).

Figure 5.2: An initial data tile layout presented in the case study.

Study: InclusiVis

To understand how this artificial separation of material and digital produces quite different design affordances and, ultimately, constrains the development of tactile infovis, we need to examine the entire chain of data selection, visualization, materialization/physicalization, and in-context use. The final case that grounds this dissertation’s arguments stems from a design project and empirical study I carried out, beginning in the autumn of 2017, to generate and evaluate tactile data graphics for blind and visually impaired users. This research project, called InclusiVis, is a culmination of ideas presented in the three previous chapters - techniques learned; approaches and methods questioned; and a synthesis of theory and practical application. It is also a natural extension from the museum work described in the second chapter.

The context for this project resides in the recent shift toward data-driven engagement in many facets of social life. Big data has been variously described as an opportunity (Lohr, 2012), a harbinger for the death of politics (Morozov, 2014), and a disruptor that “waits for no one” (Maycotte, 2014). The ability to understand algorithmic manipulation of large datasets and the capacity to weigh the ethical impacts of data-driven decisions are crucial data literacy skills that are increasingly brought to bear on active, engaged citizenship. While the long-term effects of so-called “data-driven citizenship” have yet to be realized, the role of data visualization in its sensemaking apparatus is already apparent. Data-literate citizens must be able to read visualizations, which include both static and dynamic graphical representations of abstract data, frequently rely on visual metaphors, and are commonly rendered as screen-based media.

Most contemporary visualization technologies, however, are insufficient for users with visual restrictions. InclusiVis had, as its original aim, the exploration of alternative data visualization modalities, particularly tactile and auditory forms that afford greater accessibility for people with visual impairments. Its inspiration came from the following question: As the ability to interpret and analyze data becomes an increasingly significant aspect of informed citizenship, how can physical data objects support blind and visually impaired citizens? The long-term goal of this research is to inform the generation of novel accessible interfaces for interpretation of large datasets (e.g. tactile dashboards).

Throughout the project’s duration, I engaged directly with blind citizens in order to generate and evaluate physical data objects that employ 3D printed tactile features and, in some cases, embedded responsive electronics with audio and haptic feedback. In order to connect these data engagements with real-world experience, I purposefully focused on topical civic data as a way to ground my data design approach.

In the sections that follow, I will describe the four main components of the InclusiVis research project: filtering and selection of appropriate civic datasets; exploratory design of 3D printed tactile models; design-based evaluation with blind citizens; and dissemination and engagement outside of the academy. In doing so, I will present selected insights from the design process and evaluation sessions in which blind and visually impaired research participants assessed and discussed physical data objects I had prepared for the study.

Throughout these research sessions, participants provided rich feedback that extends beyond usability criteria. They described what data experience has been like in their lives, communicating, among other things, an almost universal resignation in the face of data engagement - that it’s something completely alien to their lived experience. That data visualization is not media “designed for them” was a refrain expressed by nearly every person who participated in the study.

But participants also expressed thoughtful considerations around what appropriate forms of data engagement for the visually impaired might look like. One particularly valuable insight, which emerged through a study of the literature and a survey of existing examples, as well as my iterative design process,20 is that it is difficult to prevent ocularcentric biases from influencing physical representations when converting screen-based visualizations into tangible objects. The study illuminated multiple instances where this happened. The tendency to incorporate and, in effect, naturalize visual biases results in two related phenomena: 1) decreased usability for blind and visually impaired users and 2) a prevailing sense that visualization and, in effect, numerical data, is not available to the blind and visually impaired community. By iteratively building on participants’ feedback, I had an opportunity to reveal and correct for these “visualist biases” by denaturalizing them and producing alternatives.

20. Which was informed by participants’ feedback.

Summary of Study Design

Denaturalization, in the context of visualization design for blind users, requires decentering the eye. As will be revealed throughout the following sections, this is not an easy task. Data sensemaking is so often a wholly visual process from start to finish, and tangible data sensemaking frequently replicates the visual biases inherent in infovis - biases that both inadvertently and sometimes deliberately proliferate in the process of translation from screen to tactile. These biases may produce additional barriers to access - barriers rooted in naturalized assumptions that designers are not generally aware of. Batya Friedman and Helen Nissenbaum (1996) suggest two steps to counteract bias in design: first, “we need to be able to identify or ‘diagnose’ bias in any given system. Second, we need to develop methods of avoiding bias in systems and correcting it when it is identified.” Toward this end, designers must develop “a good understanding of relevant biases out in the world.” Building this contextual knowledge in a space where blind citizens have to engage with a medium that was effectively never designed for them is a difficult starting point. We must first recognize that many of the conventions for inclusive design that guide accessible infovis still primarily speak to visual experience (e.g. correcting colour scales for colour-blind users).

For this study, it was necessary to understand not just how blind people encounter data on screens, but how they encounter it out in the world, in their day-to-day lived experience. To develop this understanding, I conducted fourteen semi-structured, participatory design-inspired interview sessions, each lasting approximately two hours (average interview duration: 01:58:00), with blind and visually impaired participants. Throughout the sessions, I referred to an interview guide that I had prepared in order to explain unclear contexts or prompt discussion about key topics (see Appendix A). Following this guide, I began each session by providing background on the project, including its rationale, which led into a series of questions about each participant’s professional background, their familiarity with data and data graphics, and their experience with different modes of tactile interaction. I drew inspiration for the structure of these interviews from important works in the participatory design literature by Luck (2003), Spinuzzi (2005), and Kensing and Blomberg (1998), which together point to a need to build empathy with participants and account for the unique contexts that they individually come from. This focus on placing the user’s needs and insights at the forefront of a design process is not uncommon in inclusive design research.

Of the participants in the study, all were legally blind: 4 retained some minimal degree of vision, and 10 were almost totally blind. To recruit participants, I initially contacted people who had previously taken part in inclusive design studies at the University of Toronto’s Semaphore Research Cluster. I had prepared a research ethics proposal that indicated I would use a snowball method, starting initially with communication to professionals who I had previously been in contact with, or who had expressed interest in participating in research such as this through previous interactions. Purposive sampling would be used, as it was necessary to recruit participants who were familiar with some of the study content and its relation to their professional expertise. In the first phase of recruitment, one of my participants, who is well-connected in the Toronto blind and visually impaired community, asked if he could post a notice on an email list for people in the community. A flood of inquiries about the study came in. 17 potential participants with available schedules were contacted with invitation notices (see Appendix B), out of which 14 agreed to take part in the study.

Each session took place during weekday work hours, and participants were provided with light snacks and coffee. In 4 of the sessions, 2 to 3 master’s student research assistants from the Semaphore Research Cluster were present to observe and, in a few select moments, take part in the resulting conversations. All research sessions were video recorded to capture both the audio conversation and tactile gestures, and I transcribed and analyzed the videos using the ELAN video annotation software package.21 To analyze session videos, I employed a widely-used qualitative methodology known as inductive thematic analysis (ITA). The purpose of ITA is to identify “patterns of meaning” across a dataset by iteratively coding for variables that were unanticipated at the start of the interviewing process (e.g. “embodied data experience”). Using an inductive rather than deductive approach to thematic analysis entails not trying to fit codes “into a preexisting coding frame or the researcher’s analytic preconceptions.” This is a “data-driven approach.” Deductive thematic analysis, on the other hand, “is driven by the researchers’ theoretical or analytic interest and may provide a more detailed analysis of some aspect of the data but tends to produce a less rich description of the overall data” (Nowell et al., 2017). This method is motivated by a desire to focus on providing answers to the research question(s) being addressed. In addition to coding for discussion variables, I also coded for gestures that were unanticipated (e.g. two-handed tracing).22

While I prepared various tactile data objects for the study, the goal of the interview sessions was not to conduct a perception or usability study that would determine, for example, whether certain 3D printing materials or specific geometric primitives (e.g. rectangular bars vs. cylinders) were optimal for tactile interaction. By employing a qualitative, interview-based methodology, I was able to probe participants’ level of graphical literacy, their experience with tactile interaction in related contexts (e.g. through touch tours in museums), and their interest in different data-driven processes by providing them with evocative examples. This enabled participants to provide design suggestions that I could iteratively build on for future sessions. Interview sessions ran until the summer of 2018, providing me with sufficient time to prepare a wide range of prototype graphics to use as prompts. An explicit goal of the interviews was to understand what a blind citizen’s typical engagement with civic data might be like. To do this, I leveraged a public transportation dataset made available by the City of Toronto through its open data portal. While ostensibly free and open, this dataset was published in an almost exclusively visual format, excluding blind and visually impaired citizens from fully engaging with it, making it both topical and of likely interest to many of the study participants.

21. https://tla.mpi.nl/tools/tla-tools/elan/
22. Having previously used a grounded theory approach that drew inspiration from Charmaz (2006) in my master’s research, I was wary of using a method that would entail development of a new theory. I was particularly cautious about using a method that comes with a fair bit of epistemological baggage. A rigorous grounded theory methodology would require implementing a full set of grounded theory procedures, toward an outcome of producing a new “theory grounded in data.” ITA, on the other hand, has no such pre-stated theoretical outcome mandated in its application. My previous work surely shaped the patterns of meaning that would be discovered through ITA coding, but this method allowed for emergent findings.

Study Participants

Participants split evenly along gender expression lines, and ranged in age from university students to retirees. All were legally blind, with most experiencing almost total blindness (only 4 had any ability to read a screen visually, and only with the assistance of digital accessibility tools). 9 participants read braille. Roughly a third of them had experienced degenerative vision loss later in life (within the past 15 years) and, as a consequence, had some degree of previous visual experience that influences their sensemaking capabilities. All participants currently live in the Greater Toronto Area.23

23. Participant names have been replaced with pseudonyms.

• Ron is a long-time disability advocate who is passionate about accessibility in public institutions like museums. He has lived in Toronto for most of his life. He started going blind as a child and has retained almost no vision.

• Mark is an accessible technology specialist who has taken part in various research studies over the years. He works in a government policy context.

• Rowena is a partially-sighted researcher with a background in cognitive science. She has low vision, but uses accessibility tools to read text on screens. She attended the research session with her guide dog.

• Charlene is a writer and massage therapist. She is especially interested in astronomy data.

• Samia is retired. She spent much of her career working in banking and office administration. She has been in Canada for 35 years, but grew up in the South Pacific.

• Ulrich works as a volunteer ride coordinator. Having spent much of his life in Germany, he currently splits his time between Toronto and Frankfurt, where he works as a tour guide in a museum. He used to work in human resources.

• Eleanor is a volunteer with Toronto parks and recreation. She is a former high school teacher. She began to lose her vision around a decade ago.

• Kendrick worked in finance prior to losing his vision. He has also been blind for around 10 years.

• Mihali is a musician and composer who recently moved to Canada. His vision has deteriorated over recent years, but he retains some visual ability.

• Mildred is a researcher who focuses on disability inclusion. She holds a doctorate in education, and is originally from the Caribbean.

• Barrie is from Sri Lanka, where he worked in a bank. He is a cricket fan. He attended the session with his daughter.

• Tamsyn is studying to be a massage therapist. She uses voice interaction with her iPhone for many tasks in her day-to-day experience.

• Randy volunteers with disability advocacy groups. He is involved in a drama group that puts on skits and plays, and enjoys listening to sports on the radio.

• Elmira is a retired researcher with a doctorate in Psychology. She has volunteered as a braille instructor for a number of years.

Session Structure

Interviews included questions about each participant’s experience and familiarity with statistics and data visualization techniques; about the kinds of civic data they might find useful; and about their interest in using new digital tools to access data related to their civic experience (e.g. audio-based mobile navigational apps such as BlindSquare). The conversational flow of the interviews was guided by the use of tactile prompts, which participants were asked to reflect on in the context of specific data stories.

After gaining informed consent from each participant (see Appendix C), each session began with a short statement providing background on the project. This included a social rationale for the work. Following this, participants were asked to describe their educational and professional (or volunteer) background, as well as their level of familiarity and comfort with data interaction. When necessary, they were given examples that included sports statistics, health tracking apps, and demographic data related to civic life. Next, participants were asked about their background with tactile objects and haptic technologies, and were provided with samples of touchable objects for museum interaction if they needed further context.

Once participant background information had been ascertained, a specific civic data context was presented. Participants were provided with an initial grid-based layout of tangible data tiles to support their ability to understand the context (refer to Figure 5.2). As this part of the session unfolded, participants were given additional physical prompts to help direct the conversation. These included 3D maps, radial diagrams, haptic controllers, prototype tactile dashboards, and a variety of other objects that will be described further on. Near the end of each session, participants were asked to discuss additional civic datasets that might be of interest to them, as well as other forms of data engagement that could benefit their daily experience.

Figure 5.3: Data tile and tactile dashboard prototypes.

Civic Data Context

Throughout an extensive period of design strategizing that pre-figured the research sessions, I focused my attention on datasets related to civic experience in Toronto. Specific datasets that I used to prototype visualizations included Toronto subway station capacity and layout; granular population density and demographic data drawn from the most recent Canadian census; and neighbourhood-specific violent crime statistics available through the Toronto Police Service’s Public Safety Data Portal. Through informal consultations with blind and sighted peers, I gained insight into the kinds of data-based questions and concerns that blind citizens might have. After weighing various options that would be of potential interest to blind citizens, I made a decision to focus on datasets that are purportedly open and accessible but, for one reason or another, are available through exclusively visual channels, thereby creating a barrier of access for blind users who might wish to query them. As a consequence, my focus shifted toward data that was of particular topical interest at the time - data that was in the public conversation, and about which data-based claims were being made. This enabled me to consider how specific policy decisions and public communiqués were being made through exclusively visual media.

In November of 2017, the City of Toronto initiated a pilot project on King Street, a major downtown thoroughfare, to remove nearly all vehicle traffic and provide unimpeded access for streetcars. Toronto’s streetcar system is the second busiest light-rail network in North America, and King Street is its heart. Recent densification and persistent traffic gridlock have made transit reliability a key civic issue in Toronto. The aim of the King Street pilot project is to “improve transit reliability, speed, and capacity.”24 At the time the pilot project commenced, a firestorm of media controversy ensued as business owners along King Street claimed the project had resulted in an immediate and significant decrease in pedestrian traffic. One vocal restaurant owner suggested that King had become a “ghost town.”25 The City (and various councillors) countered these claims by proclaiming that the answers would lie in data, made available through monthly reports, related to reduced headway for streetcars and growing pedestrian numbers.

For most of the InclusiVis project’s duration, the City had not released a usable dataset (in a standard format like .csv). Nearly every month, it published a .pdf “dashboard” on its website.26 The City’s .pdf reports are filled with visual data graphics, but little descriptive text, thereby excluding blind and visually impaired citizens from engaging with them. While my decision to work with this dataset was hindered by the fact that the data was new and by no means complete, it provided a unique opportunity to work with a dataset that could prove to be consequential in an upcoming election. Because the data had not been released in a format that would make it easy to work with in contemporary analysis tools, I wrote a Python-based scraper27 to extract raw pedestrian and traffic volume data. Following this, I prepared a series of Jupyter notebooks that I used to create and disseminate visual prototypes of the data story that could be prepared for the blind study participants. These prototypes formed the backbone of nearly all 3D printed data objects that were used in the InclusiVis study (including replicas of the bar graphs contained in the City’s .pdf dashboards), as well as a later public-facing workshop that I would prepare for digital literacy specialists employed by the Toronto Public Library.

24. http://spacing.ca/toronto/2018/11/15/three-ways-the-king-street-pilot-creates-more-livable-communities/
25. https://www.blogto.com/city/2018/02/people-are-playing-hockey-king-st-again/
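The scraper itself is not reproduced in this chapter, but a minimal sketch of the general approach may be useful. The version below is an assumption-laden illustration rather than the project’s actual code: it supposes the .pdf dashboard embeds its volume tables as extractable text, and the file name and column labels are hypothetical. It uses the pdfplumber and pandas libraries.

import pdfplumber
import pandas as pd

rows = []
with pdfplumber.open("king-street-dashboard-2018-06.pdf") as pdf:  # hypothetical file name
    for page in pdf.pages:
        table = page.extract_table()  # returns a list of rows, or None if no table is detected
        if table:
            rows.extend(table[1:])    # skip each detected table's header row

# Hypothetical column names, for illustration only.
df = pd.DataFrame(rows, columns=["intersection", "period", "pedestrian_volume"])
df["pedestrian_volume"] = pd.to_numeric(df["pedestrian_volume"], errors="coerce")
df.to_csv("king_street_volumes.csv", index=False)  # the standard format the City had not provided

In practice, dashboards that render their figures as images rather than text defeat this kind of extraction entirely - itself a small illustration of the access barriers discussed above.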

Tactile Graphics

Drawing inspiration from a number of projects - including my own previous work - that make use of 3D design and printing to reimagine data visualization, my initial design goal was to prototype new methods for preparing and 3D printing data graphics inspired by visual analogues. Tangible data representations have existed for at least a century in various formats that resemble data visualization tropes common today. These include 3D bar charts, tactile maps, physical flow diagrams, and layered area charts. Recent developments in 3D modelling and digital fabrication technology have inspired various projects and approaches that seek to make the field of data visualization more accessible for blind and visually impaired users.

26. https://www.toronto.ca/city-government/planning-development/planning-studies-initiatives/king-street-pilot/data-reports-background-materials/. Data was finally made available in mid-October 2018, at https://www.toronto.ca/city-government/data-research-maps/open-data/.
27. While it would have been just as easy to do by hand, I wanted to facilitate the process for other interested parties.

Figure 5.4: Early Jupyter-based visualization prototyping.

Tangible data design for the blind and visually impaired should not be treated as merely a sub-field of infovis. It is its own rich design space that poses challenges for researchers working on interactive alternatives to screen-based visualization. It also presents a range of new problems and concerns that the standard perceptual usability methods used in infovis research cannot fully account for. In this novel design context, 2D, screen-based visualization objects that are meant to serve as cognitive supports in the process of data analysis are typically re-interpreted as 3D, tangible, immersive, and multisensory physical objects. The problem of ocularcentric bias in what is typically considered a ‘translation’ process, however, serves as a crucial design challenge. In my work, overcoming this bias has meant doing away with the translation metaphor in favour of a transductive approach (described in chapter 2) that William Turkel (2011) has used to describe a necessary reframing of digitization as materialization. Transductive materialization is a process that draws attention to seams between materials, presenting new possibilities for creative expression.

My approach to rendering 3D printable objects from 2D visualizations includes both automated script-based processes and custom artistic computer-aided design. Throughout the InclusiVis project, I prepared, among other things, physical bar graphs, maps, line and area charts, suspended 3D scatter plots, donut charts, and star plots/radar charts. Each of the techniques I employed required custom de novo designs, despite the fact that I typically started with visual inspirations that have extensive histories. Furthermore, each prototype I built has taken on a new life in physical form, as I have had to accommodate interactive features that one might encounter on a screen, or scale and perception issues that conventional screen-based visualization perception studies might call attention to (e.g. cylinder volume when creating 3D donut charts). As a consequence, my methodological choice to use iterative participatory design techniques to inform future design work was backdropped by a lack of appropriate precedents to draw on when attempting to assess the interpretability of these new data objects.

Figure 5.5: 3D printed data maps depicting population change between 2011 and 2016 by city ward.
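To give a sense of what the automated, script-based side of this process can look like, here is a minimal Python sketch - not the project’s actual pipeline - that turns a list of data values into an axis-aligned 3D bar chart written out as an ASCII STL file, the mesh format that desktop 3D printing slicers ingest. The bar footprint, spacing, and height-scaling parameters are hypothetical, and the example values are illustrative only.

def box_facets(x, y, w, d, h, base=0.0):
    """Yield (normal, v1, v2, v3) triangles for one rectangular bar."""
    x0, y0, z0, x1, y1, z1 = x, y, base, x + w, y + d, base + h
    quads = [  # one (normal, corner list) pair per face of the cuboid
        ((0, 0, -1), [(x0, y0, z0), (x1, y0, z0), (x1, y1, z0), (x0, y1, z0)]),
        ((0, 0, 1),  [(x0, y0, z1), (x0, y1, z1), (x1, y1, z1), (x1, y0, z1)]),
        ((0, -1, 0), [(x0, y0, z0), (x0, y0, z1), (x1, y0, z1), (x1, y0, z0)]),
        ((0, 1, 0),  [(x0, y1, z0), (x1, y1, z0), (x1, y1, z1), (x0, y1, z1)]),
        ((-1, 0, 0), [(x0, y0, z0), (x0, y1, z0), (x0, y1, z1), (x0, y0, z1)]),
        ((1, 0, 0),  [(x1, y0, z0), (x1, y0, z1), (x1, y1, z1), (x1, y1, z0)]),
    ]
    for normal, (a, b, c, d2) in quads:
        yield normal, a, b, c   # split each quad face
        yield normal, a, c, d2  # into two triangles

def write_stl(values, path, bar_width=10.0, gap=4.0, depth=10.0, height_scale=0.5):
    """Write one tactile bar per data value; most slicers repair winding/normals."""
    with open(path, "w") as f:
        f.write("solid barchart\n")
        for i, v in enumerate(values):
            x = i * (bar_width + gap)
            for n, a, b, c in box_facets(x, 0.0, bar_width, depth, v * height_scale):
                f.write(f"facet normal {n[0]} {n[1]} {n[2]}\n outer loop\n")
                for vx, vy, vz in (a, b, c):
                    f.write(f"  vertex {vx:.3f} {vy:.3f} {vz:.3f}\n")
                f.write(" endloop\nendfacet\n")
        f.write("endsolid barchart\n")

write_stl([112, 240, 305, 198], "pedestrian_bars.stl")  # illustrative counts only

A sketch like this also makes the chapter’s design problem legible: everything a tooltip or legend would have conveyed on a screen - axis labels, units, exact values - has to be re-invented as braille, texture, or spoken annotation, because nothing in the printed geometry supplies it.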

Dissemination and Public Engagement

The final component of the InclusiVis research design is worth describing here, as it includes the “actionable” component associated with my original mandate. My goal from the start was to develop new techniques and inform future design work in the context of accessible data analysis tools. Over the course of the project, I developed prototypes for tactile civic data dashboards. Many of these were included as interview prompts, but there was a larger goal of encouraging the development of these objects at locations throughout the city. Similar tactile interfaces, while rare, exist as maps at museums and public parks. My long-term aim is to develop these for a dynamic context in which they may be of practical use to both the blind and sighted community. Toward this end, I have been in ongoing consultation with employees of the Toronto Reference Library’s Digital Innovation Hub. Toronto Reference Library (TRL) is the centerpiece of the world’s busiest urban library system. With its non-circulating collection, TRL operates as more of a community center, and its Digital Innovation Hub has been at the forefront of library-based digital literacy initiatives in Canada. Located close to the city’s major subway interchange, the library welcomes thousands of visitors daily, including many people with disabilities.

In September 2018, I conducted a half-day instructional workshop for employees of the TRL Digital Innovation Hub, along with other digital literacy specialists working in the library system’s network of new digital innovation hubs and popup learning labs. For this workshop, I walked participants through a handful of user-friendly methods for rendering 3D printable data representations from City of Toronto open data. I made instructions, data, and a Python-based workflow available online through a code repository on GitHub, and shared an open source Jupyter notebook to communicate the workflow.28 The purpose of the workshop was to seed these hubs, each of which is outfitted with 3D printers, with the tools and capacity to print custom data graphics for blind citizens who might request them. In addition to this, my long-term goal of creating an in situ tactile dashboard at TRL will only be possible if library employees have the ability to update or reconfigure data representations. In an age of dynamic, interactive data visualization, it would be a shame to promote static graphs that would soon be obsolete.

28. The code repository is here: https://github.com/CriticalMaking/TPL and the notebook is accessible here: http://nbviewer.jupyter.org/github/CriticalMaking/TPL/blob/master/tpl.ipynb.

Figure 5.6: Mark comparing pedestrian volume counts at different King Street intersections.

Findings and Key Insights

The InclusiVis project mandate of exploring accessible data representation techniques for blind citizens to engage with civic data has been constrained by a number of design challenges that will soon be revealed. These challenges partly stem from a tendency in the infovis world to separate digital and material, rather than treat them as entangled.

This tendency leads to the erroneous idea that objects can simply be ported from one domain to the other. But they also partly stemmed from the prevalence of deep-seated epistemic biases in the infovis world that have to do with what kind of objects are deemed “usable.” The sequential nature of the interviews allowed me to construct and examine new designs and approaches iteratively, as well as to consider design flaws (e.g. illegible braille caused by “stringing,” a common problem faced with desktop 3D printers) that came up in initial prototype designs. However, I was largely unaware of how prominent my own embedded visual design biases were, even as I made attempts to move away from visual tropes altogether. Through ongoing iterative experimentation with rendering tangible representations from this dataset (and related Toronto civic datasets), and feedback from research participants that will be described in the sections to follow, it is evident that transforming 2D, screen-based civic dashboards and data visualizations into 3D tactile models has the potential to re-inscribe visual biases in ways that data designers may not be able to fully anticipate.

This, in effect, has the added potential to generate new and unexpected barriers to access, many of which can be traced back to naturalized assumptions about the epistemic validity of visual media and the inherent ocularcentrism of contemporary data interpretation practices. This problem - the epistemic biases of designers - constitutes a core pillar of exclusion by design, and yet is sadly under-recognized in the design literature on bias. This is a fundamentally different kind of exclusion by design than has been traditionally discussed in the STS and design literature,29 and it has forced me to grapple with a crucial, yet profoundly difficult question carrying the potential to undermine this entire research project: what happens when a designer’s visual biases aren’t denaturalized enough? This question will frame the sections that follow, which provide detailed descriptions of the study’s major findings and key insights.

Abstract and Embodied Mental Models

Ron has been almost completely blind since childhood. Though he had encountered tactile maps at some point in his life, he couldn’t recall making much use of them, or even being able to really interpret them. The very first data interaction I presented him with involved evaluating the efficacy of 3D printed data tiles. These separate tiles were derived from bar charts depicting traffic volume at major intersections during morning and evening rush hours throughout the King Street pilot project. While Ron has years of experience advocating for accessible interfaces under his belt, and has deeply-held beliefs about what tools are effective in this kind of context, he was quite open to experiencing traffic volume in a new way. Having laid out the tiles along an impromptu city grid that matched the general location of each intersection in the actual Toronto city grid, I assumed that Ron, who walks downtown regularly and has a high degree of familiarity with the transit system and its subway locations, would be able to easily imagine the big spatial picture and each individual tile’s place within it. What I had taken for granted, however, was that he would not find meaning in the orientation of the tiles. I assumed that he would naturally wish to encounter them as one would on a screen - facing upward, in a vertical orientation. Almost immediately, Ron asked why the tiles depicting eastbound traffic were not laid out horizontally, with their data peaks pointing right to indicate cardinal direction. The direction of traffic flow was crucial to his embodied understanding of the city, as he walks against traffic on King Street regularly, using auditory signals to guide himself, and had noticed a significant decrease in (literal) traffic volume.

29E.g. the bridges of Robert Moses.

Ron had a very personal, idiosyncratic mental “map” of the parts of the city he had been forced to navigate without vision. His description of it recalled the groundbreaking work of Kevin Lynch (1960), who described how we build mental models through embodied experience, accumulating traces of paths, boundaries, distinct districts, nodes, and landmarks. Sighted people interpret a geospatial data landscape from an omniscient map-view perspective, while blind people, depending on their familiarity with maps, often see themselves in the map and situate themselves relationally according to specific landmarks. In designing geospatial representations, visual bias toward a map-view perspective needs to be avoided.

An additional problem surfaced as I tried to logically direct Ron through a navigational path that moved across the grid’s tiles from SW to NE. In doing so, I had presented him with a layout that resembled a map as one would encounter it on a screen or paper, assuming that this base template would be familiar. Because he lives at the NE corner of the grid, however, and regularly walks downtown against the orientation that I had placed in front of him, my assistance was counter-intuitive. Ron described his mental model of the city not as an ordered grid over which he has a kind of God’s eye perspective, but as similar to “a big ball of cooked spaghetti” that one can turn around in one’s hand.

His home orientation, he described, is somewhere in the middle of the spaghetti. “It would make no sense to a visual person,” he said. The model I presented had attempted to translate, as directly as possible, the visual experience of encountering geolocative data, as one might on a civic data dashboard. This was a completely alien perspective to him, though. For Ron, his own embodied map unfolds as he moves through space, like a procedurally-generated game environment. As I presented the same data tiles to other participants, with adjustments to the base orientation and initial tactile experience, I found that there were two major differences in how they preferred to receive the tangible data objects.

Those who had either never had vision or had lost it at an early age found it confusing when the graphs were placed vertically, as one would typically encounter them in a 2D, screen or paper-based context. Those who had previously had vision and lost it later in life, or were partially sighted and had relied on “visual” tools in the past, generally preferred the vertical orientation because, in most cases, they were familiar with what bar graphs were and how they functioned. Many of them had internalized the standards of graphic representation from a sighted perspective. Furthermore, these participants, who maintained some sort of visually oriented information model, relied in many cases on a detached, God’s eye perspective to facilitate navigation. For them, relating to tactile objects as translations of their familiar visual analogues was perfectly acceptable.

Those who had never had sight or lost their vision early in childhood did not typically share this view. Participants who had no experience with visual models like maps or grids situated themselves through memory or personal embodied experience, a number of them constructing abstract mental models that, to a sighted person, would seem wholly incomplete. Reena, for example, relies on her father to drive her between her home and the university where she works. She has little sense of the city’s grid, and can only place key landmarks relative to her experience passing them on her route. According to her, they have little geospatial relation to each other in her mental model. These specific insights related to embodied experience caused me, the sighted designer, to reconsider how my own ocularcentric biases about the efficacy and validity of statistical graphics were shaping my design of tactile objects for users who may have no reason to consider these graphical tools at all. For me, bar graphs were the most common and best understood graphical trope for telling the story of grouped categorical data. Whether they had the same epistemological currency for the users I was claiming to design for was something I had not fully considered.

Scale Dissonance

A feature of visualization that makes it a core piece of the data analysis pipeline is that it offers the chance to discern patterns at different scales of interpretation. The distal sense of vision offers the possibility of quickly scanning an entire image, getting, in effect, a bird’s eye perspective. This macro read on a graph’s overall shape and meaning relies on various perceptual conventions, depending on the type of graph one is dealing with.

“Drilling down” to discover granular detail - moving to what is effectively a worm’s eye or micro perspective - can produce additional insights. Outside of immersive environments, these scale shifts rarely take on an embodied character in screen-based visualization. The data analyst or interpreter is effectively disembodied. Their body is not to be inscribed in the interface for fear of interrupting the God’s eye of objectivity. Communicating an information overview in order to discover broad trends, then, without losing granular detail, is a significant challenge when preparing tactile data graphics, as the body is effectively reinscribed into the site of interpretation.

Figure 5.7: Ron reading braille descriptive text and tracing graph outlines on prototype tactile dashboard layouts.

Finding the right scale for a tactile data object - so that a participant could get an overview using their entire hand, and then explore with their fingertips key features, commonalities and disjunctures, braille text, and even the materiality of the object - factored into my design process. I observed that a number of the participants would feel the entire object, trace the outlines, and ask questions about the scale and meaning of different pieces, many of them ambidextrously. Mark, for example, was a participant with a background in accessible technology assessment. His considerable experience testing braille displays gave him a keen sense of whether the 3D printed braille text and other semiotic features were placed appropriately. I presented him with a number of small dashboard prototypes that were meant to test the side-by-side layout of multiple graphs,30 placed alongside larger context views of specific graphs along with crucial braille descriptive text. Because these dashboard prototypes were, for the most part, modelled on screen-based UI templates, I failed to consider whether the vertical placement of braille along the sides of certain graphs - a technique designed to accommodate 3D printing space constraints - would be easy to read or would interfere with the user’s ability to interpret what was going on in the graph itself.

Although most of the participants could jump between a “zoomed-out” big picture view and a granular focus using their fingertips, I found that this problem was compounded when I handed participants singular data objects removed from their context and asked them to interpret them. This sort of separation can be an important part of dynamic interaction, but it can also place an undue cognitive burden on the user, who is forced to remember the spacing, placement, and orientation of the tactile graphic if they wish to place it in relation to other objects in order to “zoom back out.” In the context of a tactile dashboard, what might be needed is not multimodal placement, but multi-depth representation, in which exploded views of different scales can be stacked on top of each other, or nested like a matryoshka doll.

Seamlessness

This problem of scale dissonance was additionally compounded when I tested prototypes that were wired with conductive tape to produce seamless capacitive touch buttons that would trigger audio descriptions of the data.31 In the visual graphic design world, seamlessness and minimalism are often considered virtues. In the space of tactile interaction, it was a major design flaw, as participant users were generally unable to determine the boundaries of the inputs, often triggering audio to play inadvertently. While this mixing of sensory modalities - a tangible object with specific tactile features embedded with auditory feedback - was an interesting design experiment, the rapid fire of numerical data proved only to confuse participants while they were attempting to interpret the data at differing tactile scales.

30Designed similarly to what is known in the visualization world as “small multiples.”
31Using familiar text-to-speech voices.

Figure 5.8: A research participant interacting with a haptic interface that would provide audio playback of associated data from text-to-speech files.

Almost without exception, participants requested prominent buttons for interaction, stating that seamless interfaces, even if they provoked a serendipitous interaction, were counter to the goal of providing information and designing a usable interface. My own bias toward minimalist, seamless design caused me to be completely oblivious to the fact that blind users might find this move toward the seamless even more alien than the digital interfaces they already encounter regularly.
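The dissertation does not document the prototypes’ wiring at the level of code, but the interaction logic was roughly of the following kind. This is a hypothetical sketch, assuming a Raspberry Pi with an Adafruit MPR121 capacitive-touch breakout (a common way to read conductive-tape electrodes) and the pyttsx3 text-to-speech library; the hold threshold is included as one possible response to the inadvertent-triggering problem described above, closer in spirit to the prominent buttons participants requested.

```python
# A hypothetical sketch of the audio-triggering logic, assuming a Raspberry Pi
# with an Adafruit MPR121 capacitive-touch breakout wired to conductive-tape
# electrodes, and pyttsx3 for text-to-speech. Not the study's actual code.
import time

import board
import busio
import adafruit_mpr121
import pyttsx3

# Placeholder audio descriptions keyed to touch electrodes (illustrative only).
DESCRIPTIONS = {
    0: "King and Bathurst, morning pedestrian volume, twelve hundred.",
    1: "King and Spadina, morning pedestrian volume, twenty nine hundred.",
}
HOLD_SECONDS = 0.4  # require a deliberate press, not a grazing touch

i2c = busio.I2C(board.SCL, board.SDA)
touch = adafruit_mpr121.MPR121(i2c)
tts = pyttsx3.init()

pressed_at = {}  # electrode -> time the current touch began
while True:
    for pad, text in DESCRIPTIONS.items():
        if touch[pad].value:                      # electrode is being touched
            pressed_at.setdefault(pad, time.monotonic())
            if time.monotonic() - pressed_at[pad] >= HOLD_SECONDS:
                tts.say(text)
                tts.runAndWait()                  # blocks until speech ends
                pressed_at.pop(pad)               # re-arm after speaking
        else:
            pressed_at.pop(pad, None)             # reset on release
    time.sleep(0.02)
```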

Design Insights

Despite the failure of this audio-haptic design experiment, the idea of audio-based feedback proved to be a promising area for future research, as even the participants with considerable braille and tactile experience held great hope for voice and audio tools that would resemble the navigational apps and screen readers they had become accustomed to in the smartphone era. Many stated a preference for querying an AI chatbot for data figures, even above being provided with interactive tactile models. Barring a responsive audio interface, various participants expressed an interest in having someone guide them through the data. Narration was of paramount interest, whether through audio description or substantive braille legends. Braille takes up considerable physical real estate, however, and a number of participants were content focusing on interpreting the graphical objects patiently, without having to learn everything about the data at first glance. This runs counter to some of the arguments made by Robert Kosara and Stephen Few in the previous two chapters, and partly aligns with Mushon Zer-Aviv’s argument in favour of “reambiguation” and complexity. In the context of blind data experience, taking time to make sense of data is worth the effort.

A significant concern for anybody working on inclusive tactile infovis is that much of the inclusive design literature that addresses infovis focuses on users with colour blindness.

This problem highlights a tendency to imagine universal users with similar needs. In a recent lecture,32 Sara Diamond, an important presence in the infovis world whose role as President of a major art and design university33 gives her a vantage that many in infovis do not have, argued that designing for a universal audience is a mistake. Taking into account the different cultural lenses that users are influenced by is an important task in any HCI project, she noted, as different communities have aesthetic preferences and metaphors that are specific to them. When the user is generalized, Diamond claimed, they are “stripped of subjectivity.” There is a related issue worth mentioning: design in the space of tactile and inclusive infovis must take into account the data/statistical literacy that users possess. Barrie, a research participant from Sri Lanka, had some experience with statistical graphics in his childhood and teenage years. He suggested that basic graphs (like bar graphs) would be ideal, even when there is no baseline for understanding them. Metaphors have to be similar to things blind users might already understand, he noted, and the tactile world is full of rectangular objects that can be drawn on for spatial reference. There are various visual infovis metaphors that would be unintuitive to blind users, Barrie argued, including some types of radial and circle diagrams (including pie charts).

32http://www.tux-hci.org/speaker/sara-diamond/
33OCAD University in Toronto

Figure 5.9: Barrie reading a braille legend on a cylindrical radial graph.

With that in mind, the question of which visualization metaphors are best and most appropriate for this work came up numerous times. This is an open question, and the one-size-fits-all user problem makes it difficult to answer. Blind users with previous visual experience, for example, might make better sense of bar graphs than those who have been blind from birth. Does that mean entirely new interaction metaphors need to be thought up? Not necessarily. Some participants, for example, liked the idea of using an already-on-the-market “smart pen” or stylus that could trace around graphic geometry and respond to annotated triggers. Others - notably, participants who had been blind for most of their lives and, consequently, had a high degree of experience with ambidextrous tactile interaction - suggested that a stylus would inhibit one’s ability to make sense through feeling. Tracing with a finger provides all sorts of material cues that a stylus, even with haptic feedback, simply could not replicate.

New Questions and Research Opportunities

The study spurred a number of new research questions that raise issues for future work in this area. I share them here, along with some thoughts on how to make sense of them. The first has to do with comparing the starting points from which blind and sighted analysts might begin. What are the questions that a sighted analyst would ask of the data? What patterns would they look to recognize? Are these the same that blind analysts would search for? Would visualization literacy training create a shared semiotic environment and, if so, is that a good thing? A related question concerns what sort of insights a blind person might make that a sighted one might not (or even could not). In order to address these questions, a much larger sample of blind data analysts would need to be identified - a difficult task in and of itself.

Designing an experiment to compare how each group identifies salient questions would be challenging. A between-subjects design with multiple conditions and types of data interaction would likely be necessary.

Related to this question about pattern recognition, how can we design for outliers in a way that blind and visually impaired users will notice them? An experiment that compares eye tracking data for sighted users with gesture tracking for blind and visually impaired users could focus on how much time each group spends identifying and focusing on clusters within given data visualizations. Is one group more likely to explore the entire visualization and identify outliers? Are blind users with ambidexterity more capable of using one hand for context and another for focus? These are just a few questions that could guide such an experiment.

Real-World Effects of Translation

3D printed tactile data visualization is an emerging research topic, and scant work has been done to determine the efficacy of innovative applications of it. Like the emerging research areas of immersive and multisensory visualization, data physicalization/materialization suffers from the critique that what it gains in aesthetics and novelty, it lacks in truthfulness. Regardless, researchers, data designers, and infovis practitioners are beginning to produce innovative work. Getting this into the hands of the general public - and the visually impaired community in particular - remains challenging. Even if we disseminate code, build tutorials, and design engaging examples, there is still a risk that the same visual tropes inherent in flat visualization practices will get ported into this new space, limiting the usability and interaction possibilities that might otherwise exist, unless those tropes are surfaced and denaturalized early in the design process. As noted at various points in this dissertation, my findings suggest that the process of translating 2D, screen-based dashboards and data visualizations into 3D tactile models has the capacity to reinscribe visual biases that produce entirely new and unexpected barriers to access.

These additional barriers are rooted in naturalized assumptions about the inherent ocularcentric character of data interpretation that many designers are unlikely to be aware of.

What happens when epistemic biases like ocularcentrism are not denaturalized? People like the engaged citizens who have taken part in this study can be effectively erased from participating in the public sphere. In a data-driven society, where data representation helps determine the weight of political arguments, this is a consequence worth rooting out before it has a chance to produce the kind of data inequality that already exists in other marginalized communities.

Method: (Re)Materialization

Tangible data objects have been given many names: physicalizations, tangible infovis, data sculptures, physical models, etc. What most articulations of these media forms fail to account for is the fact that we are never dealing with a one-way transfer from digital to material. The term materialization has come into vogue in some infovis quarters, where it belies the fact that the transductive processes required to produce these media are never uni-directional.34 The workflow of interaction designer John Fass, who has designed various “sculptures” of digital processes, has been described in the following way: “John’s primary method is materialisation, the use of physical materials to create objects. Unlike visualisation which is two-dimensional, materialisation implies tangibility and spatiality.”35 Fass’s work seeks to give shape - “tangibility and spatiality” - to the common metaphors for what are seen as ephemeral digital phenomena (e.g. “the cloud”).

The description of Fass’s work continues: “having to materially represent them forces an appreciation of the metaphors, what metaphors hide and reveal, and to think about how they help - or hinder - cutting through technical complexity.” The most insidious problem with this metaphor, and this description of how Fass gives it shape, is that the cloud is not ephemeral and immaterial at all. It is made of datacentres, fibre lines, mobile phones, and network technicians. “Materialization” both effaces and fails to account for the cloud’s already very material state.

34“Materialization” in this context is not to be confused with its use in a database UI sense, where it refers to views that contain the results of a query.
35https://visualisingadvocacy.org/blog/materialising-online-objects

But the material state of the physical data object also needs to be considered. Jansen, Dragicevic, and Fekete (2013) acknowledge that, while the “dichotomy between real / physical and virtual / digital in HCI is an elusive concept,” the shape and properties of the matter that data “physicalizations” are made of are subject to change over time. Even a static physical model is never truly fixed. And there is a difference between modelling and so-called materialization. If a physical model can be the basis for a simulation,36 then the simulation’s digital aspects must be part of its ontology. This includes any adjacent computational data analysis. A model is usually a reproduction of a real object, often at a different scale or in an abstract configuration, while a visualization involves the process of visual mapping, through calculated design decisions related to visual variables like colour and position, in order to encode data (Jansen, Dragicevic, and Fekete, 2013).

Degrees of “fit” between the model and the real-world object that it is meant to represent contribute to this semantic distinction.37

So, we encounter a methodological dilemma when we try to create physical visualizations or materializations that obscure these various aspects of their materiality. I propose a more encompassing term: rematerialization. Rematerialization entails acknowledging that the pipeline of data visualization does not begin in the digital realm at all. It invariably begins in the material world. It is never a linear process. There is a real-world material context for every tangible graph, in both its origins and its output. Paolo Magaudda (2013) argues for a similar use of this term in the digital consumption of music, where he suggests considering materiality as a kind of bi-directional circuit. In my use of the term, I suggest we need to think of it as a material|digital entanglement, in which the state of each iteration influences the other. Digital manifestation and material instantiation, in this sense, behave as “affiliative objects” that are “fraught with significance for the relations they materialize” (Suchman, 2005). Rematerialization, then, as a method for producing tangible data objects, entails accounting for the entire chain of materiality, and mandates pushing back against the reductive, linear notion of translation.

36E.g. the famous Mississippi Basin model, which can be read about here: http://www.atlasobscura.com/places/the-mississippi-river-basin-model, or the Phillips Machine that modelled economic processes with flows of water, which can be read about here: https://opinionator.blogs.nytimes.com/2009/06/02/guest-column-like-water-for-money/?_r=0.
37See Knuuttila (2005, 2011), and Wartofsky (1979).

Conclusion

3D media such as tactile models are not more persuasive than 2D media such as illustrations; both may include exaggeration and omission, and they should be interpreted in tandem, not in opposition. Jentery Sayers (2015, p. 173)

I began this chapter by asking how the digital affordances of screen infovis differ from the material affordances of tangible infovis. How does the material world resist representational practices that have been crafted for digital, screen-based interaction? How and when does the material push back? Through describing findings from the InclusiVis study, and reflexively critiquing my own design process throughout it, I have given various examples of the material “pushing back” against the digital. In chapter 2 of the dissertation, I asked what happens when historically visual domains (like infovis) are unsettled by non-ocularcentric practices. I described a condition of ocularcentric objectivity that is certainly applicable in the data interaction context I’ve described in this chapter. While the case study I illustrated in that chapter was not in an infovis context per se, its technical methods strongly influenced my design approach to the InclusiVis project. That chapter’s methodological contribution - the use of “transduction” rather than “translation” - is also deeply connected to this chapter’s findings. Tangible data objects can never be translations of digital visualizations. In chapter 3, I asked how we might go about denaturalizing the “increasingly familiar” graphical visualizations we encounter, suggesting that a number of structural formations shape truth claims that are made about infovis (including that truth itself is somehow not compatible with artistic design). I presented ideas for developing an approach to critical visualization that includes designing against bias, a component of which is to ask which groups of people are excluded in data design processes. The InclusiVis study is evidence of a very real group being effectively excluded from an important aspect of modern life.

In the previous chapter, I asked what would happen if naturalized scale conventions (such as God’s eye view) were disrupted by inserting the body into the site of data interpretation. In this chapter, the question of scale is even more salient, as it demonstrates, again, the fallacy of translation. Simply translating dashboard views from the disembodied, digital context produces scale dissonance in tangible graphics. The method that I proposed in the previous chapter, estrangement, suggests that we look for ways to defamiliarize objects “in order to perceive our data and our uses in new ways.” I connected this to seamful design and suggested calling attention to parts of the interface that haven’t been smoothed. In this chapter’s study, it became clear that I also needed to address parts of the interface that have been smoothed, as the seamless haptic interactions I designed produced an alien encounter for the study’s users. By now it should be clear that all of the dissertation’s naturalized epistemic conditions are at play in each of the diverse interaction case studies I have presented throughout the text. The methods I’ve suggested are not intended specifically for the cases that inspired them. They are intended to be part of a holistic approach to denaturalizing infovis in a range of emerging contexts - VR exploration; tactile graphics; multimodal storytelling; and others I haven’t documented, such as sonification.

When we reflexively examine and act on our own visual biases, it becomes easy to see how persistent they are. Attempts to reimagine data visualization - to make it more accessible and inclusive - frequently replicate inherent visual biases, though. Tangible data representations, from 3D maps to materialized bar charts, typically resemble the visual graphics from which they take inspiration. Even if visual features are withdrawn, these epistemic objects reside on a substrate of ocularcentric design tropes. When visual biases are allowed to infiltrate tangible data design processes, they both reproduce their ocularcentrism and produce additional barriers to data access. As data interpretation increasingly becomes a factor in civic experience, we must consider how normative assumptions (e.g. that dashboards should be shaped according to the user interface of screens) make it possible to exclude entire groups of people from engagement. If data literacy initiatives are to be taken seriously, each time a new interface, data portal, app, or hackathon is proposed, it will be of crucial importance to weigh whose agency is being reduced by design choices.

In focusing on specific datasets that have ocularcentric characteristics, I was forced to address the question: are there datasets that don’t? Could I have re-framed the question to InclusiVis study participants from “what data/datasets do you think might be valuable or appropriate for blind citizens” to “are there data/datasets that are particularly suited to blind citizens and the blind experience?” Sheila Jasanoff, in making her case for a method of surfacing naturalizations and subsequently denaturalizing them, advocates studying phenomena as if “through the eyes of visitors from other worlds.” Maybe, if my starting point had been to adopt this strategy - to adopt the blind perspective, rather than the perspective of an inclusive designer - I might not have had as many challenges transducing media. I might not have had to denaturalize through a process of reflexive self-critique. Today’s inclusive design movement envisions wholly new modes of interaction enabled by technologies that encourage multisensory and multiple-user experiences. And, yet, effective tangible visualization design for visually impaired users still needs to de-center the eye.

This research suggests that it is imperative for technology designers and engineers who are working on tangible objects for blind and visually impaired users - in fact, all designers and engineers who are interested in working against inequality by design - to be aware of and address the epistemic biases that are naturalized in their design processes, software tools, etc. Paramount among these biases, in the space of data interaction at least, is a persistent ocularcentrism that can be traced to the Cartesian revolution. This is a story of inequality of access, but it is also a story of inequality by design. Any time evidence of inequality by design emerges, designers interested in denaturalizing their own biases are forced to ask which groups get excluded. There is an important flipside to this. Exclusion means some other group will be included, or privileged. In the case of accessibility to civic data experience, we must ask who makes up this group. Is it an active process of exclusion that they either create or participate in? I suspect that the City of Toronto’s graphic designer who prepared the King Street pilot documents was not trying to actively exclude blind citizens. Given the resources to produce multimodal (in the interaction sense) data experiences, I’m sure they would do their best to accommodate as wide a public as possible. My own tendency is to support them in this, rather than simply critique them. To this end, I have sought pathways to make my processes and findings accessible by sharing them with the City of Toronto’s open data team, as well as the Toronto Public Library’s digital innovation network.

Despite claims that it will make inclusive technologies more prevalent, however, data-driven society continues to present an ominous vision of unequal access. From smart city infrastructure built for those who can afford it, to data mining in communities that lack the literacy to challenge privacy incursions, to inaccessible interfaces that assume blind users will never encounter them, the technologies of this momentous shift in social arrangement are a long way from being equality-driven. Naturalized assumptions about appropriate knowledge practices are too often taken for granted, even by those of us who profess to engage in inclusive design. In telling this story, I want to emphasize a specific methodological and design orientation. Revealing exclusion (i.e. which citizens or users are denied civic agency due to intentional and unintentional design choices, as well as which ones are granted greater agency) is only one half of the equation. The other half is about placing inclusion at the foundation of a data design practice by considering the needs and concerns of excluded communities first.

I conclude this chapter by revisiting the claim that new modes of multisensory visualization destabilize the ocularcentrism of visualization media. I don’t believe that is entirely true. If anything, they complement existing modes of visual interaction. At first glance, the quote from Aquinas that opens this chapter might not seem apropos. But it’s not an admonition to trust vision. It’s asking us to trust the hybrid sensorial apparatus - the various senses acting in concert to grant meaning to phenomena. Multisensory data interaction should not seek to translate or replace visualization media. It should augment and complement it, even if we need to imagine entirely new metaphors to make this possible.

Chapter 6

Information, Not Representation

Summary: As the dissertation comes to a close, we return to the question of why its themes matter. What does it entail to build responsible, inclusive, effective visualization media, and what might it take for that media to resonate with the public? This concluding chapter makes a case for bringing depth to representation and outlines its argument within a framework of critical information practice.

Representation fails to capture the affirmed world of difference. Representation has only a single centre, a unique and receding perspective, and in consequence a false depth. It mediates everything, but mobilises and moves nothing. Gilles Deleuze (1995, pp. 55–56)

The Eames motto: ‘there was only communication: information, not representation.’ Orit Halpern (2015, p. 224)

A ‘bit’ of information is definable as a difference which makes a difference. Gregory Bateson (2000, p. 315)

Introduction: Critical Visualization as a Critical Data Practice

Data never “speaks for itself.” It doesn’t wait, latent and inert, for some objective observer to provide the means for it to be revealed (Gitelman, 2013). Information visualization is so much more than a vehicle for this supposed revelation! It is both a collaborative sensemaking tool and a rhetorical object, as well as an important window into our collective epistemic practices. Visualization is what we use to infuse data with meaning. William Playfair, the inventor of many of the common visualization tropes we use today, from the bar graph to the pie chart, is held up by the likes of Tufte, Cleveland, and Wilkinson as the pinnacle of statistical graphic excellence, a visionary who devised tools to coax beautiful evidence from the natural and social worlds. The most common description of Playfair is that he sought truth in data, and invented graphical methods to unveil it.

Playfair’s real legacy is that he was concerned with truth and beauty equally. Today, it seems that one can hardly express a desire to search for truth or beauty in the natural and social worlds, however abstract or vague these terms might have become, without being ridiculed.

This concluding chapter of the dissertation is concerned with why infovis matters. Returning to the social themes outlined at the beginning of the first chapter, I expand on issues that have been raised in the subsequent chapters to discuss why there is now a crucial need for critical visualization literacy. Developing this will require re-thinking what “critical” means for this context, and I attempt to do so here. In making the chapter’s argument, I will outline a framework for critical visualization that is situated within a broader approach I loosely term critical information practice. This is an argument for responsible communication. It is about messaging that is not reduced to what Tufte calls “evidence presentation” or “revelation.” The argument I make calls for multi-faceted, polyvocal interactions with information. “Letting the data speak for itself” has become an altogether too convenient excuse for taking visualization media at face value without giving careful consideration to how it fits into a larger ecosystem of information engagement. In the preceding chapters, I have presented a series of strategies for denaturalizing infovis. Together, they can be thought of as a conceptual toolkit for critical visualization. As I have presented it throughout the dissertation, this approach has two tracks.

First, denaturalization requires examining naturalized values. I have demonstrated this in each chapter by identifying and surfacing various epistemic conditions and illustrating how they have come to be naturalized. The second track of this approach entails demonstrating and providing alternative visualization methods. Illumination and critique, I feel, do not do enough. We require new methods if scholars wishing to probe the boundaries of infovis using critical tools also wish to intervene in the worlds they are studying.

What, then, is critical visualization in practice (and out in the world)? Critical visualization using the methods and strategies of denaturalization that I have provided throughout the dissertation? Critical visualization as a form of critical data practice? Its critical edge is situated within what Orit Halpern has recently described as a “contradiction between the desire to produce knowledge and the demand to circulate information, otherwise understood as a space between older histories of knowledge and the emerging paradigm of bounded rationality” (Halpern, 2015, p. 191). The underlying logic that structures the tension she describes also underpins “our contemporary cultural attitudes to the screen, the image, and data.” The discipline-specific motivations of information design have always been about developing models for controlling society through attempts to render “the world we live in as simulation and model” (Hagener, 2015). This tendency, from the post-WWII development of mass media, through the cybernetic revolution, up to the past decade’s emergence of individual-scale data monitoring practices, is not about subjective disciplining - it is about data power. The “critical” in critical visualization practice has to do with recognizing and unsettling data power trajectories. It is about being skeptical of “data-driven” regimes and all the claims they make.

“Data as truth” is a fallacy. Data is merely the fuel for different epistemic traditions and communities to construct their truth claims. Various academic movements, from STS to critical data studies, recognize this. There is really no such thing as “raw data,” Lisa Gitelman suggests (Gitelman, 2013).1 These arguments are not exclusive to academic critique, however. “Data is always biased, measurements always contain errors, systems always have confounders, and people always make assumptions” writes Angela Bassa, Director of Data Science at iRobot, an IoT company.2 Recognizing and discussing these issues will enable responsible researchers, data artists, infovis professionals, and citizens alike to correct for bias, account for personal assumptions and measurement errors, and build tools that don’t replicate the various flaws that have been described throughout this dissertation. We must be careful not to throw the baby out with the bathwater, however. Critical hermeneutic distance cannot be the only way to engage reflexively. It is possible to undertake critique using the tools of empirical science, and willingness to engage with them should be a cornerstone of critical methods developed for the infovis domain. But a truly critical approach to visualization practice will require multiple facets: illumination of the dynamics of data power; engagement with the broader public on issues related to data-driven life; and reflexivity as a design principle. These facets are captured in three contemporary movements: the re-emergence of political visualization, the growth of data journalism, and what might be termed a “reflexive turn” in visualization design.

1Her claim extends earlier work by the likes of Drucker (2011), who advocates using the term “capta” to acknowledge the complicated trajectory between measurement and sensemaking.
2https://medium.com/@angebassa/data-alone-isnt-ground-truth-9e733079dfd4

Political Visualization

On my part this passionate study [of statistics] is not in the least based on a love of science, a love I would not pretend I possessed. It comes uniquely from the fact that I have seen so much of the misery and sufferings of humanity, of the irrelevance of laws and of governments, of the stupidity, dare I say it? – of our political system, of the dark blindness of those who involve themselves in guiding our body social that ... frequently it comes to me as a flash of light across my spirit that the only study worthy of that name is that of which you have so firmly put forward the principles. Florence Nightingale to Adolphe Quetelet3

Information graphics have been political for well over a century. Minard, second only to Playfair in the pantheon of great and influential visualization pioneers, constructed his last work - arguably the most powerful and most reproduced data graphic ever, depicting Napoleon’s long winter march through Russia - as an anti-war poster (Tufte, 2006, p. 134).4 This wasn’t just a descriptive infographic - it was an anti-war propaganda missive on the eve of the Prussian invasion of France. When one of the most famous data graphics, a canonical piece, carries such a heavy political argument, it speaks to the rhetorical power of graphics.

There are multiple ways that political visualization operates. It can be used to persuade and proselytize, or to illuminate and edify. It can expose and disrupt regimes of data power. It can provoke action around significant public concerns (e.g. global warming).5

3See Diamond and Stone (1981, p. 72).
4Also see Michael Friendly on Minard’s influence: http://www.datavis.ca/gallery/re-minard.php.
5Some interesting recent examples that combine all of these actions include climate scientist Ed Hawkins’ widely circulated radial .gif of global temperature increase (https://twitter.com/ed_hawkins/status/870292202357968896), or these interactives from the New York Times: https://www.nytimes.com/interactive/2017/07/28/climate/more-frequent-extreme-summer-heat.html?smid=tw-share and https://www.nytimes.com/interactive/2018/08/30/climate/how-much-hotter-is-your-hometown.html.

Figure 6.1: Minard’s famed visualization of losses suffered by Napoleon’s army in its winter march through Russia.

It can be used to aggregate and draw comparisons between national and continental relations (e.g. exhaustive global data projects like Our World in Data6). It can be used to inform subterfuge and projects that seek to disrupt the sociotechnical status quo.7

In chapter 4, I examined a few ways that visualization can be used to deceive and/or manipulate. Political - and even “activist” - visualization can no longer be written off as simply agit-prop or deceptive media. There is a generation of information designers learning to make use of tools that formerly resided in the domain of statistical analysis.

Some of the most exciting current work in infovis, from a design perspective at least, could certainly be classified as political infovis. Yanko Tsvetkov’s Atlas of Prejudice, for example, uses humorous thematic maps to critique contemporary geopolitical affairs.8

The Center for Artistic Activism uses creative approaches to visualization in its efforts to “change behavior” and “elicit empathy or compassion.”9 But there is a flip side to this coin. Without democratizing the tools and outputs of visualization, it will serve to consolidate data power in the hands of an increasingly small collective. Catherine D’Ignazio, a prominent data literacy activist and scholar, puts it bluntly: “Until we acknowledge and recognize that power of inclusion and exclusion, and develop some visual language for it, we must acknowledge data visualization as one more powerful and flawed tool of oppression.”10

6https://ourworldindata.org/
7See chapter 2 in Raley (2009).
8https://atlasofprejudice.com/?gi=94b2676e6fea
9https://c4aa.org/2016/01/data-visualization-for-what/

The (Re)Emergence of Data Journalism

The power of the axiom that “a truthful press is an objective press” appears to be waning.

Journalism in an attention economy relies on clickbait headlines, fanning the flames of political rhetoric, etc. “Truth,” we are repeatedly reminded, is under attack. The past half-decade has been a fertile moment for the development of journalism that attempts to counter these concerns by using data and statistical methods in its storytelling. Organizations like ProPublica and Google News Initiative are at the forefront of efforts to develop responsible data journalism initiatives (including extensive training programs).11

MediaShift’s Martha Kang writes: “the job shouldn’t be left to big newsrooms with dedicated teams. In this era of big data, every journalist must master basic data skills to make use of all sources available to them.”12 As if responding to Kang’s call, specialized data journalists with domain knowledge and data skills have started to emerge.

10https://civic.mit.edu/2015/12/01/feminist-data-visualization/
11ProPublica has been a leader in producing data-focused stories. Its mission is to “expose abuses of power and betrayals of the public trust by government, business, and other institutions, using the moral force of investigative journalism to spur reform through the sustained spotlighting of wrongdoing.” As part of its mandate, ProPublica has initiated a collaboration with The Ida B. Wells Society for Investigative Reporting to host workshops where journalists can learn how to use data, design, and code for journalism. On a similar note, see Google News Initiative’s new data journalism handbook, interestingly titled “Towards a Critical Data Practice”: https://medium.com/google-news-lab/introducing-the-new-data-journalism-handbook-3e9c1ed7db2b.
12http://mediashift.org/2015/04/its-time-for-every-journalist-to-learn-basic-data-skills/

One of the most vocal advocates for data journalism has been Nate Silver, founder of the influential website FiveThirtyEight. In a 2014 article, Silver wrote a sort-of manifesto for the new field, proclaiming in it that his “disdain for opinion journalism (such as in the form of op-ed columns) is well established, but my chief problem with it is that it doesn’t seem to abide by the standards of either journalistic or scientific objectivity. Sometimes it doesn’t seem to abide by any standard at all.” Silver closed by writing that “FiveThirtyEight’s philosophy is basically that the scientific method, with its emphasis on verifying hypotheses through rigorous analysis of data, can serve as a model for journalism. The reason is not because the world is highly predictable or because data can solve every problem, but because human judgment is more fallible than most people realize - and being more disciplined and rigorous in your approach can give you a fighting chance of getting the story right.”13

Despite Silver’s parochial vision of a new journalistic method, so-called data journalism is really not that new. We can trace it back, at the very least, to William Mitchell Gillespie, a New York Times graphic artist who, in the 1850s, published various charts detailing census-based demographic change, cholera deaths, and other related themes that would not look out of place today.14 The Economist has been doing data journalism since quite possibly the 1840s, when it first started publishing data tables. Seven of its first issue’s sixteen pages, in fact, contained tables. Over the past decade, however, its “journalists and editors (and the world at large) developed a taste for data-driven stories, and a new format was created: the chart-based article.” In this type of feature, the chart is the star, and accompanying text only “plays a supporting role.” In describing the data design process at the Economist, Alex Selby-Boothroyd, the magazine’s Head of Data Journalism, writes: “Cow gum, scalpels and correcting fluid have given way to computer-aided chart-making, and new coding tools like R and Python have allowed visualisers, researchers and journalists to collaborate on a more expansive online version of the charticle, the ever-popular daily chart.” Today’s data journalism relies on entirely different forms of interactive media than its static forebears did, however. Selby-Boothroyd notes:

13https://fivethirtyeight.com/features/what-the-fox-knows/
14https://tinyletter.com/abovechart/letters/the-dataviz-pioneer-you-ve-never-heard-of

“The original ethos remains: you should always learn something interesting from the chart in just a few seconds. But you will gain even more by exploring the graphic and spending time with it, just as you would with a full page of text.”15

What this all points to is an emphasis on data literacy for both journalists and the public. This, it should be pointed out, is not necessarily critical data literacy. While data journalism has created a fresh opportunity for infovis to hone its capacity to resonate with diverse public audiences, the field’s underlying premise of objectivity - that data is more truthful than expert opinion - needs to be called into question. Leon Wieseltier, former Literary Editor at the New Republic, in a scathing rebuttal of Nate Silver’s vision following the 2014 manifesto, called Silver a prince of a new positivism, a “data mullah,” and an advocate of “intimidation by quantification.”

He dignifies only facts. He honors only investigative journalism, explanatory journalism, and data journalism. He does not take a side, except the side of no side. He does not recognize the calling of, or grasp the need for, public reason; or rather, he cannot conceive of public reason except as an exercise in statistical analysis and data visualization. He is the hedgehog who knows only one big thing. And his thing may not be as big as he thinks it is.

Wieseltier identifies a number of complex topics that data journalism would be in fits trying to answer, including “whether men should marry men”, or “whether the government should help the weak.” Wieseltier also raises the salient question of whether numeracy is “really what American public discourse most urgently lacks,” a question that I will shortly take up. Saving his most vituperative and savage critique for last, Wieseltier closes by stating “neutrality is an evasion of responsibility, unless everything is like sports,” a cruel dig at Silver’s ESPN backers.16 In Wieseltier’s rebuttal, we can recognize a pushback against data journalism by those who have been trained in critique and interpretive humanities inquiry. Are the two at odds or, like the false truth|beauty dichotomy I highlighted in chapter 3, is there room for both? Is it possible for a critical and humanistic data journalism to take shape? Is ProPublica already doing this? These are open questions for the field to address.

15https://medium.economist.com/data-journalism-at-the-economist-gets-a-home-of-its-own-in-print-92e194c7f67e

A Reflexive Turn in Visualization Design?

Within the various professional infovis communities, however, critical approaches to visualization are only now starting to emerge. Various prominent figures in the infovis world have begun to advocate for new approaches that do not merely entail new methods, but new epistemological perspectives as well. A nexus of critical, reflexive, and humanistic data visualization practices has started to coalesce. Jer Thorp, influential former Data Artist in Residence at the New York Times, for example, asks that we always look to return the abstract referent of infovis to human contexts.17 Giorgia Lupi, whose work was discussed in chapter 3, has commented extensively over the past year on the prospect for a kind of data humanism in which we recognize and acknowledge our own subjectivity in the visualization process - not just in the capture of data.18 These movements might be re-cast as a turn toward “reflexivity” in visualization design. Reflexivity was discussed in chapter 2, but it’s worth briefly revisiting what this concept is about. Used mostly in interpretive social sciences and humanities disciplines to describe how researchers and designers account for their situatedness or acknowledge the role of context, reflexivity positions the researcher within the world they are studying (Alvesson and Sköldberg, 2017).

16https://newrepublic.com/article/117068/nate-silvers-fivethirtyeight-emptiness-data-journalism
17https://www.ted.com/talks/jer_thorp_make_data_more_human
18https://www.ted.com/talks/giorgia_lupi_how_we_can_find_ourselves_in_data

Among the most prominent voices calling for a kind of reflexive shift from within the infovis and data science communities is Elijah Meeks, Senior Data Visualization Engineer at Netflix. Meeks, in numerous recent essays, blog posts, podcast interviews, and projects, has sounded the alarm for a field in flux. Last year, Meeks, concerned about the state of the industry, issued a call for data visualization professionals to describe what doing professional data visualization means to them. Nearly 1000 respondents commented on issues like whether they considered visualization to be a primary or secondary aspect of their design work.19 Among the findings from his 2017 survey, Meeks acknowledged that, although there are clearly emerging categories of data visualization professions, “visualization products are extremely conservative” and “today’s professional data visualization engineer uses theory and practice more closely resembling the conservative approach to data visualization.” These specific comments prompted heated debate between Meeks, Stephen Few, and other noted infovis figures about “frivolous engagement” and “unnecessary complexity,” subjects I have discussed at various points in the dissertation.20 Meeks went on to comment on a growing professional split that holds various parallels with the entangled tension between aesthetics and truth that I discussed in chapter 4:

There are prominent theorists and practitioners in data visualization that sim-

ply do not believe there is such a thing as a dedicated data visualization role

19Meeks discussed the survey here https://medium.com/@Elijah_Meeks/2017-data- visualization-survey-results-40688830b9f2 and has issued a follow-up here https: //medium.com/@Elijah_Meeks/2018-data-visualization-survey-results-26a90856476b. 20See https://medium.com/@few.stephen/elijah-6af896211fc2. Chapter 6. Information, Not Representation 204

Figure 6.2: A spectrum of visualization roles from Meeks’s essay on why people are leaving the field.

in industry. For those critics there is no profession, only a skill used near the

end of a long process performed by scientists, analysts and engineers. In con-

trast, there’s a celebratory data visualization community that gathers for the

Information is Beautiful Awards and looks to people like David McCandless as

a thought leader. The more serious are in or allied with journalism, the more

exotic might call themselves artists, and the freelancers and consulting firms

that dominate this area might see themselves as a bit of both. In their case,

catching an audience in an attention economy is a prominent requirement of

their data visualization work.21

In his comments on this year's survey results, Meeks noted a specific absence: "What the survey does not explain and what I would love to hear about is what is meant by 'design' to people in data visualization. Not just whether it falls into one of the big schools, like graphic design, experience design or information design but what are the practical steps one takes in designing data visualization."22 He believes that it is a misconception to treat data visualization as an engineering problem when it remains primarily a design concern. Communication and design, rather than engineering, should be the focus of visualization professionals, Meeks argues. This, I believe, is where the visualization community stands to gain a great deal from scholars working at the intersection of HCI and STS who have advocated for reflexive approaches in design.23 But it is worth noting that missing from Meeks's survey is any sort of question about critical practice. Reflexivity, in a design context, entails self-critique. Conservatives in the visualization community will claim that reflexivity is already commonplace, in the form of practices like critique by redesign,24 but such practices are typically not self-reflexive. Moreover, they reinforce a kind of staid critique of form. Says Meeks: "We're good at criticizing pie charts and 3D bar charts but we tend to quail at deep criticism of technically sophisticated solutions that don't address the core needs of data visualization as a communication medium."25 There is a question of literacy baked into this problem, Meeks notes: "The meritocratic approach only works if everyone has equal capacity to judge the ideas offered up in the marketplace of ideas. If that literacy isn't there, the natural tendency will be to support conservative data visualization." In a very recent piece outlining what he calls a "third wave," Meeks echoes many of these concerns in a section that calls for working "to develop our community to be a place to give, receive and model critical discourse."26 What he is calling for - new visualization epistemologies, in effect - will require a period of adjustment, and periods of adjustment produce epistemic anxiety (Drucker, 2014, p. 8).

This trend raises some interesting questions. How should a reflexive infovis take shape, and what should it look like? Is reflexive infovis the same thing as critical or political infovis? Are its ethical concerns the same as those being raised in the broader emerging discipline(s) of data science? Can quantitative|qualitative and empirical|interpretive entanglements coexist harmoniously in this space? What kind of literacies are necessary to promote reflexive practice? Finally, is reflexivity enough?27

22. https://medium.com/visualizing-the-field/why-people-leave-their-data-viz-jobs-be1a7ab5dddc
23. Phoebe Sengers, Paul Dourish, and Carl DiSalvo all come to mind.
24. Critique by redesign, discussed in chapter 3, entails both criticism of the design work of other practitioners - often making claims about their epistemic commitments - as well as criticism of data visualization products.
25. https://medium.com/visualizing-the-field/strategic-innovation-in-data-visualization-will-not-come-from-tech-4c1f7379ae39
26. https://medium.com/@Elijah_Meeks/3rd-wave-data-visualization-824c5dc84967
27. We might draw on Karen Barad to address this question. See Barad (2007, pp. 86-94) for her arguments in favour of diffraction and agential realism over reflexivity.

Visualization Fluency

Interaction with data has become a commonplace aspect of engaged citizenship. Voting in a civic election, for example, typically entails familiarizing oneself with housing and demographic statistics through interactive visualizations. Selecting a school for one's children requires interpreting complex metrics composed of standardized test results, STEM funding figures, and infrastructure investment scores. Even choosing whether to get a flu vaccination, a decision that is no longer really personal, can mean having to engage with maps and charts that would once have been the domain of professional epidemiologists. While these trends are celebrated by the many among us who advocate for expanding participatory channels in civic experience,28 others have rightfully expressed concern about the complicated dimensions of balancing increased access to civic data with comprehensive digital literacy initiatives.29 Visual and graphical literacy are essential aspects of this knotty calculus. If engaged citizenship increasingly necessitates being able to interpret civic data through interfaces such as city dashboards and open data portals, then data literacy programs should support diverse populations to both interpret data and develop critical perspectives on data representation.

28. See, for example, D'Ignazio and Bhargava (2015).
29. See Farina et al. (2014).

Effective data representations are not only used to assist decision making, support querying, or reduce cognitive load. They also ground conversations, communicate policy ideas, and substantiate arguments about important civic issues. At the same time, they can be (and often are) used to persuade and mislead. From truncated axes to scale manipulation to cherry-picked data, the practice of graphical deception is regularly employed by politicians, activists, and professionals of various stripes. The stakes for being unable to read a chart can be incredibly high. Imagine basing a life decision, such as the purchase of a home, on misleading data. Or making an everyday decision, like choosing a traffic route, without having developed the capacity to understand the UI of dynamic digital mapping platforms. Expanding statistical, graphical, digital, and media literacy is a necessary foundation for any kind of critical data culture and, as such, has become an important site of research, development, and scholarship.
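The truncated-axis trick, in particular, is easy to demonstrate. The sketch below - a hypothetical example with invented numbers, assuming the matplotlib library is available - plots the same two values twice, once with a zero baseline and once with a truncated one, so that a two-point gap appears cavernous:

```python
# The data are identical in both panels; only the y-axis baseline changes.
# All values are invented for illustration.
import matplotlib.pyplot as plt

labels = ["Candidate A", "Candidate B"]
values = [96.0, 98.0]

fig, (honest, deceptive) = plt.subplots(1, 2, figsize=(8, 3))

honest.bar(labels, values)
honest.set_ylim(0, 100)  # zero baseline: the ~2-point gap looks modest
honest.set_title("Honest axis")

deceptive.bar(labels, values)
deceptive.set_ylim(95, 98.5)  # truncated baseline: the same gap looks enormous
deceptive.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```

Nothing in the data changes between the two panels; only the framing does, which is precisely what makes this form of deception hard to legislate against and, happily, easy to teach.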

Data literacy initiatives, ranging from STEM and digital literacy after-school programs to new research funding mechanisms, have proliferated over the past decade. Almost without exception, they claim to promote the growth of data literacy skills, and the distribution of these skills across income, racial, and gender divides has become a key topic of research in fields like information, critical data studies, and STS. Who are the participants in these expanded models of literacy and modes of engagement, though? Is it the general public that needs to build critical data and visualization literacy? The researchers who study and make arguments about the ethical issues associated with big data, machine learning, etc.? Or is it the designers and engineers who develop the systems on which data visualization is deployed?

Is there a minimum level of literacy for each of these communities? Is literacy even the most appropriate concept here? A host of problems and new questions arise from the recent emphasis on visualization literacy in the professional infovis world. Michael Correll, a research scientist at Tableau, has illuminated an important concern related to the increasing use of the term "literacy" in the context of the public's understanding of infovis.30 Correll worries that this term carves its audience (or subject population) into two camps: the literate and the illiterate. Acknowledging the challenge of even trying to outline the vast skill set required to parse visualization, he asks: "Is visualization literacy our ability to 'decode' charts? Our exposure to novel forms of visual presentation? Is it a sub-type of numeracy, or just correlated with it?" How much of each of these capacities is required (and can even be measured), we might ask? Are statistical and visual literacy the most necessary components, or are there other literacies that deserve equal space? Will intro-level stats knowledge suffice? "Do I need to be cognizant of all the many ways that charts and their underlying data can mislead," Correll asks, "or is that skepticism distinct from interpretation?" Does an analyst drawing on visualization require enough art historical knowledge to understand where many of the tropes I described in chapter 3 come from (e.g. Gestalt psychology and modernist design)?

Correll raises two additional crucial issues, each of which is key to the arguments I have made in this dissertation. The first is that the "visual language of charts is not innate or fundamental, but shaped by exposure," a theme that strongly influences the second issue: that a binary understanding of literacy positions us to reject new design types and forms due to how unnatural they might seem.

30. https://medium.com/multiple-views-visualization-research-explained/what-does-visualization-literacy-mean-anyway-22f3b3badc0

If, as I suspect, the process of "figuring out" a particular genre of chart is one that involves training or acclimatization or even repeated cultural exposure (as opposed to an all-or-nothing 'decoding'), then we may try out designs that appear to have very little in the way of quantitative empirical benefit at first blush, but could be extremely useful once they have been established within some broader visualization milieu.31

That's an especially salient point in light of the growth of new interaction modalities for visualization (e.g. 3D visualization).

31. https://medium.com/multiple-views-visualization-research-explained/what-does-visualization-literacy-mean-anyway-22f3b3badc0

This question about our capacity to read and interpret increasingly complex graphs underlies the following sections, which are unified by their concern with how to engage the public around futures that include technologies they don't fully understand. Numerous recent initiatives have attempted to carve out a curriculum, if not a full program, for data literacy. Beyond digital literacy, data literacy is seen as a core "21st century" skill, but this is a far more complicated and nuanced subject area than many literacy advocates are willing to admit. Recent attempts to outline scholarly research and public programs for data/information literacy, graphical/visual literacy, and statistical/numerical literacy, alongside efforts to establish an expanded digital literacy curriculum, frequently miss an important aspect: few are reflexive, critical, or even cautious. Typically, essays and mission statements will call for greater attention to the sources of data, more holistic methods, or elucidation of the ethical contexts of data-driven environments. But what are the larger skills that critical visualization pedagogy can teach? The following sections develop arguments about where I feel critical visualization (and data) literacy initiatives are most sorely needed, and where I feel my work on denaturalization can have its greatest impact going forward.

Radical Graphicacy and Technological Fluency

Graphicacy is a term with a long and complicated history. It refers to the ability to both understand and use graphical information (e.g. charts, graphics, maps, and related diagrams). First coined in 1965 by geographers William Balchin and Alice Coleman in an article they wrote for the Times Educational Supplement,32 the term experienced its heyday between the mid-1980s and early 2000s, when it was the subject of numerous research programs in educational psychology. Graphicacy is deeply interwoven with another concept, numeracy, which has an equally long and problematic history in education research. While graphicacy is not exactly coming back into vogue, certain influential scholars in the visualization community (e.g. Alberto Cairo) have been using it in recent years in lieu of visualization literacy.33

Because we can assume that most visualization researchers and designers will have an acquired level of graphical knowledge, graphicacy is really a measure of visualization literacy for a public audience. So, what might the principles of a radical, critical visualization literacy/graphicacy look like when applied to the public? If we take critical to mean, in the Frankfurt school sense, seeking human "emancipation from slavery," acting as a "liberating influence," and working "to create a world which satisfies the needs and powers" of human beings (Horkheimer, 1972, p. 172), then we might associate it with another liberatory term: radical, whose etymological origin, the Latin radix ("root"), speaks to a need to develop something new - in this case, a new program that can take root.

Scholarship in the areas of critical data studies (boyd and Crawford, 2012; Dalton and Thatcher, 2014) and critical visualization (Drucker, 2011; Hall, 2008) has established the necessary foundations for an alternative to the purely technical approaches we often find in standard data science curricula.34 This alternative will need methods that combine the social and the technical, the humanistic and the instrumental, and will need to be grounded in both interpretive experience and direct, hands-on engagement with data.

32. See Balchin and Coleman (1965).
33. https://www.urban.org/events/alberto-cairo-misleading-data-and-visualizations
34. One noted example is Carl T. Bergstrom and Jevin West's new "Calling Bullshit" course at the University of Washington's iSchool. Its aim is to teach students "how to think critically about the data and models that constitute evidence in the social and natural sciences." See http://callingbullshit.org/.

What I suggest is necessary, then, is a re-framing of data and visualization literacy into a program that combines discursive and critical hands-on material production practices - a combination that brings together social and technical concerns and forges local pathways into the issues, techniques, and practices of data-driven life. The strategies I propose encourage audiences to understand data by thinking with their hands (Sennett, 2008), counteracting the elegant lull of what Bret Victor refers to as “pictures under glass.”35

What does this hands-on "data making" look like in practice, or in workshops I've designed for a public audience? There are a number of distinct stages, beginning with a discursive move - a prompt. The prompt should be deliberately vague - something like "search for patterns of immediacy in the distant" - without clarifying specifically what is expected from it. This breaks from the tendency to insist on the defined outcomes so often encountered in data science curricula. This discursive move is followed by a material one. Participants are expected to build or reconfigure a data capture apparatus. This emphasizes shared acts of making (Ratto, 2011b), rather than evocative objects, as well as critical reflection on the technologies and practices involved in building data capture technologies. This is making writ large - not the digestible, outcomes-focused making of the so-called maker movement. It includes repurposing, reverse engineering, crafting, and sculpture, as well as electronics programming. Similar to speculative design approaches, this may involve invoking new skills, including "learning how to use tools and applications for the production of multimodal forms of expression, techniques for rapid prototyping, processes of iterative experimentation, and skills in social negotiation and integration" (Balsamo, 2009). The process of constructing a data capture apparatus serves as an opportunity for reflection on collaborative data collection practices, collective empiricism, social phenomena like self-tracking, and the temporality of data capture. In this context, it should be guided by the following questions: What are sources of data that are immediately available to us? What can be turned into data? What resists capture?

35. "Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade. Pictures Under Glass is an interaction paradigm of permanent numbness. It's a Novocaine drip to the wrist. It denies our hands what they do best. And yet, it's the star player in every Vision Of The Future." See http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign/.

The next step requires collecting data from messy sources. This might include qualitative observations that haven't been time-stamped. It could be a huge pile of indecipherable accelerometer data with no clear point of origin, captured over a serial connection and dumped into a .txt file that a laptop might struggle under the weight of. These are not the perfectly scrubbed .csv files that you get from civic open data warehouses or software templates - even though those provide fodder for the question: how does data get so clean? I strongly advocate against the self-referential datasets that turn up in technical curricula valorizing Silicon Valley startup culture (e.g. the "volume of daily tweets" data, or the Google valuation data, or the classification of Amazon books and Netflix movies). Data munging is a crucial part of the process, and it should be confusing, at least initially. It should also have an eye toward what algorithms occlude.

What does the human notice? What does the human care about? Are these things available to capture? In the conjoined processes of building a data capture apparatus and collecting and cleaning messy data from it, there is an opportunity to pay attention to the materiality of data making. What is the data physically made of? Does it have a duration? A location? Is it ephemeral or embodied?
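To make this stage concrete, here is a minimal sketch of a first munging pass, assuming a hypothetical accel_dump.txt file of comma-separated "x,y,z" readings captured over a serial connection. Rather than silently discarding malformed lines, it keeps them, since what resists capture is itself pedagogically interesting:

```python
# A first munging pass over a messy serial dump: parse what we can,
# and keep a record of every line that resists parsing.
from pathlib import Path

def parse_dump(path):
    readings, rejects = [], []
    for line in Path(path).read_text().splitlines():
        try:
            x, y, z = (float(p) for p in line.strip().split(","))
            readings.append((x, y, z))
        except ValueError:
            rejects.append(line)  # truncated rows, boot messages, line noise...
    return readings, rejects

readings, rejects = parse_dump("accel_dump.txt")
print(f"{len(readings)} readings parsed; {len(rejects)} lines resisted capture")
```

In a workshop setting, the rejects list is often where the most interesting conversations begin: it holds everything the apparatus produced that the analytic frame could not accommodate.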

This stage requires paying attention to the local, which means recognizing that there is no such thing as universal data, despite what standard data science curricula commonly claim. Yanni Loukissas has described how data are marked indelibly by local artifacts, local knowledge, and local significance, no matter how far they travel. Local reading, a technique he adapted from the humanities method known as close reading, suggests examining how isolated features produce meaning. Local readings act as a counterpoint to visualization in that they reveal the heterogeneity of data before they are processed. For students, learning about this heterogeneity can help dispel the illusion that any data can offer what Haraway calls the view from nowhere (Loukissas, 2016a,b). While I am not offering a definition of big data, and recognize that, in common parlance at least, big data seems to be a placeholder for all data, the datasets I advocate generating (or using) are far from the n = all descriptor that tends to characterize big data. They do not necessarily approach the velocity of big data, or encourage the omniscient perspective of big data analytics. But I believe that wrestling with the local, the intimate, the disaggregated, and occasionally the incomprehensible can open portals into the big, the aggregated, and the flattened.

The final move is a performative one. Distant readings of data can be as fruitful as close ones, so I recommend aggregating and layering datasets and visualizations in a search for duplicates, parallels, and comparisons. This is not necessarily the algorithmic clustering one would find in contemporary data science curricula. We must ask: where are the breaks and deviations that human interpreters tend to notice only visually? Luciano Floridi's "search for small patterns" comes into play here (Floridi, 2012), as it can provide a base from which students or workshop participants might articulate unique and novel interpretations by presenting data through creative forms like narrativization and sculpture, forcing them to consider why the data makes sense to them... or doesn't! This leverages their prior knowledge about data visualization, which can enhance their ability to comprehend and recall important characteristics of the data they are working with (Kim, Reinecke, and Hullman, 2017). It also forces them to consider the relationship between presentation and representation. Representational practices involved in creative data rematerialization, including digital sculpting and smoothing, draw parallels with the image retouching practices of an earlier era, providing an opportunity to consider the epistemic value of objectivity and the subjective role of the interpreter (Daston and Galison, 2007, p. 53). This is an opportunity to unsettle one's habit of imagining abstract data as immaterial. Data is never immaterial.
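As a small illustration of the layering move, the following pandas sketch (with invented data and column names) aligns hand-logged observations with sensor readings on a shared key and sorts by their disagreement, surfacing the breaks and deviations that then become prompts for interpretation rather than errors to scrub away:

```python
import pandas as pd

# Invented inputs: hand-logged counts and sensor-derived counts per hour.
observed = pd.DataFrame({"hour": range(6), "count": [2, 3, 9, 4, 4, 3]})
sensed = pd.DataFrame({"hour": range(6), "count": [2, 3, 3, 4, 5, 3]})

# Layer the two sources on the shared key and measure their disagreement.
layered = observed.merge(sensed, on="hour", suffixes=("_observed", "_sensed"))
layered["deviation"] = (layered["count_observed"] - layered["count_sensed"]).abs()

# The largest deviations are where interpretation should begin.
print(layered.sort_values("deviation", ascending=False).head(3))
```

The design choice matters: nothing here decides which source is "right." The deviation column is an invitation to ask what the human logger noticed that the sensor occluded, or vice versa.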

I want to emphasize that this is not an empirically tested, validated pedagogical model by any means. I have implemented bits and pieces of it in various classes that I've taught, and have used it as the foundation for visualization workshops I've led, including one at the 2016 iConference (Resch et al., 2016). Among the challenges that come with envisioning a program like this as a long-term research agenda, there are important structural questions to be considered. When is the right age to introduce critical data visualization concepts? In a data-driven society, is it ever too soon? And is there an appropriate population to develop curriculum for? Should it focus on single participants or group participation? Recent work on ongoing, participative visualization initiatives indicates that collective literacy work might prove more successful than targeted, individual projects.36 Scalability is also important, as such a pedagogical model should be applicable over the course of 3 hours or 3 months or 3 years. Finally, we must consider whether there is an appropriate site for this sort of pedagogy. Within academia, should it be initiated through undergraduate courses or through graduate seminars where these deeper epistemic issues can be addressed? Outside of academia, are informal spaces like libraries, science centres, or even hacker/makerspaces and hackathons the appropriate location? These are open questions. I hope to find answers to some of them through a master's course on critical and human-centred approaches to infovis that I developed and will soon be teaching at the University of Toronto's iSchool.37

Up to now, I haven't really addressed the question of whether literacy is the best lens through which to approach these issues. Beyond helping affected audiences develop critiques of new technologies, responses to ethical concerns, etc., the program I am proposing actually has a much longer horizon: its mandate is the development of technological fluency. Literacy aligns with existing digital programs and initiatives, which makes it an easier concept to operationalize and scale. But technological fluency, according to Jonathan Lukens and Carl DiSalvo (2012), "in contrast to literacy, affords creativity." It gives those who develop it not only the ability to read and decipher a graph, but the capacity to take it apart, to know why the data is trustworthy (or not), and then to imagine new ways to represent it and persuade additional audiences with it - "to understand, use, and assess technology beyond its rote application." Focusing on fluency, I argue, is a better approach to bridging the widening gap between audience and developer.

36. http://www.visualisingdata.com/2014/02/the-success-of-participative-visualisations/
37. https://ischool.utoronto.ca/course/special-topics-in-information-studies-critical-and-human-centred-approaches-to-information-visualization/

Critical Information Practice

Truly critical visualization literacy and/or fluency initiatives must also aim to advance a professional program for critical engagement with data. Here, the focus is not on public audiences or end users of visualization products, but on the designers, developers, and companies that build them. I use the term "critical information practice" to frame an approach that might facilitate an alternative politics of data and visualization through grounded practices rather than data science fundamentals. Critical information practice, as I propose it, weaves together threads from a handful of related theoretical approaches and literacy programs. The term critical sociotechnical literacy has been used to describe the ability to navigate ethical questions around sociotechnical issues like automation and algorithmic life. It extends the concept of media literacy (lately out of vogue), which describes the ability to parse highly constructed, mediated images, as well as related concepts of textual literacy that address issues like bias in printed media. Scholars like Catherine D'Ignazio have worked extensively in the area of critical data literacy, which Alan Tygel and Rosana Kirsch have outlined as "the set of abilities which allows one to use and produce data in a critical way." The rubric they propose includes the ability to read data; process and manipulate it; communicate with and about it; and produce it (Tygel and Kirsch, 2016), but it remarkably says nothing about the ability to creatively interact with data, a crucial aspect of fluency. Of note, their definition explicitly includes the non-expert as well as the expert. Finally, we have the slightly more developed area of critical information literacy. Within the interdisciplinary field of big-I Information, which encompasses data science, library and information science, sociology of information, and a number of other cognate disciplines, initiatives for critical information literacy have focused on helping educators - including both academics and librarians - foster critical consciousness in students (Elmborg, 2006). In recent years, this has specifically come to mean critical consciousness about information and its various effects on (and in) society.

Renowned Brazilian education theorist Paulo Freire's influence on each of these literacy programs is markedly felt, especially in their positions against positivist and authoritarian discourses of data and information. But we must return to the question of whether "literacy" is an adequate umbrella to align these concerns under. The notion of "practice" rejects the binary framing by which literacy seems to be determined. Effective pedagogy must encourage, inculcate, and facilitate practice(s), rather than simply transmit or develop literacy skills. If we can assume that data designers are already literate in the various technical skills their professions require (e.g. statistics; graphic design), then the term "practice" becomes a way to describe their orientation, their epistemic values, and their commitment to ongoing engagement with an audience.

Critical information practice, as I propose the term, places an emphasis on critical and reflexive material production with data. It draws its philosophical inspiration from Phil Agre's critical technical practice, which encourages adherents to accept a kind of epistemic anxiety and dislocation when they challenge their own internalized biases. Agre described an orientation that entails developing a split identity, with one foot planted in the craft work of design and the other foot planted in the reflexive work of critique. He advocated for a praxis of daily work: forms of language, career strategies, and social networks that support the exploration of alternatives. Critical information practice, like Agre's program, entails more than just a method or a toolkit or even a curriculum - it is characterized by adherents publicly stating their commitments. This is an orientation that doesn't treat information volume as a technical or epistemological problem, but as an opportunity... an opportunity to get lost, to descend into the maelstrom of big data, and to move between the different velocities of the outside and the middle. Agre described the sense of vertigo he felt when he began to apply theories that were foreign to his disciplinary background: "I still remember the vertigo I felt during this period; I was speaking these strange disciplinary languages, in a wobbly fashion at first, without knowing what they meant - without knowing what sort of meaning they had... in retrospect this was the period during which I began to 'wake up', breaking out of a technical cognitive style that I now regard as extremely constricting" (Agre, 1997). Critical information practice moves this sense of reflexive disequilibrium from the production, assessment, and consideration of new technologies to the understanding of how new information media might impact social affairs. It treats vertigo, like denaturalization, as a design strategy.

There are parallels to a number of related programs, including critical engineering and tactical media, but the design emphasis in critical information practice is placed on data representation. It takes critical making as a methodological orientation, and uses denaturalization as a counterfactual design strategy. Counterfactual design entails asking "what might have been" or posing "what-if" prompts at various stages of an iterative design process. These sorts of prompts can be used to derive new visual design approaches (e.g. "what if we choose a different visualization form"), provoke important ethical questions about how an audience might receive a visualization product (e.g. "what will happen if we choose to add a text or narrative annotation to a visualization"), and even imagine extreme outcomes (e.g. "how would this visualization product differ if we designed it with evil or nefarious intentions in mind").38 It can also prompt designers to regularly check in and state their commitments, epistemic biases, and goals.

38. The latter question is spurred by Alexis Hiniker's "Designed for Evil" course at the University of Washington, as well as a Twitter thread that suggested anyone collecting personal data should ask "what would the worst people do if they got hold of this?" The thread can be accessed here: https://twitter.com/eey0re/status/970144255745212416.

While there are distinctions between the concepts of "data" and "information" that could be brought to bear on this program - why information and not critical data practice, for example - there are various reasons for approaching this through an informational lens rather than a data-specific one. The data-information distinction matters as a discursive phenomenon. There is not the space here to consider all the reasons to focus on information, but an admittedly trivial one is that it gives us an opportunity to counteract the reductive tendency to consider our contemporary media condition as exclusively "data-driven," as if it could somehow be divorced from the information and communications technologies along which digital media moves.

Johanna Drucker, in a recent email conversation discussing the development of critical data visualization practices, suggested the following: "All theoretical formulations have to find their way into the stuff and matter of activity and action... critique assume(s) distance, separation between a critical subject and a to-be-critiqued object, an outside-ness, a separation, that seems impossible to reconcile with the complicity and connectedness of ourselves with the world. After critique comes engagement, but not on terms of moral superiority and distance, rather, from within the conditions of our own complicity and ignorance."39 As Agre once immersed himself in Foucault, today's data engineers might also. They might learn to read an archive as Derrida would have. To unsettle the foundations of reason like Lorraine Daston. And to not feel threatened by poststructuralist critiques of the contemporary data science industry-cult.

39. http://empyre.library.cornell.edu/phpBB2/viewtopic.php?t=1258&sid=e635c9f2858b61045f396bc61f287948

Responsibility

Beyond the two groups I've discussed so far - the general public, and the data designers and engineers who build and work with visualization technologies - there is a final group that needs to develop data visualization fluency. Social scientists and humanists must learn the requisite technical skills - in fact, could develop a kind of data-centric material practice - if they are to adequately interrogate data and its various representations. This extends beyond learning a few Tableau templates or running some sample code without having any clue what it does. It means always being on the lookout for interesting datasets - asking that question "where is the data around us?" - and building capture apparatuses while critically evaluating their empirical and social outcomes. It means visualizing data in ways that might require narrativizing a dataset's meaning, taking a subjective, interpretive stance and willingly putting it on the line for others to critique. And it also means likely having to learn what NumPy and Pandas can do, or getting through the entire MapReduce Wikipedia entry. And it will be confusing, surely, but over time those who do might cultivate practices that will allow them to better consider their responsibilities as data interpreters - to become comfortable with the hybrid identity of information critic-cum-information technologist.
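For readers wondering what "getting through the MapReduce entry" might amount to in practice, the paradigm fits into a dozen lines of plain Python - a toy word count, with no cluster and no framework, just the map, group, and reduce phases that distributed implementations scale up (the sample records are, of course, invented):

```python
from collections import defaultdict

def map_phase(record):
    # Emit a (word, 1) pair for every word in a record.
    return [(word, 1) for word in record.lower().split()]

def reduce_phase(key, values):
    # Collapse all counts emitted for one key.
    return key, sum(values)

records = ["data is never immaterial", "data is never raw"]

grouped = defaultdict(list)  # the "shuffle": group emitted values by key
for record in records:
    for key, value in map_phase(record):
        grouped[key].append(value)

counts = dict(reduce_phase(k, v) for k, v in grouped.items())
print(counts)  # {'data': 2, 'is': 2, 'never': 2, 'immaterial': 1, 'raw': 1}
```

Working through even a toy like this gives the humanist interpreter a concrete sense of what the aggregation step keeps and what it discards - exactly the kind of occlusion that critical data work needs to be able to name.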

Researchers in the humanities and social sciences who are interested in building a rich discussion around critical data topics have both an opportunity and a responsibility to expand the field of infovis, not simply to deconstruct it. If critical visualization is going to be taken seriously as a concern, it is going to have to do more than just critique. Its practitioners are going to have to work with and understand visualization, contribute to technical development in the field of visualization, etc. There are already enough determinist invitations to resist data-driven life. We will be better served by questioning our complicity in the production of new sociotechnical systems and, if we find fault with them, building alternatives. That is one path. Another lies in joining the various recent conversations about ethics in data science that have raised concerns about privacy, data power, bias in machine learning, and a host of related themes.40 Many of these discussions have yet to consider how the public's interface to data - visualization media - is an important part of this conversation.41 At the moment, only a small number in the data science community are focusing their attention on interpretable graphics, challenging graphical misrepresentation, etc.42

40. Much of the recent conversation was initiated by a call for a "code of ethics," similar to a Hippocratic oath, that data scientists could pledge adherence to. It was written by DJ Patil, the former Chief Data Scientist of the United States under President Obama, in early 2018. Patil's statement, which generated a voluminous amount of social media debate, can be found here: https://medium.com/@dpatil/a-code-of-ethics-for-data-science-cda27d1fac1. A comprehensive overview of many of the following initiatives can be found here: https://www.fast.ai/2018/09/24/ai-ethics-resources/.
41. See, as an example, https://www.oreilly.com/ideas/of-oaths-and-checklists, which suggests that "checklists will help put principles into practice" and gives some example checklist items, but doesn't really say anything about visualization. Also see this reusable ethics checklist for data science projects: http://deon.drivendata.org/.
42. E.g. researchers working on Google's Distill project: https://distill.pub/about/.

Such efforts might be framed in terms of "data justice," a popular recent catch-all for work that connects social justice and data imperatives, both within academia and beyond.43 Outside of the data journalism and political infovis communities, there are a variety of public initiatives that interpretive social scientists, humanists, and critical technologists might take part in.44 Data justice is not just about access, critique, or literacy. If the motivation that aligns most data justice projects is to overcome the power of algorithms to determine aspects of our lives, then critical visualization literacy - the ability not only to interpret graphs, but to both construct and creatively manipulate them, within a broader application of critical information practice - needs to be recognized as a core component of data justice. Creative data representation is a data justice method!

43. Examples of interesting projects include Alessandra Renzi's work on "counter-modeling," which is described here: https://www.onlineopen.org/entangled-data-modelling-and-resistance-in-the-megacity; and Rahul Bhargava and Catherine D'Ignazio's Data Basic suite of tools, which are described here: https://databasic.io/.
44. These include Viz for Social Good: https://www.vizforsocialgood.com/ and Data for Democracy: http://datafordemocracy.org/index.html.

Florence Nightingale, according to her own account, was moved to study statistics and invent new graphical techniques to compel others to action after bearing witness to "so much of the misery and sufferings of humanity, of the irrelevance of laws and governments." Voting is often seen as our highest civic duty, our responsibility to maintain democratic representational governance. It is held up as the greatest virtue of public engagement. What if, however, in a data-driven world, learning to read and creatively use the medium of data has become a higher duty? What social and political justice issues might visualization play a significant role in? Arguably, the most important is global warming, and many critical data researchers have turned their attention to issues like the protection of government-funded scientific data.45 Amanda Montañez, an associate graphics editor at Scientific American, has suggested that "we who have the forum to create and distribute science content should approach each project with the full reality of Trump's presidency and the danger of anti-science rhetoric in mind. Information graphics will play one of many roles in this fight."46 A truly critical information practice, then, will need to have an interventionist bent.47 But we must not only intervene in the lives of those who choose to actively engage with data. We must be committed to intervention in the broader society. Critical sociotechnical researchers who choose to use data visualization as a creative medium of engagement can't simply delegate responsibility. We will need to acknowledge that making and intervening makes one complicit in the production of new worlds.48

45. A great example is work by the Environmental Data & Governance Initiative (EDGI), a multi-institution collaboration that a number of my colleagues are involved with. Among other efforts, EDGI has staged a handful of DataRescue events in which they have brought together scientists, coders, librarians, and volunteers to identify and preserve at-risk datasets in collaboration with the Internet Archive. More information can be found here: https://envirodatagov.org/datarescue/.
46. https://blogs.scientificamerican.com/sa-visual/how-science-visualization-can-help-save-the-world/
47. This is inspired by a recent STS workshop at the Munich Center for Technology in Society, Technical University of Munich, which claimed that "most (STS) research remains distanced and detached, focusing on the traditional genres of description and explanation" and asked what would happen "if we - as researchers - commit to intervention," wondering how we can be immersed participants (Bellacasa, 2017) while producing "concrete guides of how to perform interventions." More information about this workshop can be found here: https://www.dests.de/news/3447.
48. See Latour (2004, 2008) and Ratto (2016).

Conclusion

If, as critical scholars, we are to build these new worlds and locate new sites of meaning within them, perhaps it is worth getting past the sense of epistemic anxiety they will induce among members of the public, professional, and academic infovis worlds. There is already a society-wide anxiety around the role of data and information technologies in re-shaping social activity. This condition is akin to the "vortical tension" of the late 1960s that Joan Didion (1990, p. 41) famously described, and shares features with the maelstrom of technology anxiety that McLuhan built his later career around. Over the past decade, countless deterministic claims about the promise of big data, data-driven governance - data-driven everything! - have been bandied about by scholars, so-called digital prophets, and Silicon Valley venture capitalists. Some have come true. Many have not. Others may still. In 2008, former Wired editor Chris Anderson, no stranger to techno-utopian hyperbole, described big data as a death knell for the scientific method. It would, he claimed, hold the potential to produce more accurate and insightful scientific results than the findings of countless domain specialists who experiment, observe, and rely on their tacit expertise (Anderson, 2008). Anderson's claims caused a firestorm of rebuttal. More recently, in a widely circulated Forbes article calling for an expanded data literacy curriculum, "serial entrepreneur" H.O. Maycotte, CEO of a data rights management company at the time, called big data a "disruptor that waits for no one," suggesting that relying on the next generation to take up the mantle of data literacy will leave us far behind competitors who embrace data literacy now (Maycotte, 2014). Who the "us" is in Maycotte's equation is unclear - big business, government, the public, universities... probably all of the above. According to Maycotte, we can be proactive. Rather than wait for universities to expand data literacy curricula, societies might invest in intuitive technologies that can simplify the vast complexities of big data, abstracting away the data scientist's knowledge of where to begin navigating the mess of data buried behind slick graphical interfaces and how to apply analytic strategies to it. This is black-boxed sifting versus tacit knowledge, and it sounds like exactly the kind of race a data management company CEO would have a horse in.

Providing a counterpoint, we have technology critic Evgeny Morozov, who recently called big data a harbinger of the death of politics, warning about futures in which technocrats endlessly promote data-driven governance and algorithmic regulation (Morozov, 2014). Morozov's alarmism doesn't provide a clear path forward, other than to declare that critique must be grounded in technical and social strategies, as he reminds us that Silicon Valley already writes the terms of engagement upon which critique will be met.

In order to effectively contend with data as a social phenomenon, we don't just need new pruning techniques and aggregation methods - we need a fundamentally different kind of data literacy than is currently being promoted. The sort of data pedagogy that turns up in data science and visualization curricula has been inherently biased toward separating technical and social concerns and methods. Too often, students are introduced to data concepts and ideas through sanitized examples, black-boxed algorithms, and standardized templates for graphical display. Meanwhile, the models they rely on conveniently ignore the social and political implications of data in areas like healthcare, journalism, and civic governance. It seems there can be technical data science curricula and non-technical information studies curricula, but rarely the twain shall meet.

This chapter began with an epigraph from Gilles Deleuze, who claimed that representation has a "false depth" - that it "mobilizes and moves nothing." I don't pretend to engage deeply with Deleuze's rich anti-representationalism here. That will have to wait for another multi-year project. But I'd like to appropriate his statement and close the dissertation by calling for further study, design, and development toward representation that mobilizes and moves. What I am advocating for is the formation of information visualization cultures that actively seek to make a difference, that interfere with and intervene in the world. The approaches I've outlined in this dissertation - methods that afford new scale perspectives, move between the material and the digital, transect new media and old, and engage a hybrid sensorium - all work toward this goal. Taken together, as a program for denaturalizing information visualization, they aim to bring greater depth to our collective information practices and the infinite regress of relationships between objects, interpretation, and meaning by which they are defined. Without depth, information is nothing more than empty data.

Bibliography

Agre, Philip (1997). “Toward a critical technical practice: Lessons learned in trying to

reform AI”. In: Bridging the great divide: Social science, technical systems, and coop-

erative work. Ed. by Geoffrey Bowker et al. Mahwah, NJ: Erlbaum, pp. 131–157.

Alač, Morana (2011). Handling digital brains. Cambridge, MA: MIT Press.

Alpers, Svetlana (1983). The art of describing: Dutch art in the seventeenth century.

Chicago, IL: University of Chicago Press.

Alvesson, Mats and Kaj Sköldberg (2017). Reflexive methodology: New vistas for quali-

tative research. London, UK: Sage.

Anderson, Chris (June 2008). The end of theory: The data deluge makes the scientific

method obsolete. Wired. url: http://www.wired.com/2008/06/pb-theory/.

Artaud, Antonin (1977). Le théâtre et son double. Paris, FR: Éditions Gallimard.

Balchin, William and Alice Coleman (1965). “Graphicacy should be the fourth ace in the

pack”. In: The Times Educational Supplement 947.

Balsamo, Anne (2009). “Design”. In: International Journal of Learning and Media 1.4,

pp. 1–10.

Barad, Karen (2003). “Posthumanist performativity: Toward an understanding of how

matter comes to matter”. In: Signs: Journal of women in culture and society 28.3,

pp. 801–831.

– (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter

and meaning. Durham, N.C.: Duke University Press.

225 BIBLIOGRAPHY 226

Barad, Karen (2014). “Diffracting diffraction: Cutting together-apart”. In: Parallax 20.3,

pp. 168–187.

Bateson, Gregory (2000). Steps to an ecology of mind: Collected essays in anthropology,

psychiatry, evolution, and epistemology. Chicago, IL: University of Chicago Press.

Battista, Andrew and Jill A Conte (2017). “Teaching with data: Visualization and In-

formation as a critical process”. In: Critical library pedagogy handbook. Ed. by Nicole

Pagowsky and Kelly McElroy. Vol. 2. LIS Scholarship Archive. Chap. 19, pp. 147–154.

Bellacasa, Maria Puig de la (2017). Matters of care: Speculative ethics in more than

human worlds. Minneapolis, MN: University of Minnesota Press.

Bergstrom, Carl and Jevin West (2018). “Why scatter plots suggest causality, and what

we can do about it”. In: arXiv preprint arXiv:1809.09328. url: https://arxiv.org/

abs/1809.09328.

Berkowitz, Bruce (2018). Playfair: The True Story of the British Secret Agent who

Changed how We See the World. Fairfax, VA: George Mason University Press.

Boehner, Kirsten et al. (2005). “Affect: from information to interaction”. In: Proceedings

of the 4th decennial conference on Critical computing: between sense and sensibility.

ACM, pp. 59–68.

Bolter, Jay David and Richard Grusin (2000). Remediation: Understanding new media.

Cambridge, MA: MIT Press. boyd, danah and Kate Crawford (2012). “Critical questions for big data: Provocations for

a cultural, technological, and scholarly phenomenon”. In: Information, communication

& society 15.5, pp. 662–679.

Boym, Svetlana (2005). “Poetics and politics of estrangement: Victor Shklovsky and

Hannah Arendt”. In: Poetics Today 26.4, pp. 581–611.

Bozalek, Vivienne and Michalinos Zembylas (2017). “Diffraction or reflection? Sketching

the contours of two methodologies in educational research”. In: International Journal

of Qualitative Studies in Education 30.2, pp. 111–127. BIBLIOGRAPHY 227

Butscher, Simon et al. (2018). “Clusters, trends, and outliers: How immersive technologies

can facilitate the collaborative analysis of multidimensional data”. In: Proceedings of

the 2018 CHI Conference on Human Factors in Computing Systems. ACM.

Buxton, Bill (2010). Sketching User Experiences: Getting the Design Right and the Right

Design: Getting the Design Right and the Right Design. San Francisco, CA: Morgan

Kaufmann Publishers.

Candlin, Fiona (2004). “Don’t touch! Hands off! Art, blindness and the conservation of

expertise”. In: Body & Society 10.1, pp. 71–90.

Card, Stuart, Jock Mackinlay, and Ben Shneiderman (1999). Readings in information

visualization: using vision to think. San Francisco, CA: Morgan Kaufmann Publishers.

Carpenter, Edmund (1980). “If Wittgenstein Had Been an Eskimo. Even for Profound

Philosophers, Literacy Has its Limitations”. In: Natural History New York, NY 89.2,

pp. 72–77.

Chalmers, Matthew (2003). “Seamful design and ubicomp infrastructure”. In: Proceedings

of Ubicomp 2003 Workshop at the Crossroads: The Interaction of HCI and Systems

Issues in Ubicomp.

Charmaz, Kathy (2006). Constructing grounded theory: A practical guide through quali-

tative analysis. Thousand Oaks, CA: Sage Publications.

Chia, Robert (2000). “Discourse analysis organizational analysis”. In: Organization 7.3,

pp. 513–518.

Christin, Angèle (2016). “From daguerreotypes to algorithms: Machines, expertise, and

three forms of objectivity”. In: ACM SIGCAS Computers and Society 46.1, pp. 27–32.

Clark, Andy (2007). “Re-inventing ourselves: The plasticity of embodiment, sensing, and

mind”. In: Journal of Medicine and Philosophy 32.3, pp. 263–282.

Classen, Constance (1997). “Foundations for an anthropology of the senses”. In: Interna-

tional Social Science Journal 49.153, pp. 401–412. BIBLIOGRAPHY 228

Cleveland, William S and Robert McGill (1984). “Graphical perception: Theory, exper-

imentation, and application to the development of graphical methods”. In: Journal of

the American statistical association 79.387, pp. 531–554.

Coleman, Rebecca (2014). “Inventive feminist theory: Representation, materiality and

intensive time”. In: Women: A Cultural Review 25.1, pp. 27–45.

Coopmans, Catelijne et al. (2014). Representation in scientific practice revisited. Cam-

bridge, MA: MIT Press.

Correll, Michael and Michael Gleicher (2014). “Error bars considered harmful: Exploring

alternate encodings for mean and error”. In: IEEE transactions on visualization and

computer graphics 20.12, pp. 2142–2151.

Costigan-Eaves, Patricia and Michael Macdonald-Ross (1990). “William Playfair (1759-

1823)”. In: Statistical Science, pp. 318–326.

Cowen, Ron (2015). “Sound bytes”. In: Scientific American 312.3, pp. 44–47.

Crary, Jonathan (1992). Techniques of the observer: on vision and modernity in the

nineteenth century. Cambridge, MA: MIT Press.

Crawford, Kate (2014). “The anxieties of big data”. In: The New Inquiry 30.

D’Ignazio, Catherine and Rahul Bhargava (2015). “Approaches to building big data lit-

eracy”. In: Proceedings of the Bloomberg Data for Good Exchange Conference.

Dalton, Craig and Jim Thatcher (2014). “What does a critical data studies look like, and

why do we care? Seven points for a critical approach to ‘big data’”. In: Society and

Space 29.

Daston, Lorraine and Peter Galison (2007). Objectivity. New York, NY: Zone Books.

Daston, Lorraine and Katharine Park (1998). Wonders and the Order of Nature, 1150-

1750. New York, NY: Zone books.

Davis, Philip J and Reuben Hersh (2005). Descartes’ dream: The world according to

mathematics. North Chelmsford, MA: Courier Corporation. BIBLIOGRAPHY 229

Deleuze, Gilles (1995). Difference and repetition. New York, NY: Columbia University

Press.

Diamond, Marion and Mervyn Stone (1981). “Nightingale on Quetelet”. In: Journal of

the Royal Statistical Society: Series A (General) 144.1, pp. 66–79.

Didion, Joan (1990). The white album. New York, NY: Macmillan.

Dirksmeier, Peter and Ilse Helbrecht (2008). “Time, non-representational theory and the

‘performative turn’ - Towards a new methodology in qualitative social research”. In:

Forum Qualitative Sozialforschung/Forum: Qualitative Social Research. Vol. 9. 2.

Donalek, Ciro et al. (2014). “Immersive and collaborative data visualization using virtual

reality platforms”. In: Big Data (Big Data), 2014 IEEE International Conference on.

IEEE, pp. 609–614.

Dörk, Marian et al. (2013). “Critical InfoVis: exploring the politics of visualization”. In:

CHI’13 Extended Abstracts on Human Factors in Computing Systems. ACM, pp. 2189–

2198.

Dourish, P. (2004). Where the action is: The foundations of embodied interaction. Cam-

bridge, MA: MIT Press.

Drucker, Johanna (2001). “Digital Ontologies: The Ideality of Form in/and Code Storage–

or–Can Graphesis Challenge Mathesis?” In: Leonardo 34.2, pp. 141–145.

– (2011). “Humanities approaches to graphical display”. In: Digital Humanities Quarterly

5.1, pp. 1–21.

– (2014). Graphesis. Cambridge, MA: Harvard University Press.

Eisenstein, Elizabeth L (1980). The printing press as an agent of change. Cambridge,

UK: Cambridge University Press.

Elmborg, James (2006). “Critical information literacy: Implications for instructional prac-

tice”. In: The Journal of Academic Librarianship 32.2, pp. 192–199. BIBLIOGRAPHY 230

Farina, Cynthia R et al. (2014). “Designing an online civic engagement platform: Balancing “more” vs. “better” participation in complex public policymaking”. In: International Journal of E-Politics (IJEP) 5.1, pp. 16–40.

Fleckenstein, Kristie S (2008). “A Matter of Perspective: Cartesian Perspectivalism and the Testing of English Studies”. In: JAC, pp. 85–121.

Floridi, Luciano (2012). “Big data and their epistemological challenge”. In: Philosophy & Technology 25.4, pp. 435–437.

Fortin, Claude (2016). “Recasting the data sublime in media architecture”. In: Proceedings of the 3rd Conference on Media Architecture Biennale. ACM, p. 6.

Friedman, Batya and Helen Nissenbaum (1996). “Bias in computer systems”. In: ACM Transactions on Information Systems (TOIS) 14.3, pp. 330–347.

Friendly, Michael (2006). “A Brief History of Data Visualization”. In: Handbook of Computational Statistics: Data Visualization. Ed. by C. Chen, W. Härdle, and A. Unwin. Vol. III. Heidelberg, DE: Springer-Verlag, pp. 15–56.

– (2008). “The golden age of statistical graphics”. In: Statistical Science, pp. 502–535.

Galloway, Alexander R (2014). Laruelle: Against the digital. Minneapolis, MN: University of Minnesota Press.

Gelman, Andrew (2011). “Why tables are really much better than graphs”. In: Journal of Computational and Graphical Statistics 20.1, pp. 3–7.

Gelman, Andrew and Antony Unwin (2013). “Infovis and statistical graphics: different goals, different looks”. In: Journal of Computational and Graphical Statistics 22.1, pp. 2–28.

Gillespie, Tarleton (2006). “Engineering a Principle: ‘End-to-End’ in the Design of the Internet”. In: Social Studies of Science 36.3, pp. 427–457.

Gitelman, Lisa (2013). Raw data is an oxymoron. Cambridge, MA: MIT Press.

Gore, Al (2006). An inconvenient truth: The planetary emergency of global warming and what we can do about it. Emmaus, PA: Rodale.

Hacking, Ian (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge, UK: Cambridge University Press.

Hagener, Malte (2015). “Beautiful Data/The Democratic Surround”. In: NECSUS. European Journal of Media Studies 4.2, pp. 223–227.

Hall, Peter Alec (2008). “Critical visualization”. In: Design and the elastic mind. Ed. by Paola Antonelli. New York, NY: Museum of Modern Art, pp. 122–131.

Halpern, Orit (2015). Beautiful data: A history of vision and reason since 1945. Durham, NC: Duke University Press.

Hamilakis, Yannis (2013). “Afterword: Eleven Theses on the Archaeology of the Senses”. In: Making senses of the past: Toward a sensory archaeology. Ed. by Jo Christine Day and Patricia S Eckert. Carbondale, IL: Center for Archaeological Investigations, Southern Illinois University Press.

Hansen, Mark BN (2006). “Media theory”. In: Theory, Culture & Society 23.2-3, pp. 297–306.

Haraway, Donna (1988). “Situated knowledges: The science question in feminism and the privilege of partial perspective”. In: Feminist Studies 14.3, pp. 575–599.

– (1997). Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse: Feminism and technoscience. London, UK: Psychology Press.

– (2004). The Haraway reader. London, UK: Psychology Press.

Hayles, N. Katherine (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. Chicago, IL: University of Chicago Press.

Heller, Steven and Rick Landers (2014). Infographic Designers’ Sketchbooks. New York, NY: Princeton Architectural Press.

Hillier, Jean (2008). “Plan(e) speaking: A multiplanar theory of spatial planning”. In: Planning Theory 7.1, pp. 24–50.

Hoel, Aud Sissel and Annamaria Carusi (2018). “Merleau-Ponty and the Measuring Body”. In: Theory, Culture & Society 35.1, pp. 45–70.

Holmes, Oliver Wendell (1892). Medical essays 1842–1882. Boston, MA: Houghton, Mifflin.

Hooker, Clifford Alan (1987). A realistic theory of science. Albany, NY: SUNY Press.

Horkheimer, Max (1972). Critical theory: Selected essays. Vol. 1. New York, NY: A&C Black.

Ihde, Don (1999). “Expanding hermeneutics”. In: Hermeneutics and Science. Ed. by Márta Fehér, Olga Kiss, and Laszlo Ropolyi. New York, NY: Springer, pp. 345–351.

– (2017). “Sonifying science: Listening to cancer”. In: Nursing Philosophy 18.1.

Jansen, Yvonne (2014). “Physical and tangible information visualization”. PhD thesis. Université Paris Sud-Paris XI.

Jansen, Yvonne, Pierre Dragicevic, and Jean-Daniel Fekete (2013). “Evaluating the efficiency of physical visualizations”. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, pp. 2593–2602.

Jansen, Yvonne et al. (2015). “Opportunities and challenges for data physicalization”. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, pp. 3227–3236.

Jasanoff, Sheila (2012). Science and public reason. New York, NY: Routledge.

Jay, Martin (1988). “The rise of hermeneutics and the crisis of ocularcentrism”. In: Poetics Today 9.2, pp. 307–326.

– (1991). “The disenchantment of the eye: surrealism and the crisis of ocularcentrism”. In: Visual Anthropology Review 7.1, pp. 15–38.

Johri, Aditya, Wolff-Michael Roth, and Barbara M Olds (2013). “The role of representations in engineering practices: Taking a turn towards inscriptions”. In: Journal of Engineering Education 102.1, pp. 2–19.

Jurgenson, Nathan (2014). “View from nowhere: on the cultural ideology of big data”. In: The New Inquiry 9.

Kant, Immanuel (1960). Observations on the Feeling of the Beautiful and Sublime. Translated by John T. Goldthwait. Berkeley, CA: University of California Press.

Kennedy, Helen and Rosemary Lucy Hill (2017). “The pleasure and pain of visualizing data in times of data power”. In: Television & New Media 18.8, pp. 769–782.

Kensing, Finn and Jeanette Blomberg (1998). “Participatory design: Issues and concerns”. In: Computer Supported Cooperative Work (CSCW) 7.3-4, pp. 167–185.

Kim, Yea-Seul, Katharina Reinecke, and Jessica Hullman (2017). “Explaining the Gap: Visualizing One’s Predictions Improves Recall and Comprehension of Data”. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, pp. 1375–1386.

Kirschenbaum, Matthew (2004). “Extreme inscription: Towards a grammatology of the hard drive”. In: TEXT Technology 13.2, pp. 91–125.

Knechtel, Ruth (2010). “Digital estrangement, or anxieties of the virtually visual: XSLT transformations and the 1890s online”. In:

Knorr Cetina, Karin (1999). Epistemic cultures: How the sciences make knowledge. Cambridge, MA: Harvard University Press.

– (2007). “Culture in global knowledge societies: Knowledge cultures and epistemic cultures”. In: Interdisciplinary Science Reviews 32.4, pp. 361–375.

– (2008). “Objectual Practice”. In: Knowledge As Social Order: Re-Examining Barry Barnes, p. 83.

Knuuttila, Tarja (2005). “Models, representation, and mediation”. In: Philosophy of Science 72.5, pp. 1260–1271.

– (2011). “Modelling and representing: An artefactual approach to model-based representation”. In: Studies in History and Philosophy of Science Part A 42.2, pp. 262–271.

Kosara, Robert (2007). “Visualization criticism – the missing link between information visualization and art”. In: Proceedings of the 11th International Conference on Information Visualization (IV’07). IEEE, pp. 631–636.

Kostelnick, Charles (2008). “The visual rhetoric of data displays: The conundrum of clarity”. In: IEEE Transactions on Professional Communication 51.1, pp. 116–130.

Kress, Gunther and Theo van Leeuwen (2006). Reading images: The grammar of visual design. New York, NY: Routledge.

Larkin, Jill H and Herbert A Simon (1987). “Why a diagram is (sometimes) worth ten thousand words”. In: Cognitive Science 11.1, pp. 65–100.

Larson, Mindy Legard and Donna Kalmbach Phillips (2013). “Searching for Methodology: Feminist Relational Materialism and the Teacher-Student Writing Conference”. In: Reconceptualizing Educational Research Methodology 4.1.

Latour, Bruno (1983). “Visualisation and Cognition: Drawing Things Together”. In: Knowledge and Society Studies in the Sociology of Culture Past and Present. Ed. by H. Kuklick. Stamford, CT: JAI Press, pp. 1–40.

– (2004). “Why has critique run out of steam? From matters of fact to matters of concern”. In: Critical Inquiry 30.2, pp. 225–248.

– (2008). “A cautious Prometheus? A few steps toward a philosophy of design (with special attention to Peter Sloterdijk)”. In: Proceedings of the 2008 annual international conference of the Design History Society, pp. 2–10.

Levin, David Michael (1993). Modernity and the hegemony of vision. Berkeley, CA: University of California Press.

Lohr, Steve (2012). “The age of big data”. In: New York Times. url: https://www.nytimes.com/2012/02/12/sunday-review/big-datas-impact-in-the-world.html.

Lord, Beth (2006). “Foucault’s museum: difference, representation, and genealogy”. In: Museum and Society 4.1, pp. 1–14.

Loukissas, Yanni Alexander (2016a). “A place for Big Data: Close and distant readings of accessions data from the Arnold Arboretum”. In: Big Data & Society 3.2, p. 2053951716661365.

– (2016b). “Taking Big Data apart: local readings of composite media collections”. In: Information, Communication & Society, pp. 1–14.

Luck, Rachael (2003). “Dialogue in participatory design”. In: Design Studies 24.6, pp. 523–535.

Lukens, Jonathan and Carl DiSalvo (2012). “Speculative design and technological fluency”. In: International Journal of Learning and Media 3.4, pp. 23–40.

Lynch, Kevin (1960). The image of the city. Cambridge, MA: MIT Press.

Lynch, Michael (1994). “Representation is overrated: Some critical remarks about the use of the concept of representation in science studies”. In: Configurations 2.1, pp. 137–149.

Lynch, Michael and Steve Woolgar (1990). Representation in scientific practice. Cambridge, MA: MIT Press.

Lyotard, Jean-François (1994). Lessons on the Analytic of the Sublime: Kant’s Critique of Judgment, §23–29. Palo Alto, CA: Stanford University Press.

Magaudda, Paolo (2013). “What happens to materiality in digital virtual consumption?” In: Digital Virtual Consumption. Ed. by Mike Molesworth and Janice Denegri Knott. New York, NY: Routledge, pp. 118–133.

Mailer, Norman (2008). On God: An Uncommon Conversation. New York, NY: Random House.

Mann, Michael E (2013). The hockey stick and the climate wars: Dispatches from the front lines. New York, NY: Columbia University Press.

Manovich, Lev (2002a). The anti-sublime ideal in data art. url: http://virus.meetopia.net/pdf-ps_db/LManovich_data_art.pdf.

– (2002b). The language of new media. Cambridge, MA: MIT Press.

– (2014). “Visualization Methods for Media Studies”. In: Oxford Handbook of Sound and Image in Digital Media.

Marx, Karl (1844). “Private property and communism”. In: The economic and philosophic manuscripts of 1844. Ed. by Dirk Struik. New York, NY: International Publishers, pp. 132–146.

Mattern, Shannon (2015). “History of the Urban Dashboard”. In: Places Journal. url: https://placesjournal.org/article/mission-control-a-history-of-the-urban-dashboard/.

Maycotte, H.O. (2014). Data Literacy–What It Is And Why None of Us Have It. url: https://www.forbes.com/sites/homaycotte/2014/10/28/data-literacy-what-it-is-and-why-none-of-us-have-it/#5a96944268bb.

Mazzei, Lisa A (2014). “Beyond an easy sense: A diffractive analysis”. In: Qualitative Inquiry 20.6, pp. 742–746.

McCloud, Scott (2006). Making comics: Storytelling secrets of comics, manga and graphic novels. New York, NY: Harper.

McCormack, Jon et al. (2018). “Multisensory immersive analytics”. In: Immersive Analytics. Ed. by Kim Marriott et al. New York, NY: Springer, pp. 57–94.

McGookin, David, Euan Robertson, and Stephen Brewster (2010). “Clutching at straws: using tangible interaction to provide non-visual access to graphs”. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, pp. 1715–1724.

Merleau-Ponty, Maurice (1964). “Eye and mind”. In: The primacy of perception. Ed. by James Edie. Evanston, IL: Northwestern University Press.

Miller, Perry (1965). The Life of the Mind in America: Books One Through Three. San Diego, CA: Harcourt, Brace & World.

Monks, Sarah (2010). “Suffer a Sea-Change: Turner, Painting, Drowning”. In: Tate Research Papers 14. url: https://www.tate.org.uk/art/research-publications/the-sublime/sarah-monks-suffer-a-sea-change-turner-painting-drowning-r1136832.

Morozov, Evgeny (2014). “The rise of data and the death of politics”. In: The Guardian. url: https://www.theguardian.com/technology/2014/jul/20/rise-of-data-death-of-politics-evgeny-morozov-algorithmic-regulation.

Nancy, Jean-Luc (2007). Listening. New York, NY: Fordham University Press.

Neven, Louis Barbara Maria (2011). “Representations of the old and ageing in the design of the new and emerging: assessing the design of ambient intelligence technologies for older people”. PhD thesis. Universiteit Twente.

Newell, Jenny (2012). “Old objects, new media: Historical collections, digitization and affect”. In: Journal of Material Culture 17.3, pp. 287–306.

Nocek, Adam James (2015). “Animate Biology: Data, Visualization, and Life’s Moving Image”. PhD thesis. University of Washington.

Norman, Don (2014). Things that make us smart: Defending human attributes in the age of the machine. New York, NY: Diversion Books.

Nowell, Lorelli S et al. (2017). “Thematic analysis: Striving to meet the trustworthiness criteria”. In: International Journal of Qualitative Methods 16.1, p. 1609406917733847.

Nye, David E (1994). American technological sublime. Cambridge, MA: MIT Press.

Parikka, Jussi (2015). A geology of media. Vol. 46. Minneapolis, MN: University of Minnesota Press.

Patnaik, Biswaksen, Andrea Batch, and Niklas Elmqvist (2018). “Information Olfactation: Harnessing Scent to Convey Data”. In: IEEE Transactions on Visualization and Computer Graphics.

Pillow, Kirk (2000). Sublime understanding: Aesthetic reflection in Kant and Hegel. Cambridge, MA: MIT Press.

Pinch, Trevor J and Wiebe E Bijker (1984). “The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other”. In: Social Studies of Science 14.3, pp. 399–441.

Pink, Sarah (2009). Doing sensory ethnography. London, UK: Sage Publications.

Pink, Sarah et al. (2013). “Applying the lens of sensory ethnography to sustainable HCI”. In: ACM Transactions on Computer-Human Interaction (TOCHI) 20.4, p. 25.

Pinker, Steven (2018). Enlightenment now: the case for reason, science, humanism, and progress. London, UK: Penguin Books.

Raley, Rita (2009). Tactical media. Minneapolis, MN: University of Minnesota Press.

Ratto, Matt (2011a). “CSE as Epistemic Technologies: Computer Modeling and Disciplinary Difference in the Humanities”. In: Handbook of Research on Computational Science and Engineering: Theory and Practice. Ed. by J. Leng and Wes Sharrock. IGI Global.

– (2011b). “Critical Making: Conceptual and material studies in technology and social life”. In: The Information Society 27.4, pp. 252–260.

– (2016). “Making at the end of nature”. In: interactions 23.5, pp. 26–35.

Resch, Gabby (2019). “Denaturalizing Visual Bias in Multisensory Data Visualization”. In: Digital Culture and Society. Forthcoming, in revision.

Resch, Gabby, Yanni Loukissas, and Matt Ratto (Sept. 2016). Critical Information Practice. Society for the Social Studies of Science Annual Meeting. Barcelona, ES.

Resch, Gabby, Daniel Southwick, and Matt Ratto (2018). “Denaturalizing 3D Printing’s Value Claims”. In: New Directions in 3rd Wave HCI. Ed. by Michael Filimowicz. Cham, CH: Springer International.

Resch, Gabby et al. (Mar. 2016). Makes Sense to Me!: Participatory Sensing, Information Visualization, and 3D Representation. iConference. Philadelphia, PA.

Resch, Gabby et al. (2018). “Thinking as Handwork: Critical Making with Humanistic Concerns”. In: Making Things and Drawing Boundaries: Experiments in the Digital Humanities. Ed. by Jentery Sayers. Minneapolis, MN: University of Minnesota Press.

Rheinberger, Hans-Jörg (1997). Toward a History of Epistemic Things: Synthesizing Proteins in the Test Tube. Palo Alto, CA: Stanford University Press.

Roberts, Jonathan C et al. (2014). “Visualization beyond the desktop – the next big thing”. In: Computer Graphics and Applications, IEEE 34.6, pp. 26–34.

Roberts, Tom (2017). “Thinking technology for the Anthropocene: encountering 3D printing through the philosophy of Gilbert Simondon”. In: cultural geographies, p. 1474474017704204.

Rouse, Joseph (1996). Engaging science: How to understand its practices philosophically. Ithaca, NY: Cornell University Press.

Sack, Warren (2011). “Aesthetics of information visualization”. In: Context Providers: Conditions of Meaning in Media Arts. Ed. by Christiane Paul, Margot Lovejoy, and Victoria Vesna. Bristol, UK: Intellect, pp. 123–150.

Salter, Chris (2015). Alien agency: Experimental encounters with art in the making. Cambridge, MA: MIT Press.

Sayers, Jentery (2015). “Prototyping the Past”. In: Visible Language 49.3, pp. 156–177.

Sengers, Phoebe et al. (2004). “Culturally embedded computing”. In: Pervasive Computing, IEEE 3.1, pp. 14–21.

Sengers, Phoebe et al. (2005). “Reflective design”. In: Proceedings of the 4th decennial conference on Critical computing: between sense and sensibility. ACM, pp. 49–58.

Sennett, Richard (2008). The craftsman. New Haven, CT: Yale University Press.

Shklovsky, Viktor (2015). “Art, as device”. In: Poetics Today 36.3, pp. 151–174.

Simondon, Gilbert (2009). “The position of the problem of ontogenesis”. In: Parrhesia 7.1, pp. 4–16.

Slater, Mel and Sylvia Wilbur (1997). “A framework for immersive virtual environments (FIVE): Speculations on the role of presence in virtual environments”. In: Presence: Teleoperators & Virtual Environments 6.6, pp. 603–616.

Spiegel, Simon (2008). “Things Made Strange: On the Concept of ‘Estrangement’ in Science Fiction Theory”. In: Science Fiction Studies, pp. 369–385.

Spinuzzi, Clay (2005). “The methodology of participatory design”. In: Technical Communication 52.2, pp. 163–174.

Stallabrass, Julian (2007). “What’s in a face? Blankness and significance in Contemporary Art photography”. In: October, pp. 71–90.

Star, Susan Leigh and James R Griesemer (1989). “Institutional ecology, translations, and boundary objects: Amateurs and professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39”. In: Social Studies of Science 19.3, pp. 387–420.

Sterne, Jonathan (2003). The audible past: Cultural origins of sound reproduction. Durham, NC: Duke University Press.

– (2006). “The mp3 as cultural artifact”. In: New Media & Society 8.5, pp. 825–842.

Suchman, Lucy (2005). “Affiliative Objects”. In: Organization 12.3, pp. 379–399.

Supper, Alexandra (2014). “Sublime frequencies: The construction of sublime listening experiences in the sonification of scientific data”. In: Social Studies of Science 44.1, pp. 34–58.

Švankmajer, Jan (2014). Touching and Imagining: An Introduction to Tactile Art. London, UK: I.B. Tauris.

Taher, Faisal et al. (2017). “Investigating the use of a dynamic physical bar chart for data exploration and presentation”. In: IEEE Transactions on Visualization and Computer Graphics 23.1, pp. 451–460.

Tak, Susanne and Alexander Toet (2013). “Towards Interactive Multisensory Data Representations”. In: GRAPP/IVAPP, pp. 558–561.

Tavares, Tatiana Aires (2013). “Multisensory Interaction Design: Designing User Interfaces for Human Senses”. In: International Conference WWW/Internet 2013.

Tufte, Edward R (1983). The visual display of quantitative information. Cheshire, CT: Graphics Press.

– (1997). Visual explanations. Cheshire, CT: Graphics Press.

– (2006). Beautiful Evidence. Cheshire, CT: Graphics Press.

Tukey, John W (1962). “The future of data analysis”. In: The Annals of Mathematical Statistics 33.1, pp. 1–67.

Turkel, William J (2011). “Intervention: Hacking history, from analogue to digital and back again”. In: Rethinking History 15.2, pp. 287–296.

Tygel, Alan Freihof and Rosana Kirsch (2016). “Contributions of Paulo Freire for a Critical Data Literacy: a Popular Education Approach”. In: The Journal of Community Informatics 12.3.

Van der Tuin, Iris (2011). ““A Different Starting Point, a Different Metaphysics”: Reading Bergson and Barad Diffractively”. In: Hypatia 26.1, pp. 22–42.

– (2014). “Diffraction as a Methodology for Feminist Onto-Epistemology: On Encountering Chantal Chawaf and Posthuman Interpellation”. In: Parallax 20.3, pp. 231–244.

Van der Tuin, Iris and Rick Dolphijn (2010). “The transversality of new materialism”. In: Women: A Cultural Review 21.2, pp. 153–171.

Victor, Bret (2015). Magic Ink: Information software and the graphical interface, 2005. url: http://worrydream.com/MagicInk.

Viégas, Fernanda B and Martin Wattenberg (2007). “Artistic data visualization: Beyond visual analytics”. In: International Conference on Online Communities and Social Computing. Springer, pp. 182–191.

– (2008). “Timelines: tag clouds and the case for vernacular visualization”. In: interactions 15.4, pp. 49–52.

– (2015). Design and redesign in data visualization. url: https://medium.com/@hint_fm/design-and-redesign-4ab77206cf9.

Wainer, Howard (1990). “Graphical Visions from William Playfair to John Tukey”. In: Statistical Science, pp. 340–346.

– (1996). “Visual Revelations: Why Playfair?” In: Chance 9.2, pp. 43–52.

Wainer, Howard and Paul F Velleman (2001). “Statistical graphics: Mapping the pathways of science”. In: Annual Review of Psychology 52.1, pp. 305–335.

Ware, Colin (2012). Information visualization: Perception for design. Burlington, MA: Morgan Kaufmann.

Wartofsky, Marx W (1979). “The model muddle: Proposals for an immodest realism”. In: Models. Dordrecht, NL: Springer, pp. 1–11.

Weiskel, Thomas (1976). The romantic sublime. Baltimore, MD: Johns Hopkins University Press.

Wickham, Hadley (2010). “A layered grammar of graphics”. In: Journal of Computational and Graphical Statistics 19.1, pp. 3–28.

Wickham, Hadley et al. (2014). “Tidy data”. In: Journal of Statistical Software 59.10, pp. 1–23.

Wilkinson, Leland (2006). The grammar of graphics. Berlin, DE: Springer Science & Business Media.

– (2007). Playfair’s Commercial and Political Atlas and Statistical Breviary.

Wolf, Gary (2010). “The data-driven life”. In: The New York Times. url: https://www.nytimes.com/2010/05/02/magazine/02self-measurement-t.html.

Wood, E. and K.F. Latham (2011). “The Thickness of the Things: Exploring the Museum Curriculum through Phenomenological Touch”. In: Journal of Curriculum Theorizing 27.2, pp. 51–65.

Yin, Robert (2006). Case study research: Design and methods. Thousand Oaks, CA: Sage Publications.

Younan, Sarah and Cathy Treadaway (2015). “Digital 3D models of heritage artefacts: towards a digital dream space”. In: Digital Applications in Archaeology and Cultural Heritage 2.4, pp. 240–247.

Appendix A: Interview Guide

Conversational prompts will be chosen or modified according to the respondent’s level of expertise, professional background, and interest in the subject matter.

Start by reading the verbal consent script, then review the informed consent form and obtain the participant’s signature.

Background on this project:
• Social reasoning
• Explain that we’re only focusing on specific graphic modalities today for the sake of simplicity, but that we’re also working with other modes
• Explain that our goal is reproducibility and modularity of graphics, so that updated versions can be dropped into a tactile dashboard (a minimal sketch of such a pipeline follows this list)
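A minimal sketch of what such a reproducible pipeline could look like, assuming hypothetical ridership figures, bar dimensions, and file names (none of these are drawn from the study itself): a short Python script converts a small civic dataset into an ASCII STL model of a tactile bar chart, so that a refreshed dataset regenerates the same modular object for the dashboard.

    def cuboid_facets(x, y, w, d, h):
        # Twelve triangles forming an axis-aligned box from (x, y, 0) to (x+w, y+d, h).
        v = [(x, y, 0), (x + w, y, 0), (x + w, y + d, 0), (x, y + d, 0),
             (x, y, h), (x + w, y, h), (x + w, y + d, h), (x, y + d, h)]
        faces = [(0, 2, 1), (0, 3, 2), (4, 5, 6), (4, 6, 7), (0, 1, 5), (0, 5, 4),
                 (1, 2, 6), (1, 6, 5), (2, 3, 7), (2, 7, 6), (3, 0, 4), (3, 4, 7)]
        return [(v[a], v[b], v[c]) for a, b, c in faces]

    def write_stl(facets, path, name="tactile_chart"):
        # ASCII STL output; normals are left at zero, which slicers recompute on import.
        with open(path, "w") as f:
            f.write("solid %s\n" % name)
            for tri in facets:
                f.write("  facet normal 0 0 0\n    outer loop\n")
                for vx, vy, vz in tri:
                    f.write("      vertex %.3f %.3f %.3f\n" % (vx, vy, vz))
            f.write("    endloop\n  endfacet\n".join([""])) if False else None
            f.write("endsolid %s\n" % name)

    # Hypothetical weekday transit ridership counts (thousands of riders).
    ridership = {"Mon": 61, "Tue": 65, "Wed": 64, "Thu": 66, "Fri": 58}

    # 2 mm base plate; 10 mm wide bars with 2 mm gaps; 0.5 mm of height per unit.
    facets = cuboid_facets(0, 0, len(ridership) * 12 + 2, 14, 2)
    for i, value in enumerate(ridership.values()):
        facets += cuboid_facets(2 + i * 12, 2, 10, 10, 2 + value * 0.5)

    write_stl(facets, "ridership_tactile_chart.stl")

Because the geometry is generated rather than hand-modelled, dropping in a new week of data reproduces the same graspable form with updated bar heights.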

Participant’s background, especially with data graphics (10 minutes):
• What do you do professionally or recreationally?
• What is your current level of interaction with digitized data? This might include data generated from wearable computers (e.g. Fitbit), sports and cultural events (e.g. baseball stats), or civic data (e.g. TTC ridership stats).
• How well do institutional data providers (e.g. Statistics Canada) make provisions for vision-impaired users?
• How well do open data platforms (e.g. the City of Toronto’s Open Data portal) make provisions for vision-impaired users?

Participant’s background with tactile and interactive surfaces (rate familiarity with each on a scale of 1 to 5):
• Interactive tactile surfaces, which enable you to manipulate objects on a computer using touch gestures
• Haptic technologies (like rumble packs for game controllers), which turn digital gestures and interactions into forces, vibrations, or motions that you can feel with your hands or body
• 3D printing, which can turn digital objects into material ones

Background on the King Street pilot and discussion using tactile prompts, including:
• Infographic/dashboard
  ◦ Braille vs. raised text vs. other tactile writing systems?
  ◦ Inverse vs. raised graphs and text?
  ◦ In what context(s) would you find a dashboard or tactile infographic useful? With what kinds of data?
• Tactile surfaces
  ◦ Size/scale of the object: is larger better?
  ◦ Tactility/malleability/softness?
• Interactive haptic touch interface
  ◦ Text-to-speech voice or recordings of natural speakers?

Discussion around open civic data:
• What datasets would be useful to you (give examples)?
  ◦ Civic data
  ◦ Housing
  ◦ Demographic
  ◦ TTC capacity
• What civic datasets would be valuable for blind or partially blind users?
• What do people need to know about the city that is visible in open data streams but is not accessible?
• What other datasets would be personally interesting to you, based on your own interests?

Additional concerns:
• How might we get away from the requirement for an interpreter/data storyteller?
• Is audio playback the preferred method of receiving information?
• Would braille on a tactile infographic be a suitable replacement?
• Do you think data visualization techniques can be made more accessible with new technological advancements?
• Do you think physical data objects created from digitized data can stand in for their visual counterparts?
• How might traditionally visual information be presented in non-visual ways?
• Do you have any thoughts on how data visualization might engage multiple senses? Which senses, and under what circumstances?

Appendix B: Recruitment


CALL FOR PARTICIPATION IN A RESEARCH STUDY

Hello ______,

My name is Gabby Resch. I’m a doctoral researcher at the University of Toronto’s Faculty of Information, where I work in the Semaphore Research Cluster for Inclusive Design. With Dr. Matt Ratto, Associate Professor in the Faculty of Information, I am currently undertaking a research study on new data interaction technologies that vision-impaired users might benefit from.

The title of this study is “Making Big Data accessible: Alternative forms of data representation and analysis for blind and partially-sighted users.” Our planned programme of research will explore alternative forms of data representation, particularly interactive tactile and auditory modalities that potentially enable greater accessibility to data analysis and understanding. Using 3D printing and employing embedded tactile electronic technologies, we will generate and evaluate tangible 'data objects' intended to provide non-visual access to civic data and related concepts. This research project involves design workshops with a diverse group of participants, including accessibility professionals, interaction designers, and members of the vision-impaired community. The results from this work will inform the generation of novel accessible interfaces and objects for data interpretation.

I would like to invite you to participate in one of our research workshops. You will be asked to attend a half-day session in which you will learn about DIY microelectronics, 3D printing, and a number of related technologies that you will use to collaboratively design and create a physical 3D data object analogous to the 2D visual data graphics that are common today. No previous technical knowledge is required to participate. The workshop will take place at the University of Toronto’s Semaphore Research Cluster, an accessible location near St. George subway station.

Your involvement in this research project is voluntary, and you will be compensated with a $50 gift card. Additionally, catered food will be provided, and you will get to keep any creative output resulting from your participation. Your choice to either accept or decline participation will be kept strictly confidential. For further details about this study, please see the consent form attached to this message.

If you are interested in participating in this research, or would like to further discuss the details of the study, please contact me at [email protected], or by phone at (647) XXX-XXXX. Alternatively, you may contact Dr. Matt Ratto, at [email protected]

Thank you for your consideration,

Gabby Resch
PhD Candidate
Faculty of Information
University of Toronto

Appendix C: Informed Consent


RIGHTS OF RESEARCH PARTICIPANTS

You may withdraw your consent at any time and discontinue participation without penalty. You are not waiving any legal claims, rights or remedies because of your participation in this research study. This study has been reviewed and received ethics clearance through the University of Toronto Social Sciences, Humanities and Education Research Ethics Board. This research study may be reviewed for quality assurance to ensure that the required laws and guidelines are followed. If this study is chosen for review, a representative of the Human Research Ethics Program may access the study and consent materials as part of the review process. Information accessed by the Human Research Ethics Program will be held to the same level of confidentiality stated by the researcher.

If you have any questions about this research protocol, or your rights as a participant, you may contact:

Office of Research Ethics
University of Toronto
McMurrich Building, 2nd floor
12 Queen’s Park Crescent West
Toronto, ON M5S 1S8
Telephone: (416) 946-3273
Fax: (416) 946-5763
[email protected]

STUDY OUTCOMES

The researcher intends to publish findings based on this study in scholarly journals and at academic conferences. Participants in this research may follow its progress or get more project details at: http://semaphore.utoronto.ca/.

SIGNATURE OF RESEARCH PARTICIPANT/LEGAL REPRESENTATIVE

I have read the information provided for the study “Making Big Data accessible: Alternative forms of data representation and analysis for blind and partially-sighted users” as described herein. My questions have been answered to my satisfaction, and I agree to participate in this study. I have been given a copy of this form.

______ Name of Participant (please print)

______ Signature of Participant    ______ Date

SIGNATURE OF WITNESS

______ Name of Witness (please print)

______ Signature of Witness    ______ Date