Explorable Explanations

What are they? What do they explain? How do we work with them? Let's find out.

by Jesper Hyldahl Fogh, May 2018
Thesis project – Interaction Design Master at K3 / Malmö University / Sweden

Examiner: Maria Engberg · Supervisor: Simon Niedenthal · Time of examination: 28th of May, 13:00


TABLE OF CONTENTS

ABSTRACT
1 · INTRODUCTION
1.1 · Research Question
1.2 · Delimitation
1.3 · Structure
2 · THEORY
2.1 · Generic Design
Critiques
2.2 · Educational games theory
Educational games
Games research
Play, games, toys, and simulations
Mechanics, dynamics and aesthetics
Intro to relevant educational theory
The cognitive process dimension
The knowledge dimension
2.3 · What's next?
3 · ANALYSIS
3.1 · Explorable explanations
The categories of analysis
General examples
Simulating the World In Emojis
Introduction to A*
Notable examples
4D Toys
Pink trombone
Something Something Soup Something
Hooked
The Monty Hall Problem
Talking with God
Fake it to Make It
So what are explorable explanations then?
What's next?
4 · METHODOLOGY
4.1 · Sketching and prototyping
4.2 · Evaluation
5 · DESIGN PROCESS
5.1 · The design goals
5.2 · Neural networks
What are neural networks?
Why neural networks make sense as a subject
5.3 · The three iterations
Iteration 1 · The Visualized Network
Evaluating the iteration
Iteration 2 · The World's Dumbest Dog
Evaluating the iteration
Iteration 3 · A Tale of 70.000 Numbers
Evaluating the iteration
5.4 · A summary of the whole design process
6 · REFLECTION & FUTURE WORK
6.1 · Reflection
Method
6.2 · Future work
For designers
For researchers
7 · CONCLUSION
ACKNOWLEDGEMENTS
REFERENCES
APPENDICES


ABSTRACT

In this paper, the author examines the concept of explorable explanations. It has emerged as a genre of educational software within the last seven years, yet descriptions of it are vague at best. The author works with the genre through a generic design approach that consists of an analysis of existing explorables and the design of three iterations of the author's own explorable explanation on the topic of neural networks. Twenty-two examples, of which nine are presented in depth, are analyzed with educational theory and games research theory as tools. It is found that explorable explanations tend to be digital experiences with a high degree of interactivity that attempt to teach facts, concepts and procedures to the user. Furthermore, the author embarks on a design process of creating explorable explanations of their own to understand what can be relevant when designing and evaluating an explorable explanation. The paper is concluded with reflections on the method employed in the project. Future work is also briefly outlined, concerning the impact the analysis and design work can have on the practice of other designers seeking to work with the genre, as well as on other researchers.


1 · INTRODUCTION

There is an explorable explanation that teaches how the human voice works (Thapen, 2017). Another one, called Fake it to Make It (Warner, 2017), teaches how and why fake news works. You can even find one that introduces the concept of the fourth dimension through play (ten Bosch, 2017). There are plenty more out there with a variety of other subjects. Yet it is not an easy task to understand if explorable explanations can be considered different from educational games, data visualizations, interactive narratives or similar. There are some definitions out there (Case, 2017; Lambrechts, 2018; Victor, 2011), but none of them are really all that helpful in understanding the field. Goldstein (2015) does an admirable job of trying to nail down definitions for it, but most of his references and conclusions are seemingly pulled from thin air. Within the academic world, my searches have not turned up much help either. Granström (2016) works with the genre, but is ultimately more interested in the dissemination of physics than in designing and understanding explorable explanations. Other sources (He and Adar, 2017; Kaltman, 2015) merely mention the genre in passing. Before we can compare the genre to others, like those mentioned above, it helps to understand what the genre actually is. This paper seeks to do this by conducting a categorical analysis of 22 explorable explanations, and a design process with three iterations of an explorable explanation design concept, which attempts to explain the inner workings of artificial neural networks.

1.1 · Research Question

This project focuses on answering the following research question:

How can we characterize explorable explanations, and what design qualities should be considered when designing and evaluating them?

1.2 · Delimitation

This project is not about design guidelines. I am not attempting to find a design process that can be relied upon for future work with explorable explanations. This is also not about evaluating whether explorable explanations are good at teaching their subject matter to players. These are both worthwhile endeavours, but not within the scope of this paper.

1.3 · Structure

First off, I will go through some theory that is necessary for understanding the analysis of explorable explanations as well as the description of my design work that follows it. The analysis covers 22 examples of explorable explanations, and presents 9 of these examples in depth. Following this, the paper introduces my own explorable explanations, which were designed to gain a better understanding of what it takes to design explorables. Finally, I will discuss and reflect on the project and the process that I have gone through, before suggesting possible paths for future work.

This paper is part of a larger story on the research behind it, and there are more aspects to this research than what can be conveyed in the format of a paper. First of all are the prototypes that were developed as part of the research process. In the hopes that it will give a more vivid image of my research, the latest version of each iteration is available to try online at the following address for at least a year after publication: http://neuralnet-explorable.herokuapp.com


The source code for the prototypes will also be available for at least a year on the following Github repository: https://github.com/jepster-dk/neuralnet-explorable

It is recommended that the reader experience the prototypes at some point while reading this paper.

Secondly, the way that this paper is structured does not adequately reflect the research process that lies behind it. Papers are written linearly, but the process behind my research did not occur linearly. Furthermore, there is not enough room to present all of the thoughts, experiences and discussions that have led me to this point. So, in order to accommodate the critique of Zimmerman et al. (2010) on the importance of process documentation in research-through-design, I have attempted to visualize the trajectory of my work in figure 1.

Figure 1 · My research process, visualized

In general, this visualization shows a process that has featured many explored branching paths, as well as a process that has had to throw away some work in order to trace back to what was important. It is my adaptation of Buxton's idea of design as a branching exploration (Buxton, 2007, p. 388). An example of a branch is the fact that this project started as simply an attempt to create an explorable explanation about neural networks, without regard for attempting to describe the genre. Another example of a branch is that I initially wanted to play test my prototypes on an expert in the field of neural networks. Due to time constraints, this never came to fruition.

Finally, a small note on language. I will sometimes refer to explorable explanations as simply explorables for the sake of brevity. Furthermore, I will use the terms artificial neural network and neural network interchangeably. The words user and player will also be used interchangeably, since explorables are still poorly defined. They will both be used regardless of whether a referred explorable can be labelled a game or not. With that said, let us move on.

2 · THEORY

Before moving on to the meat of the research process, I wish to establish the theoretical grounding for the project. This includes two main aspects: the employed research-through-design approach and theory on educational games. When structuring and framing my design work, I am following the notion of generic design thinking as introduced by Wiberg and Stolterman (2014). On the matter of educational games theory, I am relying on both theory on games and play, as well as more general educational theory.

2.1 · Generic Design

Wiberg and Stolterman (2014) propose their idea of generic design thinking as an answer to the issue of gauging whether a design is novel. Generic design is a way to both conceptualize a new design while relating it to an existing body of designs. The goal is that generic designs can make it easier for researchers to figure out if their own and others' work is novel, and thus eligible as a new contribution to the existing body of knowledge. Wiberg and Stolterman draw the notion of generic design from Warfield (1990), and they describe it as:

"A generic design in HCI can be seen as a design concept [emphasis added] that ​ ​ captures some essential qualities of a large number of particular designs [emphasis ​ ​ added], i.e., it defines a class or design space of interactive systems." (Wiberg and Stolterman, 2014, p. 6)

I have emphasized a few key phrases in this description. The first is that generic designs are design concepts, i.e. they are not just descriptions or ideas, but rather sketches, models, prototypes and the like. Second is that this design concept must capture some essential qualities of a large number of particular designs, which means that the aim of a generic design is not to create a perfect prototype that has been thoroughly user-tested and ready to hit the market. A generic design should be representative of a broad range of designs that reside within the same class and design space. It is important to note that to Wiberg and Stolterman (2014), this applies to both HCI and Interaction Design Research. They make no effort to distinguish between the two fields.

In introducing generic design thinking, Wiberg and Stolterman (2014) identify three existing approaches to working with ideas in design and characterizing designs: proof-of-concept designs, design guidelines and concept design. They map the three along with generic design on a two-dimensional matrix (see figure 2). The horizontal dimension shows how the approaches work with ideas in the design process. Are the ideas concretely represented in a design (left), or are they abstracted and provide scaffolding (right)? On the vertical dimension, we see whether the approaches deal with particular details and qualities of designs (top) or attempt to build general theories for working with designs (bottom). As such, generic design deals with particular points of reference but on an abstracted layer as scaffolding for design work. Wiberg and Stolterman summarize the main activity of generic design as one of grouping and describing, with the outcome or goal of the method being a characteristic design. Its purpose is to describe design and, if done correctly, it will define a design space.


Figure 2 · The matrix of design research (Wiberg and Stolterman, 2014, p. 7)

In this paper, I examine the group of educational software that goes by the name explorable explanations from a generic design perspective. This entails two main research activities: analysis of the qualities of 22 explorables and the design and development of three iterations of an explorable explanation of my own. In generic design terms, this means that I am grouping and describing 22 explorables, while seeking to articulate a characteristic design. The hope is that this will give other designers and researchers a better understanding of the design space that explorable explanations exist in.

Critiques

Before moving on to educational games theory, I want to address one particular issue with generic design. The approach relies on what Wiberg and Stolterman (2014) call "essential qualities". This implies that there is an essence that can differentiate one design class from others. However, Wiberg and Stolterman do a poor job of giving examples of what this could look like. If you ask Wittgenstein (1953), perhaps this is due to the fact that essential qualities are rarely identifiable. Wittgenstein instead talks about family resemblance. To illustrate his point, he claims that there is no single thing in common between all games, but that they instead share resemblances across the spectrum of their use. This does not mean that descriptions of such concepts are not useful. To Wittgenstein, it still makes sense to say that "these and things similar to it are called games" (Wittgenstein, 1953, §69). He simply points out that in certain cases, it is a futile effort to discover the essence of a concept. In this project, I have had a similar difficulty in trying to nail down the essential qualities of explorable explanations. Instead, I therefore seek to describe them in terms of their family resemblances. I do this by introducing a handful of examples in tandem with a textual description and a table showing their traits in 8 categories of analysis.

2.2 · Educational games theory

In this part of the paper, I want to introduce a couple of terms and theories that are useful when analysing and describing the examples of explorable explanations. They are similarly useful when it comes to describing and relating my own prototypes in relation to these examples.


I will first introduce the field of educational games, which explorable explanations seem to share features with. This serves to contextualize explorable explanations in the larger developments of games. Then I will describe the area of games research that deals with the relationship between games, toys, simulations and play; an area that comes in handy when comparing and juxtaposing various explorables. I will then introduce the MDA framework as a tool for describing the specific design of explorable explanations, including my own work. Finally, I will introduce an updated version of the classic taxonomy of the cognitive domain (Bloom et al., 1956), commonly referred to as Bloom's taxonomy, as an analytical tool for understanding what type of knowledge and what type of cognitive processes explorable explanations deal with.

Educational games

For various reasons, the field of educational games has seen a kind of fall from grace in the 2000's (Egenfeldt-Nielsen, 2007). The term is on the verge of being substituted with the larger concept of serious games, which includes games for a multitude of subjects, such as health, education, public policy and more (Games for Health Europe Foundation, 2018; Serious Games Interactive, 2018; World Food Programme, 2005). This does not mean that there is not room for educational games in the games industry anymore, but instead that they are now in cahoots with other games with purposes that extend beyond providing entertainment or artistic expression. This increased scope for educational games makes sense when seen in relation to the growth of explorable explanations as a genre. As we will see in the analysis, explorables deal with a multitude of subject matters, and do not limit themselves to traditional educational environments such as schools and colleges. Instead, they seem to embrace a notion of lifelong learning.

As for how educational games are theoretically grounded, Egenfeldt-Nielsen (2007) states that educational games often draw on the experiential approach to learning. One example is the one formulated by David Kolb (1984). The approach draws connections between concrete experience, reflection, concepts and application, and sees it as a learning cycle. The focus on the concrete experience as part of the cycle lends itself well to educational games, which are aptly suited for providing this experience. The challenge is then to introduce reflection, conceptualization and application as part of the game. Similar challenges and opportunities are seen in explorable explanations.

Games research

Here, I will introduce some theory on the notions of play and games and similar concepts. This will aid me in the analysis of explorable explanations.

Play, games, toys, and simulations

Homo Ludens, first introduced by Huizinga (1938), covers the understanding of humans as inherently playful creatures. In an essay attempting to relate this concept to interaction design, Bill Gaver writes: "Play is not just mindless entertainment, but an essential way of engaging with and learning about our world and ourselves" (Gaver, 2002, p. 1). This notion of play as an engaging activity that enables us to learn about the world is one that resonates with educational games and explorable explanations. I wish to build a bit upon this notion with Salen and Zimmerman (2003). They construct a definition of play that is easier to work with: "Play is free movement within a more rigid structure" (Salen and Zimmerman, 2003, p. 304). This definition of play is accompanied by three categories of play: game play, ludic activity and being playful. Game play constitutes the type of play that occurs when players follow the rules of the game, such as chess. A ludic activity refers to the non-game behaviors that are still described as playing, such as playing house. Finally, being playful is a more general category that encompasses bringing a playful state of mind into other activities, such as using playful slang.

In relation to play, Salen and Zimmerman go on to define a game as "... a system in which players engage in an artificial conflict, defined by rules, that results in a quantifiable outcome" (Salen and Zimmerman, 2003, chapter 7, p. 11). There are five keywords in this description: system, players, artificial conflict, rules and quantifiable outcome. System refers to a range of elements that interact or interrelate to form a complex whole. Players means that one or more users are actively engaging in playful behavior with the game. Artificial conflict establishes games as being about a contest with other players, the game itself or something else. The outcome of the contest is separate from real life, e.g. games do not lead to actual death, but in-game death. Fourthly, games are defined by rules. These rules are particularly important when distinguishing between the three different categories of play. Finally, games result in a quantifiable outcome. That is, games have winners and losers, or at least give players a score to show how well they did.

Then there are also toys and how they relate to play and games. One useful description of toys is the one that Sicart gives: "Toys facilitate appropriation: they create an opening in the constitution of a particular situation that justifies the activity of play" (Sicart, 2014, p. 36). To Sicart, play is an appropriative activity, and toys enable this activity. I wish to twist this description a bit to match the notion of play that I have introduced with Salen and Zimmerman (2003) and Gaver (2002). In this way, toys are facilitators of play, and thus facilitators of a free movement within a rigid structure of engaging with and learning about the world and ourselves.

Finally, we have simulations. In many ways, they are like games, but according to Egenfeldt-Nielsen (2007) they are missing the key elements of artificial conflict and the player's active engagement. If these are missing, the game seems to become a simulation instead.

I could keep working with these definitions, provide contrasting views, or even go into the discussion of games without goals (Juul, 2007). However, for this project, these definitions are adequate, because they now give me three conceptual boxes in which to sort various explorable explanations: the game box, the toy box and the simulation box. Furthermore, I can build on top of these boxes with the notion of play.

Mechanics, dynamics and aesthetics

The MDA framework is a common and practical approach to discussing the relationship between a game's system, the game designer and the player (Hunicke, LeBlanc and Zubek, 2004). It works as both a game design and game analysis tool. The framework consists of three parts: Mechanics, Dynamics and Aesthetics. Aesthetics ultimately describes the experience a player has when playing the game. The aesthetics can be described in terms such as expressive, sensational, challenging, and more. Dynamics then refer to aspects of a game that create aesthetics. In a sense, it is where the player meets the game, as in dynamics are what happens when a player engages with the systems of a game. Expressive dynamics, for instance, may include systems that allow the player to leave their mark and create their own things. Finally, mechanics refer to the actions, behaviors and control mechanisms that the player is afforded by the game. They are usually described as verbs. Examples of this include walking, jumping or eating.

For this project, the MDA framework will be put to use in describing explorable explanations in specific terms. Where relevant, the mechanics, dynamics and aesthetics of the analyzed explorables will be described so that they can be compared. It is also used in elements of the design process description.

Intro to relevant educational theory

Since explorable explanations are attempting to teach players something, I have found it useful to introduce some educational theory in order to build an understanding of what types of knowledge and at what level they teach. This entails two aspects of the taxonomy of the cognitive domain: the cognitive process dimension, and the knowledge dimension. Specifically, these will be invoked in the analysis of explorable explanations as a comparative tool.

Despite being released over 60 years ago, the taxonomy of the cognitive domain (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) has remained relevant in educational research with only a few updates. In 2001, Anderson et al. (2001) published an updated version, which makes a few minor changes to the original. The changes were made partly to update the taxonomy to the developments that have happened in society since 1956, and partly to bring the contents of the old version into focus in the new millennium. In this paper, I will rely on the 2001 version.

The cognitive process dimension

The cognitive process dimension is comprised of six different levels of reasoning that a student can achieve. It describes what cognitive processes are involved in a student's learning. Moving up in the taxonomy, the levels become increasingly abstract and build upon the preceding steps. As such, in order to reach level three, one must first arrive at a basic skill level in level two.

Anderson et al. (2001) changed the naming for the levels of the taxonomy from "Knowledge, comprehension, application, analysis, synthesis and evaluation" to "Remember, understand, apply, analyze, evaluate and create". Notice that the names changed to verbs and evaluation was moved down by one level, while "Create" was added to the mix. Some of the subcategories were also changed, but I will not go into that much depth here. Figure 3 shows an overview of the levels.


Remember: Retrieve relevant knowledge from long-term memory
Understand: Construct meaning from instructional messages
Apply: Carry out or use a procedure in a given situation
Analyze: Break material into its constituent parts and relate them internally and externally
Evaluate: Make judgments based on criteria and standards
Create: Put elements together to form a coherent or functional whole; reorganize elements into a new pattern or structure

Figure 3 · Overview of the cognitive process dimension, adapted from Anderson et al. (2001, p. 3)

The first level, remember, represents the ability to retrieve relevant knowledge from long-term memory. This can also be thought of as knowing the facts. Secondly, understand refers to constructing meaning from instructional messages. In practice, this can for instance mean paraphrasing statements, categorizing subjects, contrasting ideas, or constructing cause-and-effect models. Apply, which is on the third level, refers to carrying out or using a procedure in a given situation, be it familiar or unfamiliar. Examples of this include multiplying two numbers, or using Ohm's law to calculate resistance in a circuit. On the fourth level, we find analyze, which describes the process of breaking material into its constituent parts and determining how these parts relate to one another and to the overall structure. This could, for instance, happen through deconstruction, distinction, selection, or parsing. With evaluate, Anderson et al. (2001) mean making judgments based on criteria and standards. This means to determine and detect the effectiveness or inconsistencies in a product or process in regards to both internal and external criteria. Finally, create, at the top of the taxonomy, is a student's ability to put elements together in a coherent, functional, and novel structure. To demonstrate this, a student could come up with alternative hypotheses based on an observed phenomenon. She could design a procedure for dealing with a new problem. Or perhaps she could produce a new product through the knowledge gained from the learned subject material.

The knowledge dimension

The other dimension of import from the taxonomy is the knowledge dimension. This describes what type of knowledge is involved in the learning process. It includes four levels of knowledge: factual, conceptual, procedural and metacognitive. The factual level is almost self-explanatory and includes the basic elements a student must know to be acquainted with a discipline. Conceptual knowledge refers to knowledge about the interrelationships between the factual knowledge within a discipline, as well as knowledge of the basic elements that enable the factual to work together in a structure. Thirdly, procedural knowledge includes the ability to perform skills, algorithms, techniques and methods, but goes a bit beyond that, as it also includes knowledge of the criteria for using these various procedures. Finally, the metacognitive knowledge can be described as knowledge about knowledge. It refers to being aware of one's own level of knowledge, what is expected of one's knowledge level by others and knowledge of different procedures for learning.

2.3 · What's next?

Now that I have established a toolbox for working with and analyzing educational software, I will move on to the analysis of various explorable explanations.

3 · ANALYSIS

In this section, I will first talk about how explorable explanations are currently defined by various actors in the field, and give a brief history of the genre. This includes a shallow review of related genres. Then I will move on to an analysis of 22 examples of explorable explanations on the basis of 8 analysed qualities. I will not describe all 22 in depth, but have instead picked two examples that exemplify the most common explorable explanations. Furthermore, in order to encapsulate the variety that exists within the genre, I am also briefly introducing seven other examples that exhibit deviating characteristics in one or more qualities.

3.1 · Explorable explanations

In order to talk about the current state of explorable explanations as a term, I want to do something similar to what Salen and Zimmerman (2004) did when they defined games, albeit in a slightly more limited form. I am going to introduce a couple of existing definitions of explorable explanations and then compare them.

Explorable explanations as a term seems to have its origin in a blog post by Bret Victor (2011) of the same name. It is described like this:

"Explorable Explanations is my umbrella project for ideas that enable and encourage truly active reading. The goal is to change people's relationship with text. People currently think of text as information to be consumed. I want text to be used as an environment to think in." (Victor, 2011)

Others have since taken up the challenge of working with explorable explanations, and they have found a common home on the page Explorable Explanations (Case, 2017). On that page's FAQ, the definition changes to this:

"In short, by "explorable explanation" we mean something that 1) teaches something, and 2) is more interactive than a boring ol' quiz with only one right answer." (Case, 2017)

Third, we have Belgian data journalist Maarten Lambrechts' definition, provided off-the-cuff in an interview with datawrapper.de:

"Explorable explanations (you could also call them dynamic texts or dynamic documents) are documents users can interact with. They educate people not by just combining text and static graphics, but by integrating interactives. So people can really play with what they’re learning; with what they’re seeing. People can learn something without realizing they are learning something." (Lambrechts, 2018)

In these definitions, there seems to be only one main thread: interactivity. The included part of Victor's definition does not mention interactivity specifically, but from looking at the surrounding description in the blog post, it is apparent that interactive elements are part of what he considers explorable explanations. However, what Victor means by text is not necessarily clear. Interestingly, he also talks of active reading, as if reading is a necessity for explorables. This would seem to put it more in the realm of interactive narratives, except within non-fiction.

Then, in Case's definition interaction is included as one of only two criteria, but he is not very specific about what kind of interaction. Whether it is social interaction, human-computer interaction, or something else remains unclear. Case's definition also relies on the unspecific word "something". Looking at the examples on his page, it seems we are dealing with digital artefacts, but this definition does not wish to make this clear. It is not even clear whether we are talking about artefacts at all. In fact, the definition is broad enough that it might even include a birch tree, since I cannot confidently state that birch trees are not things that teach something by being more interactive than a boring ol' quiz.

Figure 4 · Three explorable explanations? (Allen, 2011)

Finally, in Lambrechts' definition, the term explorable explanations is described as interchangeable with dynamic documents or dynamic texts. Integral to his definition is also the inclusion of interactivity. He goes as far as using the word "play" to describe how the users can engage with the document and its educational content. This playing with the content seems to mirror Victor's notion of explorables as "an environment to think in" (Victor, 2011). However, Lambrechts also seems to feel that explorable explanations are closer to stealth learning games, such as Machineers (Lode et al., 2013), than to overt educational games, when he describes them as teaching without the player knowing that they are being taught.


In seeking to build my own definition, I have chosen to disregard essential qualities, and instead work with family resemblance. In practice, this means that I will not attempt to provide anything like a definition before I have presented the analysis of the examples of explorable explanations. When dealing with family resemblance, it is important to first present the family.

The categories of analysis

In order to analyze the various examples of explorables, I have established 8 categories for comparison. They are subject, time required to play, amount of visuals compared to text, amount of interaction, highest level of cognitive process, highest level of knowledge, price and platform. All of the 22 explorables were sourced by looking at the Explorable Explanations site (Case, 2017). There are more games on the site than 22, but in order to maintain the scope, I only looked at 22. It should be noted that more examples of explorable explanations exist outside those found on this site. It may seem then that an external categorization is pressed down upon my own analysis. However, Case's site is frequently referenced by others (Lambrechts, 2018; Goldstein, 2015), is open for outside additions through its open-sourced code on Github (explorableexplanations, 2018) and features plenty of examples not authored by Case himself. Therefore it still seems a fitting place to source my examples. Before moving on to the analysis, however, I will briefly explain the basis behind choosing the analysis categories.

Subject refers to what field of knowledge the explorable teaches. It was chosen as a category in order to understand whether explorables are limited to science-based subject matters, which initially seemed to be the case.

Time required to play says something about the scale of explorable explanations, which can help in comparing them to educational games. Some of the explorables are not easily defined in this category, as it is up to the player how long they want to keep playing. Therefore, I have generally looked at how long it minimally takes to experience all of the content the explorable contains.

As we saw in the given definitions of explorable explanations, particularly in Victor's, the required relationship between text and visuals is not necessarily obvious. For this reason, this criterion was also included. Amount of text and amount of visuals were each given a number between 0 and 100 based on how large a percentage of the explorable they constituted. This means that the sum of both numbers would always result in 100. Visuals include graphics, images, videos, animations and similar. Text means words and textual characters. It was a qualitative evaluation, by which I mean that I did not set up strict criteria for how many words, characters, pixels or similar were used in the explorable to find the percentage.

Fourthly, the amount of interaction was similarly qualitatively evaluated on a scale from 0 to 100. In general, it refers to how large a percentage of the explorable is interactive. By interactive, I mean that the content can be affected by a user's input in a way that is not just natively a part of the platform, e.g. highlighting text, or saving images. The exact meaning of this will hopefully become clearer when seeing the examples.

I also looked at the cognitive dimension and the knowledge dimension of the explorables. I was mostly concerned with the highest level an explorable reached on each of the dimensions. Through my own qualitative judgment, this was given a number from 1-6 and 1-4 respectively, thereby matching the taxonomy introduced by Anderson et al. (2001).

Finally, I looked at the distribution model for the explorable on two accounts: price and platform. By price is meant how much it costs to experience the explorable. Since none of the examples included advanced pricing models like free-to-play, subscription-based or similar, this was fairly straightforward. By platform, I mean what delivery method is used to experience the game. In the analysis, only two general methods appeared: directly in the browser, or as downloadable software for iOS, PC, Mac or Linux.

The analysis does not include a category for game, toy or simulation distinctions. This is due to the fact that certain elements of many explorables can be considered as belonging to one of these, while other elements belong more to another. For this reason, these distinctions will be referred to in the expanded descriptions of the analysis, but not in the analysis overviews. A spreadsheet with the analysis of all 22 games can be found in appendix 1.

General examples

First, I want to introduce two examples that constitute what I call general examples of explorable explanations. They exhibit traits that are shared by many explorable explanations, and do not show major deviation in any of the analysis categories. The two explorables are Simulating the World In Emojis (Case, 2016) and Introduction to A* (Red Blob Games, 2014).

Simulating the World In Emojis

Subject: Miscellaneous · Time: 5-10 min · Text/visuals ratio: 50/50 · Interactivity: 65
Highest cognitive level: 6 · Highest knowledge level: 3 · Price: Free · Platform: Browser

In a sense, Simulating the World In Emojis (Case, 2016) is one of Nicky Case's most ambitious explorables. Its grand aim is to teach the player how to think in systems. It does this by presenting the user with a series of simulators, initially revolving around the system behind forest fires. Each simulator contains a grid of various sizes with space for an emoji on each individual square. The behavior of the emojis can be tweaked by the user, e.g. so that they appear or disappear more often or turn into other emojis depending on various other rules that are similarly tweakable. The player can also choose to add specific emojis by clicking the simulator, which are then incorporated into the running simulation. The simulators are interspersed with descriptive and narrative text, and occasionally a simple quiz appears that asks the player what they think the behavior of a complex system will result in. Case has prepared the simulators with specific behaviors and rules to build his point, and has also initially left out some features in order to not overwhelm the player. Finally, the explorable features a "sandbox" that includes all features of the simulator that have been introduced, with the added possibility of sharing one's custom-built simulator.
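
To give a rough, illustrative sense of how such a rule-driven emoji simulator can work under the hood, the sketch below implements a forest-fire style grid step in JavaScript. It is not Case's actual code; the cell symbols, rule structure and probabilities are assumptions made purely for illustration.

const TREE = "🌲", FIRE = "🔥", EMPTY = "⬜";

// One step of a forest-fire style simulation: burning cells burn out,
// fire spreads to neighbouring trees, and empty ground occasionally regrows.
function step(grid, growChance = 0.01, lightningChance = 0.001) {
  const next = grid.map(row => row.slice());
  for (let y = 0; y < grid.length; y++) {
    for (let x = 0; x < grid[y].length; x++) {
      const cell = grid[y][x];
      if (cell === FIRE) {
        next[y][x] = EMPTY;
      } else if (cell === TREE) {
        const burningNeighbour = [[0, 1], [0, -1], [1, 0], [-1, 0]]
          .some(([dx, dy]) => grid[y + dy]?.[x + dx] === FIRE);
        if (burningNeighbour || Math.random() < lightningChance) next[y][x] = FIRE;
      } else if (Math.random() < growChance) {
        next[y][x] = TREE;
      }
    }
  }
  return next;
}

// Usage: a 10x10 forest with one fire in the middle, stepped 20 times.
let grid = Array.from({ length: 10 }, () => Array(10).fill(TREE));
grid[5][5] = FIRE;
for (let i = 0; i < 20; i++) grid = step(grid);
console.log(grid.map(row => row.join("")).join("\n"));

Tweaking values like growChance and lightningChance mirrors the kind of rule adjustment the explorable exposes to the player through its interface.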

The even split between textual and visual content, the majority of the content being interactive, and its free, browser-based distribution model are what make Simulating the World (Case, 2016) a general example of an explorable explanation. It is also an excellent example of an explorable, because it enables teaching the player on a high cognitive level by allowing the user to create their own simulations, and share them with others. In the text, the explorable also acknowledges and talks about how the subject matter is fairly new and that different theories show different results. This can be seen as a type of level three knowledge in that it deals with criteria for using different procedures. Simulating the World initially introduces the concept of systems thinking by talking about the theme of forest fires, but it manages to build on this by relating to other complex systems, potentially sparking the player's interest in taking their knowledge further outside of the explorable. When it comes to its relation to toys, simulations and games, the explorable can be described, in Salen and Zimmerman terms, as being in-between ludic activity and game. Case sets up some rules that the player is expected to follow, but they are free not to and just play around with the structures set up for them in the simulators. There is no quantifiable outcome outside of the quizzes, and these play such a small part that they are more dressing than salad. In fact, the simulations are seen to be nothing more than dynamics enabled by the mechanics of the explorable. They are barely simulations at all, because they engage the player actively.

Introduction to A*

Subject: Computing · Time: 10-30 min · Text/visuals ratio: 35/65 · Interactivity: 70
Highest cognitive level: 5 · Highest knowledge level: 3 · Price: Free · Platform: Browser

Introduction to A* (Red Blob Games, 2014) is an interactive primer on the world of path-finding algorithms. It introduces three different algorithms that have certain pros and cons. It also describes some principles that underlie the algorithms, such as early exit, movement costs and frontiers. The explorable explanation features many different maps with different implementations of the various algorithms. The target group is game programmers, so the algorithms are also shown in Python code. Basically, the explorable shows how a game system can find paths from a starting point to a goal. So Red Blob Games has made sure that the user can move the goals and starting points at any time to explore how the algorithms work under different circumstances. The player is even able to add and remove walls in some of the maps.
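
To make the frontier and early-exit ideas mentioned above a little more concrete, the following sketch shows a plain breadth-first search over a grid in JavaScript. It is not Red Blob Games' code (the explorable itself uses Python), and the neighbours function and coordinate format are assumptions made only for illustration.

function key(p) { return p.x + "," + p.y; }

// Breadth-first search: grow a frontier outwards from the start and stop
// early once the goal has been reached, remembering where each square
// was reached from so a path can be reconstructed afterwards.
function breadthFirstSearch(start, goal, neighbours) {
  const frontier = [start];
  const cameFrom = new Map([[key(start), null]]);
  while (frontier.length > 0) {
    const current = frontier.shift();
    if (key(current) === key(goal)) break; // early exit
    for (const next of neighbours(current)) {
      if (!cameFrom.has(key(next))) {
        cameFrom.set(key(next), current);
        frontier.push(next); // expand the frontier
      }
    }
  }
  return cameFrom;
}

// Usage on a small open grid where every square has up to four neighbours.
const inBounds = p => p.x >= 0 && p.x < 10 && p.y >= 0 && p.y < 10;
const neighbours = p => [
  { x: p.x + 1, y: p.y }, { x: p.x - 1, y: p.y },
  { x: p.x, y: p.y + 1 }, { x: p.x, y: p.y - 1 },
].filter(inBounds);
const cameFrom = breadthFirstSearch({ x: 0, y: 0 }, { x: 7, y: 4 }, neighbours);
console.log(cameFrom.get("7,4")); // the square from which the goal was reached

Dijkstra's algorithm and A* follow the same frontier structure, but order the frontier by movement cost and by cost plus a heuristic, respectively, which is what the explorable lets the player compare visually.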

Introduction to A* (Red Blob Games, 2014) is somewhat deviant in regards to the cognitive process dimension, because most analyzed explorables exist on the second level, not the fourth. So what actually makes Introduction to A* a general example of an explorable, like Simulating the World, is that it features many interactive visuals in a relatively short timespan in a free browser-based experience. Introduction to A*, like Simulating the World (Case, 2016), does a good job at introducing a variety of procedures and the criteria for when to use them, while acknowledging its own limits as an explorable of 10-30 minutes.

Notable examples

Hopefully, these two examples give a better understanding of what an explorable explanation looks and feels like in most of the examined cases. However, in the spirit of family resemblance, I believe it is important to highlight some of the edge cases of explorables. I will introduce seven of these.


4D Toys

Subject: Math · Time: 5-10 min · Text/visuals ratio: 35/65 · Interactivity: 80
Highest cognitive level: 3 · Highest knowledge level: 1 · Price: 15€ · Platform: iPad / Steam

4D Toys (ten Bosch, 2017) is different from most explorables in that it is neither free nor browser-based. 4D Toys is split into two parts: an explorable explanation and a set of toys. The explorable explanation segment takes the player through how the fourth dimension works as a concept. This way, the player will hopefully have a better chance of understanding what is happening and why when playing with the toys. The toys on the other hand do not feature any explanation, nor any quantified outcome, or artificial conflict. They exist within a rigid structure, but by reacting to their environment in unexpected ways, they facilitate play and experimentation.

Pink trombone

Subject: Biology, linguistics · Time: 2-5 min · Text/visuals ratio: 5/95 · Interactivity: 95
Highest cognitive level: 6 · Highest knowledge level: 1 · Price: Free · Platform: Browser

Pink trombone (Thapen, 2017) is, like 4D Toys (ten Bosch, 2017), a toy-like artefact. It shows the player a model of the human vocal organs. By clicking and moving the mouse around the areas of the model, sounds are generated and change to create various vowels and consonants, which are played out loud. The explorable barely features any text outside of credits or labels for the various parts of the model. The player is expected to figure out for themselves how to make sense of the model. It stands out as an explorable explanation through this specific lack of explanatory text; a quality which seemed to be inherent to explorables before Pink Trombone was classified as one.

Something Something Soup Something

Subject: Philosophy · Time: 5-10 min · Text/visuals ratio: 15/85 · Interactivity: 85
Highest cognitive level: 2 · Highest knowledge level: 2 · Price: Free · Platform: Browser


Something Something Soup Something (Gualeni, 2017) presents itself very much like a 3D point-and-click game. The explorable takes the player into the year 2078, where teleportation is being used to transport food, in this case soup, from alien planets to Earth. Due to translation issues resulting from communication that the player establishes with the alien planets, the "soup" that is teleported does not always look like one would expect. The player is tasked with identifying which teleported objects actually constitute soup so that they can be served to the guests. By doing this, the game hopes to raise questions on the nature of definitions and language. It is described as an interactive thought experiment inspired by Wittgenstein's philosophical work. It differs from most explorables by engaging the player in a 3D world and a narrative. The explorable itself does not explicitly describe its educational intentions, but instead leaves that to the surrounding meta-text. The question of whether the explorable is a game or not is even part of the educational content itself, because as the website says: "Is it even wise or productive to strive for a complete theoretical understanding of concepts like ‘soup’ or ‘game’?" (Gualeni, 2017).

Hooked

Subject: Psychology, journalism · Time: 5-10 min · Text/visuals ratio: 20/80 · Interactivity: 60
Highest cognitive level: 2 · Highest knowledge level: 2 · Price: Free · Platform: Browser

Hooked (Evershed et al., 2017) is a union between journalism and interactivity. It is perhaps what comes closest to Lambrechts' definition for explorable explanations as dynamic documents. Its goal is to educate the reader in how the design of slot machines works to build addiction in their players. It does this by introducing a handful of interactive elements, such as a simple button that the user is asked to click as much as they want. It peppers this with video interviews with a former addict and a researcher within the field of gambling. It also includes some non-interactive graphics. All in all, it is a very multimedia experience, and even includes certain game-like elements in that the user is asked to play with simulated slot machines.

The Monty Hall Problem

Subject: Math · Time: 1-2 min · Text/visuals ratio: 80/20 · Interactivity: 15
Highest cognitive level: 2 · Highest knowledge level: 1 · Price: Free · Platform: Browser

The Monty Hall Problem (Powell, 2014) is by far the shortest explorable that I have examined. It also has a high degree of text compared to most other examples. It does not take long to read and experience The Monty Hall Problem, and it features very little interaction. The main thing you can do is to run a simulation of the infamous Monty Hall problem, and then later tweak a few variables of that simulation. Despite this, the explorable does a good job of allowing the user to explore and examine the problem by themselves with only these few variables.
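
For readers unfamiliar with the problem, the kind of simulation the explorable runs can be sketched in a few lines of JavaScript. This is an illustrative reconstruction, not Powell's actual code:

function playOnce(switchDoor) {
  const prize = Math.floor(Math.random() * 3);
  const pick = Math.floor(Math.random() * 3);
  // The host opens a door that is neither the player's pick nor the prize.
  const host = [0, 1, 2].find(d => d !== pick && d !== prize);
  const finalPick = switchDoor ? [0, 1, 2].find(d => d !== pick && d !== host) : pick;
  return finalPick === prize;
}

function winRate(switchDoor, rounds = 100000) {
  let wins = 0;
  for (let i = 0; i < rounds; i++) if (playOnce(switchDoor)) wins++;
  return wins / rounds;
}

console.log("stay:", winRate(false));   // approaches 1/3
console.log("switch:", winRate(true));  // approaches 2/3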

Talking with God

Subject: Philosophy · Time: 2-5 min · Text/visuals ratio: 100/0 · Interactivity: 10
Highest cognitive level: 2 · Highest knowledge level: 2 · Price: Free · Platform: Browser

Talking With God (Stangroom, 2018) is the only examined explorable that contains no visuals. Its interaction is based solely on clicking default-styled buttons, and the rest is explained in text. The explorable works by putting the player in conversation with God, who asks the player what they think God is capable of and whether God exists. In the end, the player is given an analysis of their results accompanied by an explanation of whether the player's internal logic contains tensions. The inclusion of interactivity works mainly to react to the player's own thoughts on the existence of God. The explorable does not lend itself well to exploration, since the movement through the explorable is very linear, and since the results are only shown once all answers have been given.

Fake it to Make It

Subject: Media studies · Time: 30+ min · Text/visuals ratio: 70/30 · Interactivity: 70
Highest cognitive level: 5 · Highest knowledge level: 3 · Price: Free · Platform: Browser

Where The Monty Hall Problem (Powell, 2014) is by far the shortest explorable in the analysis, Fake It to Make It (Warner, 2017) is the longest. The explorable is about how fake news works. It is so long in fact that it includes a saving feature, so that the player can leave the game, and come back later. The saving feature is not only necessary because of the game's length, but also because it asks the player to make their own profile, newspaper and articles through an elaborate interface. Thus it is important that these decisions are saved for later retrieval. Furthermore, Fake It to Make It is definitely a game, and not a toy or simulation. The explorable asks the player early on to set a goal for what they want to buy with their fake news empire funds. This sets up a quantified outcome, an artificial conflict and player engagement.


So what are explorable explanations then?

With this analysis behind us, we get a clearer picture of what an explorable explanation is or can be. In general, they seem to be highly interactive, digital experiences with a mix of visual and textual content, leaning towards the visual side. They teach not only facts, but also concepts and procedures, as well as the relationships between said facts, concepts and procedures. Some explorables also teach the criteria for using the taught knowledge. They engage cognitive processes on a medium level, focusing mostly on the player's understanding of the subject matter. Explorable explanations are most often free and experienced through a browser. The majority of them take between 5 and 30 minutes, but they can be shorter or longer. It is all a matter of the scope of the thing to be taught. The subject matter is limited to neither the natural sciences nor the humanities. Explorables can exhibit traits of quantifiable outcome or artificial conflict, but in most cases will more easily fit in the ludic activity category. They may include a fictional narrative, but often do not. Simulations can be a tool for engaging with the material, but explorable explanations cannot be classified as such alone.

What's next?

Now that I have described explorable explanations, I will briefly present my methods for working with my own designs. After that I will take a look at what a design process for an explorable explanation could look like with these designs in focus.

4 · DESIGN METHODOLOGY

My design process has included four main activities: sketching and prototyping, related work research, topic research and evaluation. These activities have not occurred in a strict sequence; instead, I have moved between them seamlessly. To get a better understanding of the process, see figure 5.

Figure 5 · My design process, visualized


4.1 · Prototyping

Prototyping has been the primary method for designing this explorable. I follow Houde and Hill's (1997) notion of the term, where prototypes are tools for exploring the look-and-feel, role or implementation of a design concept, or all three. I used prototyping primarily as an explorative technique by sketching ideas by hand in words and drawings, but also through software sketching in environments like Processing and P5.js, as well as in languages like HTML, CSS and JavaScript. I have also prototyped to engage with and understand the subject matter. What I mean by this will be made clearer in the presentation of the first iteration.

4.2 · Evaluation

Evaluation has mainly taken place with two parties: two of my peer interaction design students, and my girlfriend. This has taken place as play testing sessions, where the players were first told that they could do nothing wrong, and that the main purpose was to understand how players experience the explorable. They were also asked to speak out loud what they were thinking as they went through it. The play testing was followed up with a brief unstructured interview about their experience. The main purpose of evaluation has been to understand usability issues with my designs, and not to discuss their merits as explorables or teaching aids. The sessions were recorded with the consent of the play testers. Evaluation has not played a large role in the project, and is therefore visualized as a smaller circle in figure 5. The consequences of my small amount of evaluation are reflected upon in section 6.

5 · DESIGN PROCESS

In this section, I will introduce the three iterations of my explorable explanation. The explorable attempted to teach the subject of neural networks according to a set of design goals, which will be stated in the next segment. I will describe each iteration in depth and in relation to the previously presented analysis. I will also summarize each iteration with an evaluation of the design. This section is concluded by a brief summary of the design process as a whole.

5.1 · The design goals

To guide the process of designing the explorable explanation, I defined the following design goals:

1. The design concept should be able to reasonably be classified as an explorable explanation
2. The design should be available through the browser
3. The design should make the basic functionality of a neural network understandable for people with no professional experience in programming or mathematics
4. The design should result in an experience that takes a maximum of 10-15 minutes to complete

These goals were not made explicit before the design process began, but were instead reframed throughout the process via a research diary that I kept for the duration of the project.

5.2 · Neural networks

Before moving on to the description of my designs, it helps to understand what artificial neural networks are, and why they matter as a subject. I will attempt a brief, simplified description of that here. This description will only focus on giving a basic understanding of the specific terms that appear in my designs. It will not suffice as an exhaustive description of neural networks.

What are neural networks?

Neural networks are a type of machine learning algorithm. Machine learning is a field of computer science that attempts to enable computers to learn tasks without being explicitly programmed to know what these tasks are. Neural networks construct outputs by taking one or more inputs, formatted as numbers, and multiplying them by another set of numbers, called weights. Neural networks' main goal is to find the correct weights to multiply by. When it is said that neural networks "learn", what is meant is that they move closer to the correct weights. This happens by subjecting the network to an algorithm which adds or subtracts from the weights by a slight amount based on how well the neural network performs in a test. After each weight adjustment, the neural network builds an output, before being subjected to the testing and weight adjustment process again. This process is repeated until a satisfactory output is acquired. This will usually involve millions of iterations, which, depending on the complexity of the neural network and the computer performing the calculations, could take hours, days or weeks.

To give an example of how neural networks work with real-life material, let us consider images. Images are made up of thousands of pixels, which, to a neural network, can be formatted as numbers indicating the amount of red, green and blue in each pixel. This would be the input. To make a neural network that can recognize images of handwritten numbers, one could feed in thousands of examples of such images accompanied by labels that indicate what each image shows. After multiplying all the inputs of each image with weights, the neural network would then test whether these weights resulted in the correct output by comparing with the labels. If it did not, it would tweak its weights and try all over again. This process then takes place hundreds, thousands, sometimes millions of times, until the neural network seems to have reached a good track record of identifying images.

So, in essence, a neural network works by trying to guess the correct weights through millions of iterations of trial-and-error based on given examples.
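
To make the multiply-test-adjust loop concrete, here is a minimal JavaScript sketch of a single "neuron" with two weights learning to reproduce a handful of labelled examples by trial and error. Real networks contain many neurons and typically use gradient-based algorithms; the example data, step size and iteration count here are arbitrary choices made for illustration.

// Labelled examples: the target happens to be 2*a + 1*b, so the
// "correct" weights the network should find are roughly [2, 1].
const examples = [
  { inputs: [0, 1], target: 1 },
  { inputs: [1, 0], target: 2 },
  { inputs: [1, 1], target: 3 },
];

// The output is simply the inputs multiplied by the weights and summed.
function output(weights, inputs) {
  return inputs.reduce((sum, x, i) => sum + x * weights[i], 0);
}

// The test: how far the outputs are from the labelled targets.
function error(weights) {
  return examples.reduce(
    (sum, ex) => sum + Math.abs(output(weights, ex.inputs) - ex.target), 0);
}

let weights = [Math.random(), Math.random()];
for (let i = 0; i < 100000; i++) {
  // Nudge one weight by a small random amount and keep the change
  // only if the network now performs better on the test.
  const candidate = weights.slice();
  candidate[Math.floor(Math.random() * candidate.length)] += (Math.random() - 0.5) * 0.1;
  if (error(candidate) < error(weights)) weights = candidate;
}
console.log(weights); // converges towards approximately [2, 1]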

Why neural networks make sense as a subject

Neural networks are notoriously obscure and difficult to understand for not just amateurs, but professionals as well. The main issue is that neural networks function by doing what computers do well, but humans do poorly, which is iterating on the same task millions of times. They often work with massive datasets, sometimes known as deep learning or deep neural networks, and furthermore do this on powerful computers capable of performing a mind-boggling amount of calculations per second. Theoretically, if a mathematician had infinite time to go over the calculations a neural network has done, she would be able to establish how the neural network came to its conclusion. Unfortunately, since no mathematician does have that time, neural networks have become famous for being black boxes (Card, 2017; Lewis and Monett, 2017; Wolchover, 2017). This is often accompanied by a description of "no one actually knows how they come to their conclusions" (ColdFusion, 2018), which is a bit of an overstatement. However, neural networks are slowly becoming a larger and larger part of our world, and each month, it seems, new neural networks are shown to do incredible things (Elias, 2018; Murgia, 2016; Welch, 2018). The complicated innards of neural networks combined with their increasing influence make it an apt topic for an explorable explanation. Furthermore, for this project, it makes sense for me. It is a topic that has held my interest for a long time, and since I am already a programmer, I might have an easier time understanding neural networks than I would a subject matter outside of computer science.

5.3 · The three iterations

At this point, I want to move on to a description of the three iterations of my explorable explanation. Each of them had different individual goals. They represent three different ways of approaching an explorable explanation, and while some are of higher fidelity than others, they will not be subjected to a normative evaluation. Instead, they will be evaluated on their own merits and faults.

Iteration 1 · The Visualized Network

The first iteration had the primary goal of helping me understand how neural networks work and how they can be disseminated. At this point in the process, I was still learning, and prototyping became a tool in that learning process. I have named it The Visualized Network, because it mainly took the shape of the classical representation of a neural network as a collection of circles with lines connecting them; the lines, however, were missing in my version. The Visualized Network did not feature any explanatory text. Its main interaction happened via sliders that could be moved with the mouse to effect changes in the weights and inputs of the neural network. The first version attempted to make the inputs more easily understandable for the user by manifesting them as red, green and blue values whose saturation would increase or decrease along with the values of the input sliders as set by the player. This was abandoned in order to focus on my own learning, before I would eventually move on to a more concerted effort to focus on the user experience in my design.

Figure 6 · Final version of The Visualized Network

The Visualized Network can be seen as a simple version of Carter and Smilkov's Neural Network Playground (2018), or Crowe's NeuroVis (2018). These represent a common way of illustrating neural networks interactively. The main difference is that, in The Visualized Network, the player takes the role of the neural network, because the weights are not automatically adjusted through a learning process. In general, these types of explorables are closer to simulations, and rarely work with data that makes sense to a non-programmer or non-mathematician.

Evaluating the iteration

This iteration was never meant to be the final explorable: before creating it, I had already decided that this approach was not appropriate for teaching neural networks to non-programmers and non-mathematicians, because of its separation from easily understandable data. Its only purpose was to aid my understanding, and in combination with topic research, talks with a neural network expert, and a study of related work, it did end up boosting my understanding of the field.

Iteration 2 · The World's Dumbest Dog

The second iteration revolved around the story of the world's dumbest dog, Neura, not being able to walk straight down a path. The player was then given the task of teaching the dog how to do this, through a series of small tasks.

In the first version of The World's Dumbest Dog, Neura followed a simple set of rules, inspired by the movement of the car in Bret Victor's Up and Down the Ladder of Abstraction (2011). She would walk straight ahead, unless she detected that she was either left or right of the path. If she was left of it, she would make a corrective turn to the right, and vice versa. The first mechanic to be introduced was that the player could adjust the angle of this corrective turn. Through simulated machine learning algorithms, the user would then help Neura find the right angle on her own.
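
For clarity, the rule can be restated as a small sketch. This is not the prototype's code, which ran in the browser; it is a hypothetical restatement of the behavior described above, and all names and numbers in it are illustrative.

```python
# Not the prototype's code: a hypothetical restatement of Neura's rule.
import math

def step(x, heading, turn_angle):
    """Advance Neura one step: walk forward, then make a corrective turn
    if she has drifted to the left or right of the path (centred at x = 0)."""
    x += math.sin(math.radians(heading))   # sideways drift caused by her heading
    if x < 0:                              # left of the path: corrective turn right
        heading += turn_angle
    elif x > 0:                            # right of the path: corrective turn left
        heading -= turn_angle
    return x, heading

# The player (or, later, the simulated learning algorithm) adjusts turn_angle
# and observes whether Neura manages to stay close to the path.
x, heading = 0.5, 0.0
for _ in range(20):
    x, heading = step(x, heading, turn_angle=5.0)
    print(round(x, 2))
```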

Figure 7 · The World's Dumbest Dog

Once I had built the section of the explorable that introduced the self-learning capabilities, I realized that this set of rules was too complicated for a player to easily make sense of. I came to this realization by noting that the behavior of the car in Victor's Ladder of Abstraction (2011) is chosen precisely because it is complicated, making it a good case for why abstracting a problem can help make it clearer. At this point, I remembered a video tutorial on neural networks by Daniel Shiffman (2017), which included a way of visualizing neural networks quite similar to the visual I had built for this prototype. So I changed the rules so that Neura would instead pick a random direction and follow it. The challenge of the explorable would then be to help her find the right direction. In the end, I did not see this design through, and The World's Dumbest Dog was left incomplete when I moved on to the third iteration.

Evaluating the iteration

I abandoned this prototype because I realized that it did not do a good job of relating to what neural networks actually do. Instead, it was trying to teach the math behind neural networks visually, and it did not do a very good job of that either, because the math it showed was highly simplified. Still, this prototype represented a step towards a concept that was more like existing explorable explanations: it included more explanatory text, was more interactive, and the length of the experience was increased. Its major fault was its misguided use of metaphor and mechanic.

Iteration 3 · A Tale of 70.000 Numbers

For the third iteration, I took a look at what neural network tutorials usually do to explain the concept. One of the most common approaches is to use the MNIST dataset (LeCun et al., 1998), a set of 70.000 images of handwritten numbers for training and testing a neural network. It seems to be a common choice for tutorials for several reasons. First, this dataset is often used for benchmarking neural network implementations. Secondly, it is free and openly available. Thirdly, the data in it is easily understood by everyone as handwritten numbers.

A Tale of 70.000 Numbers was thus built around the story of a woman who wrote down numbers by hand every day of her life until her death. When she died, she wanted her numbers to be distributed among her 10 children according to which number each was: the first child should get the zeroes, the second child the ones, and so on. The player is tasked with sorting these 70.000 numbers. At first, the explorable tells the player to do it by hand through a drag-and-drop mechanic. However, through a chat on the right side of the screen, a helper named J suggests getting a machine to do it. The player is then taken slowly through the explorable as features are introduced to this machine, which is called Augusta 1800 after Augusta Ada Lovelace. The concept of a weight is introduced as a sort of "magic number" that neural networks use, and in the end, through a series of smaller tasks, Augusta is capable of training herself to find the weights. Behind the scenes, A Tale of 70.000 Numbers does not actually run a neural network, but simulates it by faking the behavior of a weight.
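
The simulation can be illustrated with a small sketch. This is not the code found in the Github repository; it is a hypothetical rendering of the idea described above, in which a secret "correct" weight is chosen in advance and the reported guesses simply become more accurate the closer the current weight gets to it.

```python
# Not the project's code: a hypothetical sketch of faking a weight's behavior.
import random

SECRET_WEIGHT = random.uniform(0.0, 1.0)   # predetermined when the explorable starts

def fake_guesses(current_weight, labels):
    """Pretend to sort the numbers: the closer the player's (or Augusta's) weight
    is to the secret one, the more of the known-correct labels are 'guessed'."""
    closeness = 1.0 - min(abs(current_weight - SECRET_WEIGHT), 1.0)
    guesses = []
    for correct_digit in labels:
        if random.random() < closeness:
            guesses.append(correct_digit)         # fake a correct guess
        else:
            guesses.append(random.randrange(10))  # fake a (usually) wrong guess
    return guesses

# The closer the supplied weight is to SECRET_WEIGHT, the better the guesses look,
# which is what the player experiences as "training" the machine.
print(fake_guesses(SECRET_WEIGHT, labels=[7, 2, 2, 0, 9]))        # all "correct"
print(fake_guesses(SECRET_WEIGHT - 1.0, labels=[7, 2, 2, 0, 9]))  # essentially random
```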

The ratio of explanatory text to visuals in A Tale is around the average for the analyzed examples of explorable explanations. It is also more interactive than the previous iterations, and even the text has become interactive, since the player can always choose between two options in the chat on the right side. It is considerably longer than the previous iterations, and it is the only iteration that features a complete experience.

Figure 8 · A Tale of 70.000 Numbers

Evaluating the iteration

A Tale of 70.000 Numbers is definitely the iteration to come out of this design process that is most representative of explorable explanations. It draws on a few features seen in other explorable explanations, but mixes them in a novel form. A Tale builds a narrative like Something Something Soup Something (Gualeni, 2017), but it does so directly in the browser rather than in an embedded 3D player. This iteration also introduces a sandbox mode at the end, similar to, though not nearly as extensive as, the one seen in Simulating The World In Emoji (Case, 2016). On top of this, the explorable has a quantifiable outcome and artificial conflict, which is not seen in many other explorables. However, there are still elements of A Tale of 70.000 Numbers that are not satisfactory.

First of all, the graphic style of the explorable does not do much to build the narrative or the point of the explorable; it is more or less just the default of the CSS styling library Bootstrap. Secondly, while the explorable is closer to what neural networks are actually capable of than any previous iteration, it still relies on a simulation of a neural network. Augusta is not actually capable of identifying numbers. In fact, the algorithm already knows what the correct numbers are, but simply makes fake guesses based on how close the current setting is to a randomly predetermined weight. For more on this, see the Github repository. The choice to keep the neural network a simulation was made in order to make it easier to implement and to control the experience of the explorable. In a future iteration, it might be a better idea to look into JavaScript-based neural networks that would actually perform the functions of an image recognition neural network inside the explorable explanation. The challenge would then be to control and disseminate what the weights are doing.

5.4 · A summary of the whole design process

In this section, I have shown how I have gone through a design process of building my own explorable explanation on the topic of neural networks. This has resulted in three iterations of prototypes with quite different approaches. In the next section, I will discuss and reflect on these iterations, the project as a whole and the potential future work in this area.

6 · REFLECTION & FUTURE WORK

There are two elements in this section: reflection and future work. The first part will reflect upon and discuss the method in this project. The second part will outline some suggestions for future work for both researchers and designers of explorable explanations, and briefly argue why this work could be meaningful.

6.1 · Reflection

A major change I could have made would have been to include educators in my design process. This is even one of Egenfeldt-Nielsen's (2007) recommendations when it comes to designing educational games. For this project, however, it was not possible for practical reasons. First of all, it was difficult to find time for it, but I was also attempting to explain neural networks to an audience that is commonly neglected in this field, which means that the availability of teachers at this level is limited. With that said, even an educator who did not know anything about neural networks could have given me useful knowledge about how to plan a lesson, as well as how to use Bloom's taxonomy in the analysis. Evaluation also did not play a large role in my design, which was likewise due to time constraints and scoping. However, if the explorable explanation should ever be released to the public as an explanation in its own right, and not as part of a larger research project with a more general purpose, more playtesting would be necessary. Playtesting would help both in finding gaps in the usability of the explorable and in finding inspiration for how to develop and design it further. To illustrate the inspirational potential of playtesting, I want to briefly describe an incident from one of my playtests.

An important question arose when I was playtesting the third iteration with my girlfriend. In the final section, the player is given a sandbox version of Augusta 1800, called Augusta 2000, that can train itself to sort the numbers. However, it is very difficult to test whether the machine has actually sorted them correctly, because the interface does not allow for easily looking through the piles of sorted numbers. This was not intentional, but a byproduct of having other elements to focus on first. The result was that my girlfriend became frustrated that she could not check whether Augusta was doing a good job. By accident, this feeling of having poor testing tools is quite reflective of the actual process of working with neural networks, where the millions of weights and data points result in an algorithm that is impossible to check manually. This connection is not, however, explicitly explained to the user. The question is then: should I make it easier to check up on the neural network, thus losing a connection to real implementations of neural networks? Or should I maintain the difficulty, hoping that someone will notice it and understand something profound about neural networks? The question is reminiscent of the role that obscure references can play in literature and film: references that might only be discovered by the avid reader or viewer. Albeit, in this case, there is the slight difference that explorable explanations are primarily about teaching facts, concepts, or procedures, and not vehicles of artistic expression. I have not found the answer to this question, but it is worth considering what role such metaphors can have, not just in other explorable explanations, but in interaction design projects as a whole.

6.2 · Future work

For designers

In this project, I have sought to do mainly two things:

1. Clarify what kind of thing the term explorable explanations refers to

2. Build and evaluate an explorable explanation of my own design in order to extract potential knowledge on the field as a whole

In chapter 3, I presented an analysis of 22 different explorable explanations. This analysis made clear that explorable explanations is a broad term, but that certain aspects seem to hold it together as a genre, such as the level of interactivity. For future designers of explorable explanations, this analysis can do a couple of things. It may provide inspiration for breaking new ground in the field by challenging the definition with an explorable that is not digital, or by designing an explorable that challenges what interactive can mean in this realm. There also seems to be a lack of metacognitive explorables, which is therefore an area ripe for exploration. Designers of other explorables may also find inspiration in knowing that explorables like Pink Trombone (Thapen, 2017) get along fine without much text. Or they may find solace in The Monty Hall Problem (Powell, 2014), which manages its scope so well that it only takes 1-2 minutes to play.

Following the analysis, I presented my own work on explorable explanations, which, for designers, may present some opportunities for reflection on the design process. It might be worth following my lead and using prototypes not just as exploratory tools, but also as a way to understand the subject matter. Engaging with the subject matter has, for me, been a continuous process, and I believe a similar approach could benefit others, unless the designer already considers themselves an expert on the subject of their explorable.

Beyond this, another takeaway is that designers should consider which metaphor is used to explain a concept. While the dog metaphor proved useful at the beginning of my process with Neura, it ended up showing its limits when it came to how neural networks are actually implemented.

The scope of the explorable should also be considered. Are you explaining a small mathematical problem, like The Monty Hall Problem (Powell, 2014), or a complex system with a large body of interrelated terminology, like neural networks? This is one of the aspects that my iterations did not handle gracefully, since they, in my own opinion, managed to give neither a high-level overview nor a highly accurate low-level understanding of neural networks.

Finally, designers of explorables should consider mechanics. Since working with explorable explanations is a multifaceted task, requiring not just design but also teaching and diving into a field of knowledge, it may be practical to limit an explorable to one or two main mechanics, so that less time has to be spent on designing and implementing them. This may even prove to be a helpful constraint on the sketching process, and produce interesting new designs and interactions.

For researchers

Back in 2007, Egenfeldt-Nielsen (2007) lamented that few studies had been done on the efficacy of educational games. I would argue that the same holds today for explorable explanations. Now that a first step has been taken in identifying the genre, the next step should be to examine whether explorables are actually capable of educating players. With Bad News, a game about fake news, Roozenbeek and van der Linden (2018) have attempted to do just that by including a survey as part of the experience. This is one way to approach the issue, but there are surely other, more qualitative ways of doing so.

Furthermore, the genre of explorable explanations deserves to be further contextualized and compared to similar genres, such as news games, data visualizations, interactive narratives and more. This way, the research community can more easily gauge whether explorable explanations can be seen as a novel approach to educational software.

7 · CONCLUSION

In the introduction, I noted that there are multitudes of explorable explanations covering a variety of subject matters, and that despite the sheer number of explorables that exist, descriptions of the genre are still lacking. With this paper, I have sought to contribute to a better understanding of this specific class of educational software. In general, explorable explanations seem to be free, browser-based experiences with a high level of interactivity and a more or less even mix of visuals and text, teaching facts, concepts and procedures at a medium-high level of cognitive processing. However, this general definition is of limited value if not seen in the context of the examples examined in this paper. Hopefully, the reader will leave this paper with a clearer understanding of explorable explanations. My elaboration of the genre has been fueled both by an analysis of explorables created by others and by a description of a design process in which I created my own explorable. My work has served to build an understanding of what explorable explanations are, but also of how one can design and work with them. I have reflected upon this process and outlined what other paths designers could take when creating their own explorable explanations. I have also discussed some issues with my own process and designs that can hopefully help others avoid my mistakes. The paper concluded by outlining a path for future work with explorable explanations.

ACKNOWLEDGEMENTS

Here at the end of my master's journey, I wish to thank the following:

Anne Lund Gillesberg for being the best play tester/girlfriend one could ask for.
Victor Bayer Permild, Nina Cecilie Højholdt and Tore Knudsen for being inspiring, helpful, and sometimes just sounding boards for my ramblings on neural networks.

Everyone at Circuit Circus for helping make dreams come true that I never knew I had.
My family for supporting me, despite not really understanding what it is that I do.
My peers at Malmö Universitet for always challenging the Danish point-of-view.
Simon Niedenthal for supervising me and the project, despite our schedules being quite incompatible.

REFERENCES

Allen, K., 2011. Silver birch trees, Fen Bog.

Anderson, L.W., Krathwohl, D.R., Airasian, P., Cruikshank, K., Mayer, R., Pintrich, P., Raths, J., Wittrock, M., 2001. A taxonomy for learning, teaching and assessing: A revision of Bloom's taxonomy. Longman, New York.

Artz, A.F., Armour-Thomas, E., 1992. Development of a cognitive-metacognitive framework for protocol analysis of mathematical problem solving in small groups. Cognition and Instruction 9, 137–175.

Bloom, B.S., Engelhart, M.D., Furst, E.J., Hill, W.H., Krathwohl, D.R., 1956. Taxonomy of educational objectives. Longmans, Green, New York.

Bogost, I., 2016. Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games, 1st ed. Basic Books.

Buxton, B., 2007. Sketching User Experiences: Getting the design right and the right design, First Edition. ed. Elsevier, Amsterdam.

Card, D., 2017. The “black box” metaphor in machine learning. Towards Data Science.

Carter, S., Smilkov, D., 2018. Neural Network Playground [WWW Document]. URL http://playground.tensorflow.org (accessed 4.29.18).

Case, N., 2017. Explorable Explanations [WWW Document]. URL http://explorabl.es/ (accessed 4.23.18).

Case, N., 2016. Simulating The World (In Emoji) [WWW Document]. URL http://ncase.me/simulating/ (accessed 5.15.18).

ColdFusion, 2018. Google Duplex A.I. - A Much Deeper Look!

Egenfeldt-Nielsen, S., 2007. Educational potential of computer games. Continuum.

Elias, J., 2018. Baidu's neural network can now clone your voice with just a snippet of audio [WWW Document]. PCMag India. URL http://in.pcmag.com/artificial-intelligence/119432/news/baidus-neural-network-can-clone-your-voice-with-just-a-snipp (accessed 5.16.18).

Evershed, N., Ball, A., Liu, R., Davey, M., Fanner, D., Wall, J., 2017. Hooked: how pokies are designed to be addictive [WWW Document]. The Guardian. URL http://www.theguardian.com/australia-news/datablog/ng-interactive/2017/sep/28/hooked-how-pokies-are-designed-to-be-addictive (accessed 5.16.18).

explorableexplanations, 2018. explorableexplanations.github.io: The Explorable Explanations Website.

Games for Health Europe Foundation, 2018. Games for Health Projects – seriousgaming.nl. Games for Health Projects.

Gaver, W., 2002. Designing for homo ludens. I3 Magazine 12, 2–6.

Goldstein, M., 2015. Exploring “Explorable Explanations.” Max Goldstein.

Granström, P., 2016. Many Tiny Things.

Gualeni, S., 2017. Something Something Soup Something [WWW Document]. URL http://soup.gua-le-ni.com/ (accessed 5.16.18).

He, S., Adar, E., 2017. VizItCards: A Card-Based Toolkit for Infovis Design Education. IEEE Transactions on Visualization and Computer Graphics 23, 561–570. https://doi.org/10.1109/TVCG.2016.2599338

Houde, S., Hill, C., 1997. What do prototypes prototype?, in: Handbook of Human-Computer Interaction (Second Edition). Elsevier, pp. 367–381.

Huizinga, J., 1938. Homo Ludens: Proeve eener bepaling van het spel-element der cultuur. Athenaeum Boekhandel Canon.

Juul, J., 2007. Without a goal: on open and expressive games. Videogame, player, text 191–203.

Kaltman, E., 2015. Exploring the Technical History of Games Through Software and Visualization., in: FDG.

Kolb, D.A., 2014. Experiential learning: Experience as the source of learning and development. FT press.

Lambrechts, M., 2018a. Explorable Explanations | The power of good old fashioned poking around [WWW Document]. Slides. URL https://slides.com/maartenzam/visber (accessed 5.18.18).

Lambrechts, M., 2018b. The reward of interacting is understanding.

LeCun, Y., Cortes, C., Burges, C., 1998. MNIST handwritten digit database [WWW Document]. URL http://yann.lecun.com/exdb/mnist/ (accessed 5.16.18).

Lewis, C., Monett, D., 2017. AI & Machine Learning Black Boxes: The Need for Transparency and Accountability. KDNuggets.

Lode, H., Franchi, G.E., Frederiksen, N.G., 2013. Machineers: playfully introducing programming to children, in: CHI’13 Extended Abstracts on Human Factors in Computing Systems. ACM, pp. 2639–2642.

Murgia, M., 2016. Google’s DeepMind AI makes history by defeating Go champion Lee Se-dol. The Telegraph.

Powell, V., 2014. The Monty Hall Problem [WWW Document]. URL http://blog.vctr.me/monty-hall/ (accessed 5.16.18).

Red Blob Games, 2014. Introduction to A* [WWW Document]. URL https://www.redblobgames.com/pathfinding/a-star/introduction.html (accessed 5.15.18).

Roozenbeek, J., van der Linden, S., 2018. The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research 1–11.

Salen, K., Zimmerman, E., 2004. Rules of play: Game design fundamentals. MIT press.

Serious Games Interactive, 2018. Serious Games Interactive.

Shiffman, D., 2017. 10.2: Neural Networks: Perceptron Part 1 - The Nature of Code, The Nature of Code. The Coding Train.

Sicart, M., 2014. Play matters. MIT Press.

Stangroom, J., 2018. Talking with God: The Euthyphro Dilemma [WWW Document]. URL http://www.philosophyexperiments.com/euthyphro/ (accessed 5.16.18).

ten Bosch, M., 2017. 4D Toys. An interactive toy for 4D children. [WWW Document]. URL http://4dtoys.com/ (accessed 5.16.18).

Thapen, N., 2017. Pink Trombone [WWW Document]. URL https://dood.al/pinktrombone/ (accessed 5.16.18).

Victor, B., 2011a. Explorable Explanations [WWW Document]. Bret Victor, beast of burden. URL http://worrydream.com/#!/ExplorableExplanations (accessed 4.23.18).

Victor, B., 2011b. Up and Down the Ladder of Abstraction [WWW Document]. URL http://worrydream.com/LadderOfAbstraction/ (accessed 4.29.18).

Warfield, J.N., 1990. A science of generic design. Intersystems Publ.

Warner, A., 2017. Fake It To Make It [WWW Document]. URL http://www.fakeittomakeitgame.com/ (accessed 5.16.18).

Welch, C., 2018. Google just gave a stunning demo of Assistant making an actual phone call - The Verge. The Verge.

Wiberg, M., Stolterman, E., 2014. What makes a prototype novel?: a knowledge contribution concern for interaction design research, in: Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational. ACM, pp. 531–540.

Wittgenstein, L., 1953. Philosophical investigations. John Wiley & Sons.

Wolchover, N., 2017. New Theory Cracks Open the Black Box of Deep Neural Networks. Wired.

World Food Programme, 2005. Food Force. World Food Programme.

Zimmerman, J., Stolterman, E., Forlizzi, J., 2010. An analysis and critique of Research through Design: towards a formalization of a research approach, in: Proceedings of the 8th ACM Conference on Designing Interactive Systems. ACM, pp. 310–319.

APPENDICES

Appendix 1 - Explorable Explanation Analysis - Sheet

Name | Subject | Time required to play (see legend) | Amount of explanatory text (in percentage) | Amount of visuals (in percentage) | Amount of interaction (in percentage of total content) | Highest Cognitive Process Dimension Level (1-6) | Price ($) | Highest Knowledge Dimension Level (1-4) | Platform
Parable of the Polygons | Social Science | 3 | 30 | 70 | 70 | 6 | 0 | 2 | Browser
Pink Trombone | Biology, linguistics | 1 | 5 | 95 | 95 | 6 | 0 | 1 | Browser
Something Something Soup Something | Philosophy | 2 | 15 | 85 | 85 | 2 | 0 | 2 | Browser
Seeing Circles, Sines & Signals | Math | 4 | 80 | 20 | 25 | 2 | 0 | 2 | Browser
4D Toys | Math | 2 | 35 | 65 | 80 | 3 | 15 | 1 | iPad / Steam
To Build a Better Ballot | Civics | 3 | 60 | 40 | 70 | 6 | 0 | 3 | Browser
District | Civics | 3 | 20 | 80 | 80 | 4 | 0 | 3 | Browser
Introduction to A* | Computing | 3 | 35 | 65 | 70 | 5 | 0 | 3 | Browser
A Slower Speed of Light | Physics | 2 | 15 | 85 | 70 | 2 | 0 | 2 | Downloaded
Fireflies | Biology | 1 | 30 | 70 | 75 | 4 | 0 | 2 | Browser
The Evolution of Trust | Social Science | 3 | 30 | 70 | 75 | 4 | 0 | 3 | Browser
Hooked: How slot machines are designed to be addictive | Psychology, journalism | 2 | 20 | 80 | 60 | 2 | 0 | 2 | Browser
Many Tiny Things | Physics | 2 | 30 | 70 | 80 | 2 | 0 | 2 | Browser
Back to the Future of Handwriting Recognition | Computing | 2 | 70 | 30 | 50 | 2 | 0 | 2 | Browser
Complexity Explorables: Flock'n Roll | Biology | 1 | 80 | 20 | 30 | 2 | 0 | 2 | Browser
Fake It To Make It | Media Studies | 4 | 70 | 30 | 70 | 5 | 0 | 3 | Browser
Moral Machine | Philosophy, Computing | 2 | 30 | 70 | 60 | 0 | 0 | 0 | Browser
The Monty Hall Problem | Math | 0 | 80 | 20 | 15 | 2 | 0 | 1 | Browser
Simulating the world in Emojis | Misc | 2 | 50 | 50 | 65 | 6 | 0 | 3 | Browser
Philosophy Experiments: Talking With God | Philosophy | 1 | 100 | 0 | 10 | 2 | 0 | 2 | Browser
A Neural Network Playground | Computing | 1 | 15 | 85 | 25 | 2 | 0 | 2 | Browser
Spot the Drowning Child | Misc | 1 | 70 | 30 | 20 | 3 | 0 | 2 | Browser
Average | | 2.05 | 44.09 | 55.91 | 58.18 | 3.27 | 0.68 | 2.05 |
Median | | 2 | 32.5 | 67.5 | 70 | 2.5 | 0 | 2 |

Legend for Time required to play: 1-2 minutes = 0; 2-5 minutes = 1; 5-10 minutes = 2; 10-30 minutes = 3; 30+ minutes = 4