Simulation, Representation, and Cartography: Compiling a Virtual Atlas

Johannes Lenhard
Bielefeld University

Simulation modeling resembles map-making in an important way: Both aim at constructing devices for orientation and for intervention. The reference to reality or the territory, however, is far from straightforward. Rather, the result of simulation modeling can be conceived of as a virtual atlas – an object different from a traditional atlas. It consists of a huge compilation of local itineraries that do not match up to a general overview. Only in a computational setting can such an atlas be used as an orientation device.

1. Introduction

It is a commonplace that “the map is not the territory.” The phrase goes back to Alfred Korzybski (1931), while the insight—formulated by insisting on an apparent triviality—is surely older and has received poetical expression from Lewis Carroll and Jorge Luis Borges, among others. The difference between map and territory is a difference of categories and lies at the base of the representation relation. Hence maps play a prominent role in the discourse about representation, which takes place in a number of disciplines and from different perspectives. Three of them are of special interest for the present paper. One influential view has been put forward by the historian of art Ernst Gombrich in his classical lecture on “Mirror and Map” (1975), where he states that the mirror opens up a sophisticated play of representation, whereas the map is less interesting because it (just) presents a well-defined scientific artifact, a practical tool, constructed in precise and systematic ways.1

I would like to thank Anne Marcovich, Terry Shinn, the audiences of the conferences “NanoSpace” and “Models and Simulations 5,” and also two anonymous reviewers for their helpful comments.

Perspectives on Science 2015, vol. 23, no. 4
©2015 by The Massachusetts Institute of Technology
doi:10.1162/POSC_a_00180


The second viewpoint has been articulated in the history of cartography. It dismisses the above description of maps as an overly rationalistic and historically inappropriate model. To think that maps differ from art or painting because they create a straightforward similitude to reality would be to fall prey to an “illusion of map-making,” states the historian of cartography Brian Harley (2001, p. 154). The philosophy of science contributes a third perspective: it has discovered maps as an interesting topic because they are related both to objectivity and to conventions, and hence can serve as a resource for scrutinizing the ways scientific knowledge is organized. The present paper will bring together all three perspectives on science and map-making and utilize them to investigate and probe a claim about simulation modeling and its relationship to mapping. The main claim holds that simulation modeling resembles map-making in an important way and can be compared to compiling a virtual atlas. However, this virtual atlas is different from a regular one. While the latter consists of a number of local maps that together cover a certain territory, its virtual counterpart resembles a compilation of particular itineraries. The argumentation will proceed in the following way.

Section two will briefly introduce the discussion of map-making in the philosophy of science. The standard account treats maps as a paradigm for representation that is based on an isomorphic relationship. Such an account tends to see maps as generic objects of little particular philosophical interest. For more than a decade, however, the topic of maps and map-making has been receiving increasing attention, connected to a practice turn and to a turn from theories to models. The analysis of map-making has enriched the conceptual repertoire one can use to think and debate about representation. The fact that there are several maps of the same territory that are useful for different purposes has inspired the debate about the (non)uniqueness of knowledge representations. In particular, it will be discussed how the pluralist stance of Helen Longino (2002) and the realist position of Philip Kitcher (2001) combine. A second and especially important aspect concerns the global versus local character of maps and how maps of different scales are systematically related. This relationship will be helpful for analyzing simulation modeling, namely by pointing out the difference between (general) maps and a compilation of particular itineraries.

Section three will introduce nanoscale research as a field where map-making and simulation modeling both play important roles. Two types of images will be distinguished. The first type is produced by scanning

1. Various scholars have embraced similar views, cf. Arnheim 1986; Eco 1976; Goodman 1968.


tunneling microscopy; it is a popular one and displays molecular landscapes that suggest metaphors for nano-space of great rhetorical force. The second type comprises images produced by simulation models that from the outset do not appear map-like. A case from tribology, i.e. the study of friction, where molecular dynamics simulations are employed, will be introduced and discussed for illustration. It will be argued that the first type of image appears map-like only by a rather superficial analogy, while the second type in fact exhibits an interesting functional analogy between simulation modeling and map-making.

The philosophical account of mapping profits considerably when complemented by considerations and material from the history of cartography. This will be undertaken in the fourth section. Historians of cartography have articulated fundamental criticisms of the “ideology of map-making” that takes maps for objective and neutral representations. Rather, they conceive of map-making as an activity that is carried out in a historical and political context. To them, maps are constructed as orientation devices and to provide options for interventions in very particular contexts. The “Great Survey” of India that the British Empire conducted (Edney 1997) will serve as a source of illustration. In effect, the material will vindicate the connection between map-making and modeling.

Section five will pull the threads together. The philosophical and historical discussion of mapping will be utilized to analyze simulation modeling. The notions of prediction, negotiation, and locality will play a central role. It will be argued that simulation modeling can indeed be compared to map-making: both activities create devices for orientation and for intervention. The result of simulation modeling can be conceived of as a virtual atlas—an object that differs from a regular atlas in important ways. It consists of a huge compilation of particular itineraries that do not match up with a general overview; rather, they lack the relation to something like an ordnance survey map that would integrate the local maps. Only in a computational setting, it is claimed, can such an atlas be used as an orientation device.

2. Representation and the Map-Making Paradigm

Maps serve as an interesting issue for philosophy of science because they are examples of representations that are systematic and successful (in a sense to be specified) and can therefore serve as a prima facie point of reference for investigating scientific knowledge in general. However, a common view in epistemology does demand strict measures of objectivity for knowledge. If maps are to be a point of reference, doesn’t that demand the highest standards of objectivity for maps and map-making, too? Some practice-oriented philosophers of science have reversed the perspective and asked what the investigation of map-making can tell us about scientific epistemology.


Scholars like Ron Giere, Philip Kitcher, and Helen Longino – leaving aside relevant differences between them – all invoke the example of map-making when they critically debate central features of knowledge and representation. The common point of departure of these authors is to explain how scientific knowledge deserves the character of a success term without relying on a rigid truth standard that hardly fits any aspect of scientific practice. At the same time, knowledge is to keep a standard of truth. This attempt has led them to consider maps as a paradigm case: map-making can produce successful devices in a systematic way. Three aspects will be addressed in the following: instrumental character, pluralism, and systematic connectivity.

Stephen Toulmin is an important voice in appreciating the significance of maps. He devotes chapter four of his introduction to the philosophy of science (1960) to “theories and maps.” Toulmin points out that neither theories nor maps have a deductive connection to facts. Maps serve as a paradigm case that should inform our thinking about laws: they are not strictly derived from the terrain, though one can read off propositions about geographical facts. Thus, for Toulmin, maps and natural laws have their instrumental character in common: both are orientation devices—in a terrain (maps) or in a field of phenomena (laws). The term “orientation” here is not restricted to the spatial sense. Laws can bring some order to an otherwise impenetrable maze of phenomena. In particular, they can help us to intervene in a controlled way, analogous to maps that indicate where to go for certain destinations (goals). This instrumental perspective will be of central relevance in the present paper.

Helen Longino also subscribes to the paradigm character of maps for scientific knowledge in general (2002). She wants to stress the inherent pluralism of scientific knowledge and argues that “true” and “false” are not adequate for evaluating such knowledge, because that would presuppose a propositional form and also restrict the question of what constitutes (justified) knowledge to a binary relation—too narrow a view of knowledge. How maps conform seems more adequate as a role model, because there is a plurality of ways to conform:

Idealizations like laws are not, strictly speaking, true because there is not a particular situation that they accurately and precisely represent, but they conform to the range of phenomena over which they are idealizations in the way a map conforms to its terrain. Like maps, they are useful just because they do not represent any particular situation (…) Maps fit or conform to their objects to a certain degree and in certain respects. I am proposing to treat conformation as a general term for a family of epistemological success concepts … (Longino 2002, p. 117)


This family of success concepts comprises members like isomorphism, homomorphism, truth, approximation, fit, and similarity—truth being only one member of the family. Longino proposes “conformation” as an encompassing notion, i.e. as the family name, to express such success, and she sees maps as a good example of why conformation is a more appropriate relation than the mere true–false distinction when one wants to analyze scientific knowledge.2 This point is the source of agreement between the actors in the philosophical debate. The function of maps as orientation devices, and how success is determined there, is taken as a role model when analyzing the success term of scientific knowledge in general. This agreement might come as a surprise, because the participants in that debate hold diverging opinions regarding the positions they want to argue for.

Maps do not only serve as a paradigm for Longino’s pluralism; they also attract a realist of Kitcher’s sort. First of all, Kitcher does not advocate a strong anti-pluralist position. In particular, he expresses skepticism regarding the existence of one overarching aim for science.3 Thus, he is now open to a pluralism of sorts, a pluralism he takes from cartography when he argues that it is a type of success like that of maps that buttresses a modest realism (Kitcher 2001, chap. 5; see also 2003). Map-making, Kitcher maintains, does not invoke one single best map. On the contrary, no ideal, objective, context-independent collection of maps exists; rather, there is a variety of maps, each of which is indexed by the purpose for which it is designed to work. Some maps can be more accurate than others, but there is no goal-independent best one.

This brings us to the third topic, systematic connectivity. A variety of maps exist that are constructed to serve different purposes: for tourists going by car or by bike, for geologists, for planners of edifices, etc. These maps do not stand in a clear relationship to each other, because each contains information that is useful for a certain purpose but would hinder other purposes—and different purposes do not come in a unique order. However, maps seem to differ from one another in that they include or leave out certain information. Doesn’t that suggest there is a version that includes maximal information—though being neither the territory itself nor serving all goals perfectly? Toulmin pointed out that there is one, namely the ordnance survey map. It assumes a fundamental position because it contains all the information from which other maps choose a subset according to their purpose. It is, however, itself dependent on conventions—those conventions that the ordnance survey office follows, for instance, how many measurements are taken to determine contour lines, or what the highest resolution is. Thus, there is

2. Longino refers to Giere’s discussion of maps in his Science without Laws (1999).
3. This skepticism deliberately and significantly modifies his own prior position.


an underlying connectivity insofar as scaling up will approach the ordnance survey map, and scaling down from there will follow a tree-like structure towards different purpose-dependent maps. The latter may also differ in conventions not related to scale, like the way certain information is highlighted. Thus, maps appear to be jacks-of-all-trades, introducing some variability into the representation relation while maintaining the nature of knowledge as a success term, providing room for Longino’s pluralism—various kinds of conformation—and at the same time backing Kitcher’s modest realism.

The glue that holds these diverging perspectives together is provided by the success condition, which is open to an instrumental as well as a realist interpretation. The crucial question is: when do maps conform successfully? They have to exhibit a kind of conditioned accuracy: once a purpose is specified, accuracy is indicated by success for this given purpose. Here, Longino coincides with Kitcher in that the plurality of accurate representations—each one accurate relative to a given purpose—is compatible with a realist stance on why each representation is in fact accurate: “Once the conventions are fixed, specifying, say, colors to represent types of vegetation rather than economic activity, then the features of the terrain itself […] determine the adequacy or inadequacy of the map” (Longino 2002, p. 116). This reference to “the terrain itself” is exactly what Kitcher takes as the basis for his realist account.

But this condition leaves intact a completely instrumentalist avenue, namely to regard adequacy as determined by success. Does every recipe that works for a special purpose count as an argument for realism? Surely not. And Kitcher is aware of that shortcoming. Consequently, he strengthens the conditions: not just any success backs realism; rather, success is required that is difficult to achieve. Namely, predictive success is the key. It is taken as the potential of a map to tell you in advance where you will end up. But this is still too weak for Kitcher, because he wants to exclude lucky guesses. Hence he additionally requires that predictive success be achieved in a variety of applications. In brief, to be successful in the strong sense, maps have to provide orientation in more than one situation and for more than one route. If a map works under various circumstances, the reasoning goes, this indicates that it conforms to the real territory. For Kitcher, systematic and difficult predictive success is what matters as a criterion for modest realism.

In short, it is the genericity of maps that matters. Toulmin put forward a very similar viewpoint. In his discussion of maps, he insisted that they have to be route-neutral (1960, p. 125). Thus, for Toulmin as for Kitcher, a good map is an orientation device with some genericity. There exist orientation devices that lack genericity; think of tour guides that describe particular


routes. And, in fact, the forerunners of maps were itineraries, collections of route-specific representations. Itineraries describe one particular route, i.e. they give a recipe for how to reach point B when starting from A. For a different purpose, or even for a different route with the same start and end points, an itinerary may be useless. Route-neutrality separates maps from itineraries. However, it will be argued that the concept of itineraries – rather than that of maps – is useful for characterizing simulation models.

3. Mapping Nano-Space

Nanoscale research (NSR)4 presents a case of particular interest when one intends to investigate the relationship between mapping and simulation modeling. It does so for two reasons. One reason is that computer simulations form an important and, for some areas of NSR, even essential part of its instrumentation. To be more precise, the combination of new metrological instrumentation – scanning probe microscopes – with simulation forms the epistemological and methodological core of NSR.5 The second reason is the remarkable extent to which images and metaphors of space contribute to the appeal of NSR. The rhetoric of space metaphors and the visualizations obtained from simulation models mutually reinforce each other. Two types of images will be discussed. The first is of great rhetorical force and looks map-like from the outset. It will be argued, however, that the analogy to maps is merely superficial. The second type does not look like a map, but in fact functions like one and hence does support the analogy between map-making and simulation modeling.

There is a type of image that endows NSR with a kind of space optics, transposing the very small into the very wide. The cover displayed in Figure 1 is an emblematic instance for the nano field and for the attitude of the time. It displays a molecular landscape that is made up from visualized data obtained by scanning tunneling microscopy (STM). In the background one sees the starry sky of the endless universe—presumably inserted for rhetorical reasons.6 The new microscopes, so it is suggested, transform the previously arcane realm of atoms and molecules into something visible and geographically explorable. The chosen background contributes to the claimed potential of exploring this apparently unlimited vastness. The center—the hilly landscape—is a representation of data attained by microscopy, while the whole concoction, including the 3-D rendering, appears to be a collage created for publicity.

4. I find it useful to speak of NSR in order to avoid two debates at the same time: one about the novelty of nanoscience, the other about its status as science or technology.
5. Marcovich and Shinn (2014) aptly speak of a “combinatorial” in their study of NSR.
6. The black and white version printed here admittedly loses rhetorical force.


Figure 1. Cover of the report of the Science and Technology Council on “Nanoscience,” 1993. Courtesy of L.J. Whitman.

However, this is not a mere fake image for public relations—there is more to it. The capacity of NSR to intrigue the imagination of researchers is fueled by space-related metaphors. Richard Feynman had already given his now famous lecture the programmatic title “There is Plenty of Room at the Bottom” (1960), a promise that seems to be perfectly visualized by Figure 1. Metaphors of this sort suggest that there is something like a new continent, a not yet discovered part of the world, unknown and unseen landscapes that wait to be conquered by NSR. Scanning probe microscopy is in fact not an optical device. The scanning procedure first collects data about the sample, e.g. an array of forces, which are later rendered visible in a separate step. In effect, the STM-visualized landscape is constructed by navigating a tip along the surface of the sample, constantly recording measurements. In a way, the tip of a scanning microscope assumes the role of a systematically proceeding surveyor charting new territory. It has often and aptly been acknowledged in the literature that the imaging technology of STM and related instruments does not provide a direct representation, because the modeling part is crucial in the imaging procedure.7 Otávio Bueno (2006) has brought up the issue of mapping in this context. He compares representation by microscopes with mapping or, more precisely, introduces the weaker notion of “partial mapping” for it. However, Bueno

7. A good sample of papers that analyze these aspects in the broader context of NSR is offered by Baird et al. 2004; especially about STM instrumentation, see Mody 2011.


takes mapping in the generic mathematical sense in which one space is mapped onto another, not in the geographical sense of map-making on which the present paper focuses.

However, let us put aside this superficial (though forceful) analogy between map-making and simulation modeling. There is another, philosophically more significant analogy between the two. It concerns images that have more relevance in scientific practice but exhibit much less resemblance to a landscape. The present paper pursues the claim that computer simulation and related visualizations are like maps that chart unknown territory. However, the relevant class of images is not instantiated by the landscape-like Figure 1, but rather by the likes of Figure 2. These images call for explanation, as their character follows from the process of their construction.

Uzi Landman from Atlanta’s Georgia Tech has studied lubrication and the properties of lubricants that are confined to very small, that is, nanoscaled

Figure 2. Ordered high-friction state (upper image) and oscillation-induced disordered low-friction state (lower image) (from Landman 2002, courtesy of U. Landman).


gaps. The outcome of a numerical experiment in which two surfaces (colored yellow) slide against each other is shown in Figure 2. Lubricant molecules sit in the small, nanosized gap between the surfaces as well as in the bulk outside. The upper part of the picture shows a snapshot: the molecules of the lubricant form ordered layers that significantly influence the movement of the sliding surfaces as friction increases; that is, the lubricant molecules behave in an unexpected way that Landman calls “soft-solid.” Quoting Landman: “We are accumulating more and more evidence that such confined fluids behave in ways that are very different from bulk ones, and there is no way to extrapolate the behavior from the large scale to the very small” (2002).

Landman and his colleagues also tried to “overcome the problem” of high friction in their simulation study. Continuing their molecular dynamics simulations, they manipulated the movement of the sliding surfaces by inducing small oscillations. The simulation shows how oscillating the gap between the two sliding surfaces reduces the order of the thin-film lubricant molecules, thus lowering friction. In the lower part of the image, molecules that had been confined between the surfaces—marked red after the first snapshot—have moved out into the bulk lubricant and are no longer confined, while molecules from the bulk areas have moved into the gap.8 This series of visualizations may resemble Figure 1 because of its artificial, computer-generated appearance, but it lacks any features of a landscape. Our analysis will bring out, however, that simulation modeling can be compared to map-making in important respects.

How does molecular dynamics modeling work? Much as with exploring unknown territory, modelers have to explore the model behavior, because there is no general knowledge from which one could derive this behavior. More specifically, molecular dynamics works with a force field that effectively describes the forces on single molecules so that the resulting overall dynamics is close to the actual one. The central justification for this modeling strategy is a profoundly theoretical one: in a mathematical sense, there exist force fields that fulfill the task. From this general point, however, it does not follow what actual force fields look like. They crucially depend on the particular conditions, like the materials involved, and the point is that theoretical knowledge is typically not sufficient to derive how the force field has to look in particular cases. Such fields are constructed in iterated feedback loops in which tentative fields are calibrated against known cases. Hence it requires great skill from simulation modelers like Landman to construct accurate fields. After the force field has been specified, the behavior of the model can be explored, as in the example

8. The electronic version of this paper shows colored images whose message is much harder to recognize in black-and-white print.


with the two sliding surfaces. In general, there is no other way to “extrapolate” this behavior.

Landman and his group have acquired a reputation for having at their command a powerful simulation toolbox of molecular dynamics and multi-scale methods. Still, this does not imply that modeling proceeds quasi-automatically. During the modeling phase, a number of (cleverly chosen) relevant parameters have to be adapted and calibrated to the materials and dimensions under investigation. For example, in multi-scale simulations, a crucial part is how exactly the models of different scales, molecular dynamics and continuum mechanics, are sewn together. That should happen as seamlessly as possible, but there is no universally applicable recipe. Rather, there are several competing approaches whose qualities depend on which particular materials and constellations are to be modeled. Hence the question arises of how the simulation models are specified and their parameters adapted. And if a simulation model is successful in the sense of reproducing known results in test cases and also making predictions in yet unknown cases, such success is normally restricted to the particular situation and does not guarantee similar success when materials or boundary conditions differ.

Landman himself is very aware of this point.9 He underlines how essential the interplay between simulation modeling and experiment is when one wants to study the phenomena of friction at the nanoscale. The behavior known from other scales—the continuum or “bulk” case—often is not a good guide to the nanoscale. Landman is convinced that there is no way to extrapolate the behavior at the nanoscale from knowledge about bulk properties (cf. Landman 2002). Consequently, simulation researchers have to deal with peculiar and strange—unexpected—behavior, which makes it especially hard to discriminate between strange behavior of the kind one is looking for and strange behavior resulting from modeling flaws. This discrimination requires constant checking and negotiation between simulation experiments and laboratory experiments.
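The structure of this modeling strategy can be made concrete with a minimal sketch, assuming a generic Lennard-Jones pair potential with unit parameters and unit masses—illustrative stand-ins, not the specialized, calibrated force fields that Landman’s group employs. The point is structural: once a force field is specified, the overall dynamics is obtained only by stepping the equations of motion forward and observing what happens.

```python
import numpy as np

EPSILON = 1.0  # depth of the potential well (arbitrary units)
SIGMA = 1.0    # distance at which the pair potential crosses zero
DT = 0.001     # integration time step

def lj_forces(pos):
    """Forces from a Lennard-Jones pair potential -- the 'force field'."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            s6 = (SIGMA**2 / r2) ** 3
            # Lennard-Jones force on particle i, directed along rij
            f = 24 * EPSILON * (2 * s6**2 - s6) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces

def run(pos, vel, steps):
    """Velocity-Verlet integration of Newton's equations (unit masses)."""
    forces = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * DT * forces
        pos += DT * vel
        forces = lj_forces(pos)
        vel += 0.5 * DT * forces
    return pos, vel

# Exploring model behavior: start from a small 2-D lattice and simply
# watch where the dynamics goes -- there is no closed-form answer.
pos = np.array([[i * 1.5, j * 1.5] for i in range(3) for j in range(3)],
               dtype=float)
vel = np.zeros_like(pos)
pos, vel = run(pos, vel, steps=2000)
print(np.round(pos, 3))
```

Everything of scientific interest—layering, friction, the “soft-solid” behavior—would show up only in such runs, never in the code itself.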

Let us now turn to the history of cartography, where the issue of negotiation will come up again from a different perspective.

4. Map-making in Context

The adequacy of maps depends not only on the territory to be charted; it is a more complex matter. Historical cartography has intensified a debate about maps and map-making that takes into account the political context of maps and that is inspired by work in colonial history, like that

9. I am grateful to Terry Shinn and Anne Marcovich for sharing their knowledge with me in personal communication, including material with Landman that is part of their study (2014).


of Edward Said. Maps then appear as a device not only for orientation, but also for acquiring power for interventions in a very tangible sense. A striking example has been contributed by Matthew Edney in his book Mapping an Empire: The Geographical Construction of British India, 1765–1843 (Edney 1997). As the subtitle indicates, at the time “India” was not a delineated object. Rather, map-making was the process that turned it into an object that could be controlled. The point is that map-making itself then becomes a political activity. Edney aptly describes the goal: “In the case of the British conquest of South Asia in the hundred years after 1750, military and civilian officials of the East India Company undertook a massive intellectual campaign to transform a land of incomprehensible spectacle into an empire of knowledge” (1997, p. 2).10

It was James Rennell, the surveyor general of Bengal and a main actor in Edney’s book, who directed a massive cartographical survey (1782–1788) to determine, or rather establish, India geographically. It did not matter that he conflated various conceptions of the territory—like the Mughal Empire, or Hindustan—that were in use at the time. What did matter was that map-making created at once a device for orientation and for political power: “The triumph of the British empire, from the imperialist perspective, was its replacement of the multitude of political and cultural components of India with a single all-India state coincident with a cartographically defined geographical whole” (Edney 1997, p. 15).

However, planning a general survey and carrying it out in practice are two different things, and Edney reports a certain complication. In theory, the general survey would create a map according to global and general rules. In fact, Edney argues, the map constructed by Rennell’s six-year-long survey resembled a patchwork of local routes more than a consistent construction:

[T]he British could never implement the technological ideal offered by triangulation and were forced to rely on the older epistemological ideal of the eighteenth century. That is, the British could only make their general maps of India by combining multiple surveys within a framework of latitude and longitude. (1997, p. 17)

How did the practice of map-making deviate from its methodological ideal? Edney analyzes how the agglomeration of linguistic problems and

10. It may be worth pointing out that this “campaign” stands in a telling analogy to the campaign for nanoscale research, whose promise is to make the ‘land’ of atomic spectacles accessible to technology.


problems of access required a strong reliance on “indigenous assistants, guides, and local informants,” so that:

the surveys were exercises in negotiation, mediation, and contestation between the surveyors and their native contacts, so that the knowledge which they generated was a representation more of the power relations between the conquerors and the conquered than of some topographical reality. (Edney 1997, p. 25)

Edney fleshes out this claim in minute historical detail. For our present context it is sufficient to note that this character of the map did not stand in the way of the practical and political goals of the survey—it did in fact provide orientation and a means for intervention.11 It has been made clear that the success of a map as an orientation device and as a means for intervention does not critically depend on its derivation via general (mathematical) rules.

It is not claimed that the analogy between map-making and modeling implies anything about a political context of modeling. Rather, the analogy entails that both are complex activities and not merely applications of given rules. The instrumental qualities of a map—how successfully it can be used as an orientation device and as a means for intervention—can rely on a collection of local parts that are sewn together pragmatically. This was exemplified in the Great Survey, where systematic triangulation was (partly) replaced by local negotiations. And we will see that this holds—to a certain extent—also for simulation modeling.

5. Compiling a Virtual Atlas

Different perspectives on maps from cartography and philosophy have been gathered. They will now serve as a resource to argue for the main claim, namely that simulation modeling can be described as compiling a virtual atlas. Three elements of the previous discussions will highlight analogies and dis-analogies between map-making and simulation modeling: predictive success as a criterion of accuracy, the role of negotiation in the process of construction, and the issue of locality.

Predictive Success

The conception of a map includes the promise that maps serve as orientation devices, notwithstanding that success is not guaranteed – and, of course, I myself have more than once been annoyed by maps not working

11. Graham D. Burnett would be another source arguing for a strong link between map-making and intervention, as the title of his book indicates: Masters of All They Surveyed (2000).


properly as a guide. Working successfully as an orientation device is dependent on purposes and hence is not a factual relation between territory and image. However, the question of whether a given map works as an orientation device boils down to its success in prediction, i.e. in predicting how and when one arrives at a certain target point. On the one hand, this kind of success criterion is rigorous in the sense that it can easily be missed; on the other hand, the criterion can be met by maps that are anything but isomorphic representations.12

To reinforce this point, let us again turn our attention to Landman’s molecular dynamics model that produced the visualizations of Figure 2. Molecular modeling works with force fields. Whether such fields are designed adequately cannot be decided on the basis of general scientific knowledge. Rather, the criterion is how well these fields perform, i.e. how well the models predict (or retro-dict) behavior in known cases. The particular simulation of Landman’s discussed above makes a prediction; moreover, it makes a prediction in the strong sense of describing an unexpected and not yet experimentally detected phenomenon. Hence, by exploring and charting yet unknown behavior, the model works as an orientation device and offers options for intervention: for instance, how to overcome friction that is caused by the narrow confinement of lubricants. Of course, the prediction may turn out to be wrong, but that danger only underlines that the simulation can work as an orientation device—or can fail to do so. Thus we have an analogy between maps and simulation models: both have a predictive capacity and therefore offer options for interventions.
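The retro-dictive criterion can be put schematically as well. In the following sketch everything specific is hypothetical: the reference values, the single force-field parameter, and the cheap simulate stand-in (a real calibration would run full molecular dynamics for every candidate). What the sketch shows is the structure of the iterated feedback loop mentioned above: tentative fields are scored against known cases, and the best-conforming one is retained.

```python
import numpy as np

# Known cases: hypothetical (condition -> measured observable) pairs
# against which tentative force fields are tested.
REFERENCE = {0.8: 1.9, 1.0: 2.4, 1.2: 3.1}

def simulate(epsilon, condition):
    """Cheap stand-in for an expensive simulation run with a tentative
    force-field parameter epsilon under the given condition."""
    return 2.0 * epsilon * condition + 0.4  # hypothetical model response

def misfit(epsilon):
    """How badly a tentative field retro-dicts the known cases."""
    errors = [simulate(epsilon, c) - y for c, y in REFERENCE.items()]
    return float(np.mean(np.square(errors)))

# Iterated feedback loop: propose tentative fields, keep the one that
# conforms best to the known cases.
candidates = np.linspace(0.5, 1.5, 101)
best = min(candidates, key=misfit)
print(f"calibrated epsilon = {best:.2f}  (misfit {misfit(best):.4f})")
```

Success here is exactly the conditioned accuracy discussed above: the calibrated field counts as adequate only relative to the cases and purposes it was tested against.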

Negotiation

The second topic is the role of negotiation in map-making. There was an interesting observation by Edney regarding the fact that the ideal construction principles for maps had not been implemented in the General Survey—negotiations with “indigenous assistants, guides, and local informants” took place to construct local maps and to stitch them together. The patchwork nature did not hinder its success as a device for exerting control, though. There is an analogy on the side of simulation modeling, which also relies on negotiations in an important sense. A short episode from nanoscale research will highlight the interdependency of experiment, simulation, and theory. The laboratory instruments

12. Cultural theorist Jean Baudrillard urges that success terms become irrelevant with simulation, thereby also rendering irrelevant the comparison with maps and other forms of representation (Baudrillard 1998). Without completely dismissing Baudrillard’s critique of recent culture, I would like to maintain the significance of the map as a point of comparison for computer simulation models—including predictive success as a main criterion.


which arguably contributed much to the emergence of NSR are the scanning tunneling and atomic force microscopes. In such an apparatus, a very sharp tip is brought into close proximity to a sample, and while the tip is steered over the sample following a certain grid, the forces (or tunneling currents) are measured. The array of measured values is then visualized in an image. The transformation from measured forces to the display of images is not straightforward. These microscopes deliver a vast amount of noisy data that need to be processed and cleansed before they build up what counts as input data for the visualization procedure. Hence the models and procedures employed in pre-processing function as accepted background knowledge.

Though de facto accepted, such knowledge may turn out to be problematic. One seminal component of such knowledge, regarding the strength of molecular adhesion bonds, is the so-called Evans–Ritchie theory. In the early 2000s, a gap in this theory was detected. The episode reported here starts with a mathematically motivated work that aimed at optimizing the mentioned filtering, or pre-processing, procedure for microscopic data, itself using a simulation model based on Evans–Ritchie (Evstigneev and Reimann 2003). The next step was a series of experimental (non-simulation) measurements to confirm the optimal, or at least improved, performance of the new pre-processing method. The attempt failed, however: the series showed that the expected optimization did not occur (Raible et al. 2004). The reason for this failure was unclear, and finding out what went wrong required an extended series of back-and-forth between experiment, theory, and simulation, involving different groups from mathematicians to microscopists. Eventually, a gap was localized in the Evans–Ritchie theory, i.e. in the part of the modeling process that had been presupposed when discerning data from noise. Thus, the theory was modified (Raible et al. 2006). The modified theory changes what, in effect, counts as data (rather than noise) from the microscopes. This brief episode illustrates how simulation modeling and data measurement are intertwined, and it confirms the analogy between simulation modeling and map-making: both involve negotiations that structure a piecemeal construction process. Neither arises from a top-down construction according to general principles governing atomic forces or the territory, respectively.
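The two-step character of this imaging procedure—measure first, then model the data into an image—can be sketched as follows. The synthetic “surface,” the noise level, and the choice of a median filter are illustrative assumptions only; the pre-processing at issue in the episode above rests on far richer background theory. The sketch merely shows where that background knowledge sits: whatever the filter removes is, by that very decision, treated as noise rather than data.

```python
import numpy as np
from scipy.ndimage import median_filter
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Stand-in for the raw scan: a smooth height field plus measurement noise.
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 128),
                   np.linspace(0, 4 * np.pi, 128))
true_surface = np.sin(x) * np.cos(y)
measured = true_surface + rng.normal(scale=0.4, size=true_surface.shape)

# Pre-processing step: this choice decides what counts as data vs. noise.
cleaned = median_filter(measured, size=5)

# Rendering step: only now does the array become a landscape-like image.
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(measured)
axes[0].set_title("raw scan values")
axes[1].imshow(cleaned)
axes[1].set_title("pre-processed")
plt.show()
```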

Locality

The issue of locality has already come up in the discussion of negotiation. The two are systematically connected: if a top-down design determined the map (by triangulation) or the model (by derivation from general laws), negotiations would be dispensable. However, the debate about models in the philosophy of science has provided ample evidence that models are


normally not derived from laws. In our case, the molecular dynamics simulation of friction is of a local nature in the following sense. Molecular dynamics is an approach situated somewhere between quantum theory and continuum mechanics. Consequently, it will not work when quantum effects are important, nor will it work—for practical computational reasons—when too many molecules are involved. However, it is a popular modeling approach because it allows for the investigation of phenomena in between the realms of quantum and continuum mechanics methods. The whole setting has been calibrated for a certain amount and particular sort of material as well as for a particular underlying computing technology (the facilities at Georgia Tech). Is the model still adequate if these conditions are changed? In other words: how robust is the model? Which cases are similar and which are critically different? An answer to these questions must be considered out of reach for theoretical reasoning—it calls for local exploration to sound out the limits and to provide partial answers.

It is a remarkable fact that simulation makes prediction possible although extrapolation from generally accepted knowledge is not possible. As Landman stated in the quote given earlier, his goal is to predict behavior that is not predictable from knowledge about lubricants in general. To be sure, general laws of physics go into the simulation, as do generally accepted techniques of simulation. However, as has been remarked, behavior at the nanoscale can be neither extrapolated from continuum mechanics nor derived from quantum mechanics. To study the question of how thin-film lubricants behave, one must explore concrete cases by simulation. To the extent that phenomena are of an emergent nature, they will not allow a computationally compressed description of the kind a general law would provide. Hence the simulation provides knowledge of a fundamentally local nature.

Since one cannot extrapolate behavior from the general laws of mechanics, one needs simulation to find out what effects concrete initial and boundary conditions lead to. And to find out in what range of conditions certain phenomena occur, one has to iterate the simulations with varying assumptions. It is relatively easy to modify the model, which amounts to the capability to compute special itineraries when needed. Landman’s group, for instance, is known to have the skills and techniques at its disposal to venture into such non-extrapolatable situations, adapting to a change in any of the mentioned restrictions. To stay with the metaphor: each modified and then recalibrated model adds a local map or, more precisely, an itinerary. In sum, the investigations of Landman and his colleagues result in the compilation of a virtual atlas.

These considerations generalize to a much bigger class of simulation models, namely to models that are complex in the following sense: their behavior partly depends on the concrete specifications and value assignments


and does not follow from the general knowledge that goes into them. The behavior of such models can be explored and documented by compiling a virtual atlas.
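Compiling such an atlas can be sketched as a parameter sweep. Everything specific is again hypothetical—the run_model stand-in, the parameter names, and the outcome formula—but the structure is the point: each run is one itinerary, stored under the concrete conditions that produced it, with no master map integrating the entries.

```python
import itertools

def run_model(gap_width, oscillation):
    """Stand-in for one calibrated simulation run under fixed conditions;
    the formula is a hypothetical placeholder, not a friction law."""
    friction = 1.0 / gap_width - 0.5 * oscillation
    return {"friction": round(friction, 3), "ordered": friction > 0.5}

GAP_WIDTHS = [1.0, 1.5, 2.0]   # hypothetical conditions to sweep over
OSCILLATIONS = [0.0, 0.2, 0.4]

# The virtual atlas: a compilation of itineraries indexed by conditions.
# Coverage comes from iteration, not from an integrating overview map.
atlas = {(g, a): run_model(g, a)
         for g, a in itertools.product(GAP_WIDTHS, OSCILLATIONS)}

# Orientation: look up the itinerary for the conditions at hand.
print(atlas[(1.0, 0.4)])  # -> {'friction': 0.8, 'ordered': True}
```

Conditions outside the grid call for new runs rather than for interpolation from a general law—this is the sense in which the atlas remains a collection of itineraries.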

6. Conclusion

In this way, each simulation model, including quantitative assignments for all parameters, works more like an itinerary, a special-route map, than like a general route-neutral map. The latter is replaced by the massive iteration of slightly varying itineraries, which results in a compilation of simulation runs. Taken together, these runs can cover the territory: in the language of cartography, the result is a virtual atlas compiled of itineraries. Combining different perspectives on map-making, we have arrived at the main claim that simulation modeling is analogous to compiling a virtual atlas. This notion of an atlas, however, does not coincide with the one most commonly known from street maps, like an atlas of a country’s highway system. Such a compilation would normally consist of a number of road maps, each displaying a certain part of the country and highlighting the highways. Taken together, these maps cover the territory in a coherent way. The compiled virtual atlas has a different nature. There, no plane survey sheet (ordnance survey map) exists in the background; thus the virtual atlas is restricted to working as a collection of itineraries that do not even combine in a homogeneous way.

The activity of modeling in our case resembles the construction of itineraries. They are a special-purpose, not a general-purpose, device, but in a sense this is compensated for by the computational capacity of the computer. One can mitigate missing generality by iteration. Many iterations, producing slightly varying versions of itineraries, might be able to bring out which aspects of the itineraries are general (and to what degree). Thus, the number of local itineraries increases, and their sheer number replaces connectivity, or rather serves as a surrogate for it. The scaling behavior, i.e. the systematic connectivity that Toulmin had pointed out as a pivotal property of maps, need not be maintained in such a virtual atlas. Since the laws do not fully determine model behavior, surprises may be encountered for particular combinations of parameters. This is where computational modeling changes the situation: a big or even huge number of maps can be contained in a virtual atlas while the atlas remains very practical.

Thus we have arrived at an analogy and a dis-analogy between map-making and simulation modeling: the character of a device for orientation and intervention is maintained, while iterativity and adaptation counteract or prevent route-neutrality. On the one side, the capacity to make predictions is an important epistemic virtue of science.13 On the

13. For a thesis about computer-based science as a culture of prediction, cf. Johnson and Lenhard (2011).


other side, the virtual atlas falls short of other epistemic virtues, mainly generality. Generalizing is a major way in which science creates systematic connections and makes new fields accessible. In my view, it is an open question whether areas like NSR can thrive permanently on the basis of a computational instrumentation that, in a certain sense, substitutes iteration for generality, or whether this is only a momentary and short-lived dynamic that will have to be overcome by some (unforeseen) conceptual step to keep science afloat in the long term.

References

Arnheim, Rudolf. 1986. New Essays on the Psychology of Art. Berkeley, CA: University of California Press.
Baird, Davis, Alfred Nordmann, and Joachim Schummer (eds.). 2004. Discovering the Nanoscale. Amsterdam: IOS Press.
Baudrillard, Jean. 1998. “Simulacra and Simulations.” Pp. 166–184 in Selected Writings. Edited by Mark Poster. Stanford: Stanford University Press.
Bueno, Otávio. 2006. “Representation at the Nanoscale.” Philosophy of Science 73: 617–628.
Burnett, Graham D. 2000. Masters of All They Surveyed: Exploration, Geography, and a British El Dorado. Chicago and London: The University of Chicago Press.
Eco, Umberto. 1976. A Theory of Semiotics. Bloomington: Indiana University Press.
Edney, Matthew H. 1997. Mapping an Empire: The Geographical Construction of British India, 1765–1843. Chicago: The University of Chicago Press.
Evstigneev, Mykhailo, and Peter Reimann. 2003. “Dynamic Force Spectroscopy: Optimized Data Analysis.” Physical Review E 68, 045103: 1–4.
Feynman, Richard P. 1960. “There is Plenty of Room at the Bottom.” Caltech Engineering and Science 23 (5): 22–36.
Giere, Ronald. 1999. Science without Laws. Chicago: The University of Chicago Press.
Gombrich, Ernst. 1975. “Mirror and Map: Theories of Pictorial Representation.” Philosophical Transactions of the Royal Society of London, Series B 270 (903): 119–149.
Goodman, Nelson. 1968. Languages of Art. New York: Bobbs-Merrill.
Harley, J. Brian. 2001. The New Nature of Maps: Essays in the History of Cartography. Baltimore: The Johns Hopkins University Press.
Johnson, Ann, and Johannes Lenhard. 2011. “Toward a New Culture of Prediction: Computational Modeling in the Era of Desktop Computing.” Pp. 189–199 in Science Transformed? Debating Claims of an Epochal Break. Edited by Alfred Nordmann, Hans Radder, and Gregor Schiemann. Pittsburgh, PA: University of Pittsburgh Press.


Kitcher, Philip. 2001. Science, Truth, and Democracy. Oxford and New York: Oxford University Press.
Kitcher, Philip. 2003. “The Third Way: Reflections on Helen Longino’s The Fate of Knowledge.” Philosophy of Science 69: 549–559.
Korzybski, Alfred. 1931. “A Non-Aristotelian System and its Necessity for Rigour in Mathematics and Physics.” Presented at an AAAS meeting, December 28, 1931. Reprinted in Science and Sanity, 1933, pp. 747–761.
Landman, Uzi. 2002. “Studies of Nanoscale Friction and Lubrication.” Georgia Tech Research News, October 22. http://gtresearchnews.gatech.edu/newsrelease/MRSMEDAL.htm (visited 24 April 2013).
Longino, Helen E. 2002. The Fate of Knowledge. Princeton and Oxford: Princeton University Press.
Marcovich, Anne, and Terry Shinn. 2014. Toward a New Dimension: Exploring the Nanoscale. Oxford: Oxford University Press.
Mody, Cyrus. 2011. Instrumental Community: Probe Microscopy and the Path to Nanotechnology. Cambridge, MA: The MIT Press.
Raible, Martin, M. Evstigneev, F. W. Bartels, R. Eckel, M. Nguyen-Duong, R. Merkel, R. Ros, D. Anselmetti, and P. Reimann. 2006. “Theoretical Analysis of Single-Molecule Force Spectroscopy Experiments: Heterogeneity of Chemical Bonds.” Biophysical Journal 90: 3851–3864.
Raible, Martin, M. Evstigneev, P. Reimann, F. W. Bartels, and R. Ros. 2004. “Theoretical Analysis of Dynamic Force Spectroscopy on Ligand-Receptor Complexes.” Journal of Biotechnology 112: 13–23.
Toulmin, Stephen. 1960. The Philosophy of Science: An Introduction. New York: Harper & Row.
