
Philosophia Scientiæ – Travaux d'histoire et de philosophie des sciences

23-1 | 2019: Y a-t-il encore de la place en bas ? Le paysage contemporain des nanosciences et des nanotechnologies

Edited by Jonathan Simon and Bernadette Bensaude-Vincent

Electronic edition – URL: http://journals.openedition.org/philosophiascientiae/1693; DOI: 10.4000/philosophiascientiae.1693; ISSN: 1775-4283

Publisher: Éditions Kimé

Print edition – Date of publication: 18 February 2019; ISSN: 1281-2463

Electronic reference: Jonathan Simon & Bernadette Bensaude-Vincent (eds.), Philosophia Scientiæ, 23-1 | 2019, « Y a-t-il encore de la place en bas ? » [Online], online since 01 January 2021, accessed 30 March 2021. URL: http://journals.openedition.org/philosophiascientiae/1693; DOI: https://doi.org/10.4000/philosophiascientiae.1693

All rights reserved.

Y a-t-il encore de la place en bas ? Le paysage contemporain des nanosciences et des nanotechnologies

Introduction. Nanotechnoscience: The End of the Beginning

Bernadette Bensaude-Vincent, Université Paris 1 Panthéon-Sorbonne

Jonathan Simon, Archives Henri-Poincaré – Philosophie et Recherches sur les Sciences et les Technologies (AHP-PReST), Université de Lorraine, Université de Strasbourg, CNRS, Nancy (France)

Is there still room at the bottom? The question providing the theme for the present issue of Philosophia Scientiæ is, of course, adapted from Richard Feynman’s well-known speech at the 1959 meeting of the American Physical Society. On this occasion he attracted physicists’ attention to the vast potential of working at the scale of the nanometre if not the ångström, using the catchy title: “Plenty of Room at the Bottom” [Feynman 1959]. This hookline from a famous Nobel laureate physicist served as a motto for the emerging field of nanoscience and nanotechnology (which we will here abbreviate to nanoresearch) in the early 2000s. Almost 20 years on, nanoresearch has become a well-established area of research and development in both the public and the private sectors, with a large volume of publications, undergraduate and graduate programmes, international conferences, start-ups and commercial applications. It seems timely, therefore, to reflect on where nanoresearch has come from, where it is now and the orientation for its development over the coming decades. The purpose of this issue is not to assess what has been achieved in nanoresearch in terms of breakthrough discoveries or return on research investment. Rather, the aim is to look back and consider whether nanoresearch changed the landscape of academic research and what kinds of reconfiguration of research practice it has brought about. More generally, what lessons can the philosophy of science learn from the form and development of nanoresearch?


With this project in mind, we launched a call for papers aimed principally at philosophers to address questions such as: what are the relationships between science and technology in nanoresearch? What is the impact of nanoresearch on traditional disciplines, notably in terms of the much-vaunted NBIC (Nanotechnology, Biology and medicine, Information sciences, and Cognitive Sciences) convergence hypothesis? We also invited researchers who have been active in the field since the early days of nanoresearch to present their own views in a short commentary format. These more personal texts are extremely valuable as they bring to light interesting critical issues in the field (both concrete and theoretical) in a lively and engaging manner.

1 Drexler’s Feynman: A founder myth for nanoresearch

Feynman’s futuristic visions of miniaturised machines and information systems were communicated to a wider general public by K. Eric Drexler, the self-appointed prophet of the coming nanotechnological revolution, in his successful 1986 book [Drexler 1986]. In his energetic campaign for a nanotechnology revolution, Drexler featured Feynman as the founding father of the nanotechnology era [Drexler 2004]. The Foresight Institute, a non-profit organization founded by Drexler’s disciples, even funds a Feynman Prize which has been awarded every year since 1997. This celebration of a charismatic physicist as the founder of a new scientific field, if not a new scientific era, is a quite typical instantiation of the phenomenon of the invention of precursors, familiar to historians of science.

This being said, associating Feynman’s name with the emergence of nanoresearch is certainly justified, since Feynman clearly did envision new opportunities for design at the atomic level. The miniaturisation of materials, machines and computing systems, Feynman argued, could be pursued all the way down to the displacement and rearrangement of individual atoms. In particular, information storage could be inspired by the “biological example of writing information on a small scale” seen in the molecule of DNA.1 Feynman pointed out that this new kind of design would require more powerful microscopes, which would allow physicists to cooperate with chemists in the synthesis of materials and molecules. He also insisted on the fact that atoms have specific behaviours guided by specific forces, and argued that these characteristics could provide new opportunities for design. For us, there is a striking resonance between Feynman’s predictions and today’s nanoresearch, but we can safely assume that Feynman’s colleagues did not interpret his speculations about the future in the same way as we do.

1. The famous paper on the double-helix structure of DNA—“The molecular structure of nucleic acids”—was published by Francis Crick and James Watson in 1953.

In the context of the Cold War, Feynman did not really need to draw his colleagues’ attention to atoms and their properties since a significant number of them were working in atomic physics. What might well have come across to them in the phrase “room at the bottom” is an alternative message like: “Hey guys! Instead of splitting atoms to make bombs or nuclear reactors, why not try using them to build something?” Feynman was suggesting a new alternative route for leaving the “military-industrial complex”, and this option seems to have attracted the attention of the physics community for about three years. But because most research infrastructure continued to be oriented towards nuclear physics, the perspectives for research outlined in “Room at the bottom” failed to engage physicists’ interest over a longer period.

Drexler’s reading of Feynman’s vision of the future proceeds from its decontextualization and recontextualization against the background of the emergence of new techniques of microscopy. As Feynman had rightly pointed out, microscopy turned out to be a critical factor for working effectively at the level of atoms and molecules: indeed, it was the construction of the Scanning Tunnelling Microscope (STM) by two IBM researchers in 1981 and its rapid appropriation by academic scientists that triggered the take-off of nanotechnology [Mody 2004a,b]. The possibilities of near-field microscopy were reinforced by the development of the Atomic Force Microscope (AFM), another crucial contribution to the take-off of nanoresearch.

In addition, Drexler adopted and translated Feynman’s visions in terms of “molecular manufacture”, while ignoring Feynman’s remarks on the specificities of physics at the nanolevel. Drexler was subsequently caught up in a controversy around these issues with chemists [Smalley 2001], [Whitesides 2001] and physicists [Jones 2004] alike, which discredited him in the eyes of the scientific community. Indeed, most scientists active in the early decades of nanoresearch had never heard of Drexler and his speculative views on molecular manufacturing [Toumey 2005, 2008].

Does this mean that Drexler and Feynman had no impact whatsoever on nanotechnology? One could argue, with a touch of cynicism, that the title of founding father of nanotechnology should be awarded not to Richard Feynman but to Bill Clinton, who launched the first, generously-funded National Nanotechnology Initiative at the end of his mandate at the White House. Furthermore, Drexler’s prediction that the bottom-up approach would soon consign the conventional top-down methods of chemical manufacture to a bygone era has proven wrong. Thirty years on, lithographic methods are still widely used for fabricating electronic nanodevices, despite significant advances in the bottom-up synthesis of nanomaterials through the self-assembly of molecules. The contemporary world of nano-engineering deploys a broad spectrum of techniques of synthesis ranging from chemical vapour deposition to molecular recognition, but precision top-down design still has a future. Nevertheless, as Chris Toumey argues in this volume, Drexler’s claims deeply influenced the popular view of nanotechnology.
Drexler’s grand futuristic visions also informed the hype around the earliest programs of nanoresearch, calling forth not only a plethora of metaphors such as nanorobots and nanomissiles, which appealed to various audiences, but also a range of extravagant promises that nanoscience would provide the solutions to all our contemporary problems, including depleted resources, pollution, cancer, and epidemic disease [Berube 2006].

2 From prophecies to promises

The first US Nanoinitiative launched in 2000 led to increased on-the-ground engagement in nanoresearch as well as a multiplication of the promises associated with this research. This culminated in the more ambitious program of Converging Technologies for Improving Human Performance, known by its acronym NBIC (Nanotechnology, Biotechnology, Information technology and Cognitive science), as presented in a report from 2003 commissioned by the National Science Foundation [Roco & Bainbridge 2003]. Nanotechnology was clearly meant to be the driving force in the proposed process of convergence.

This report held out three major promises. In simple terms, nanotechnology would make people healthier, wealthier, and wiser. First, the NBIC report announced a disciplinary “renaissance” in science due to the blurring of the boundaries between physics, chemistry, biology, computing and cognitive science at the nanoscale. The characteristic trajectory of modern science towards division into separate, independent disciplines and sub-disciplines would be countered by the cross-fertilization between the different disciplines characteristic of nanoresearch. Second, the rapid progress resulting from this convergence would yield technological advances of great interest to industry. Third, from an anthropological perspective, the report clearly announced that the resulting “disruptive” technologies [Bower & Christensen 1995] would enhance human performance both individually and collectively.

The report also established a strong connection between nanotechnology and transhumanism, which is not altogether surprising considering that its co-editor, William Bainbridge, was chair of the World Transhumanist Movement. The association between nanoscience and human enhancement underlined in the original report triggered a lot of public debate, but this aspect disappeared from the ten-year report on NBIC, supporting the idea that it was only initially present because of Bainbridge’s influence [Roco 2011].

Nevertheless, along with concerns about the public acceptance of nanotechnology, the transhumanist connection was an important factor behind the recruitment of social scientists onto a number of research programs on nanotechnology. Chris Toumey’s reflections on his own participation in this movement suggest that the various programs for anticipating the ethical, legal, and social impact (ELSI) of nanotechnology opened up opportunities for scholars in the humanities. On the model of biotechnology, social science in the nanosciences generated a mini-industry within the humanities, with hundreds of publications, a society, S.Net (Society for the Studies of New and Emerging Technologies), and a journal, NanoEthics, that provided a forum for discussing the issues raised by technologies that converge at the nanoscale. In the early decades this intensive scholarly production greatly favoured studies concerned with predicting the future as well as speculative ethics focused on human enhancement. But the journal and S.Net conferences also welcomed and continue to publish more concrete studies of regulation based on risk analysis and nanotoxicology, sociotechnical imaginaries and issues around public engagement in science.

The promise of disruptive technologies that could provide a competitive edge continues to attract venture capitalists, entrepreneur-scientists with their own start-ups, as well as larger, better established industrial groups, although public money remains the principal source of funding. The federal budget of the US National Nanoinitiative increased from $450 million in 2001 to $2.1 billion in 2011. While it actually decreased to $1.2 billion in 2018, the cumulative total investment has nevertheless topped $25 billion since the inception of the NNI at the beginning of the century. In Europe, the Graphene Flagship initiative launched in 2013 with a €1 billion budget over ten years suggests that nanomaterials retain their attractiveness. Graphene, a single layer of graphite often described as a “pure surface”, is presented as a wonder material that will lead to a significant leap forward in electronic devices, energy generation and storage, composite materials, and biomedical applications, among other prospective uses. Constructed around this innovative product, the Graphene Flagship project finances research into 2D materials while supporting partnerships between the scientific research community and private companies in order to leverage the available resources and encourage innovations with a direct impact on the European economy.

From the perspective of scientific disciplines, we have yet to see any clear signs of the reorganization of knowledge as a result of the convergence of technologies, notably the NBIC convergence triggered by nanoresearch. While traditional scientific disciplines are not showing signs of weakening, nanoresearch has certainly contributed to the blurring of boundaries between science and technology. It is through the synthesis of objects that researchers expect to better understand the physics of the nanoworld. In this research, knowing and making become fused into a single performance. Researchers are more interested in what atoms, molecules and genes can do, or in what they can do with them, than in what they are in a deeper ontological sense. The building blocks of matter and life are reconfigured as nanomachines and explored for their functionalities rather than as the structural units of matter and life. In his contribution to this journal, Alfred Nordmann underlines this shift in epistemic culture from theory to practice, suggesting that the nanosciences work with “closed theories” which they do not seek to elaborate. Thus, the goal of the nanosciences is first and foremost “proof of concept” which, according to industrial lore, represents the first step on the path to commercial applications.

Thanks in large part to the constitution of interdisciplinary teams to conduct nanoresearch around diverse projects, there are now many examples of cross-fertilization between disciplines and the emergence of new domains such as bioinformatics, nanophotonics and spintronics. Still, while research in the areas of nanophotonics, spintronics, and optoelectronics has been pursued under the aegis of nanoresearch projects, these areas are understood as being sub-disciplines of physics.

If anything, nanoresearch seems to have strengthened disciplinary affiliations. While many supramolecular chemists jumped onto the nanotechnology bandwagon in the 2000s, they did not leave their chemistry departments, nor did they stop publishing in their specialist disciplinary journals. The effect of nanoresearch has been to open up new sub-disciplines within chemistry such as surface chemistry or biomimetic chemistry rather than to construct a new independent discipline. The 2016 Nobel Prize in Chemistry awarded to James Stoddart, Jean-Pierre Sauvage and Ben Feringa clearly reinforced the chemists’ authority over the nanoworld. In their Nobel lectures, both Stoddart and Sauvage insisted that they used only the resources of chemistry—templates and molecular topology in particular—to make the catenanes, rotaxanes, switches, and shuttles that fuelled their research.

The positive reinforcement of traditional disciplines through nanoresearch is even more pronounced in the case of biology. Xavier Guchet’s contribution to this issue clearly demonstrates the gulf between the NBIC discourses and the actual practices of biologists. Nanotechnology has been instrumentalized by biologists to pursue either their own biomedical research, as is illustrated by the case of tissue engineering, or their own dreams, as can be seen more broadly in the area of synthetic biology. Although the BioBrick program, which has been the driving force for the promotion of synthetic biology, can be viewed as a perfect illustration of the NBIC agenda, combining as it does information technology with molecular biology, and exemplifying the bottom-up approach, from bricks to modules and then to systems, its promoters have chosen to identify their research field as a branch of biology, thus marking their distance from the NBIC program [Bensaude-Vincent 2013]. In examining the practices involved in synthetic biology in this issue, Luis Ujéda shows how the discourse about NBIC distorts and oversimplifies the complex process of interaction between biology, chemistry and computer engineering. We can conclude, therefore, that the NBIC program has triggered a centripetal dynamic in nanoresearch rather than the predicted convergence of disciplines.

Furthermore, as many have remarked [Vinck & Hubert 2017], and it is a theme taken up by several authors in the current volume, the contemporary world of nanoresearch in the late 2010s is highly diverse. Today “nano” functions more as a generic marker than as a label for any specific scientific domain.

3 The problematic epistemic status of nanoresearch

Thirty years after the emergence of the field, there is still no agreement as to what constitutes nanotechnology. Indeed, if the domain exists at all, it is defined a minima by reference to the nanometre scale, a criterion used, for example, by the National Nanoinitiative and by the ISO. In this “official” literature, nanotechnology is science, engineering, and technology conducted at the nanoscale, which is generally defined as running from 1 to 100 nanometres. This standard definition provides a tenuous basis on which to construct a unified field of research, let alone a viable scientific discipline.

There is no consensual definition shared between the various actors involved in the investigation, design, engineering or manufacture of nano-objects either. There are as many definitions as there are teams of nanoresearchers, and probably even more, as not everyone in a single team would agree on a single definition. Thierno Guèye, in his contribution to this volume, deplores the plurality of definitions, which he regards as a symptom of social relativism. He makes a brave attempt to construct an “objective definition” based on the genesis of nano-objects and detached from social actors. However, his epistemological approach leads to a definition of nanos as “all techno-scientific activities aimed at knowing or manufacturing more or less complex objects by means of STMs or similar microscopes”. Such a definition is far too restrictive to embrace the range of nano-objects that have come into existence over the past decades. Not all of them are produced using STMs, far from it. The definition is even less adequate to cover the range of research in the field, as many researchers use neither STMs nor AFMs and yet their work is clearly accepted as valid nanoresearch by their colleagues.

A central paradox of contemporary nanoresearch is the diversity that hides behind an apparent unity, or at least a proposed coherence, borne by the terms nanoscience and nanotechnology. Take, for example, a leading journal in the domain: Nature Nanotechnology, a domain-specific publication of the Nature group launched in 2006. If we look at the four featured research highlights articles from the May 2018 issue, we see a clear illustration of the diversity in the nanosciences. One article is about the two-dimensional structure of perovskites, exploring questions in materials science [Bubnova 2018]. A second treats issues of photon transfer for possible applications in nano-based telecommunications [Heinrich 2018]. A third article takes us into the domain of synthetic biology with a treatment of functional cellular material (in this case polymersomes) [Pastore 2018], and the last of the four “highlights” proposes a way to obtain electricity from water using oxygen-modulated carbon nanotubes in combination with untreated nanotubes as electrodes [Sun 2018]. While diversity is doubtless a criterion for selecting these highlight articles, it is clear that the editor had a range to choose from. When we compare these four articles with one another, it is not at all evident how the four subjects could be considered to belong to the same scientific field. And that, in a nutshell, is the glory of contemporary nanotechnology. The lack of consensus about a definition of nanos and the irreducible diversity of practices covered by this umbrella term are the symptoms of its peculiar epistemological status.
By no means can nanoscience be considered a “normal science” in Thomas Kuhn’s sense. Take the description Kuhn gives of normal science at the beginning of his classic book:

In this essay, “normal science” means research firmly based upon one or more past scientific achievements, achievements that some particular scientific community acknowledges for a time as supplying the foundation for its further practice. [Kuhn 1962, 10]

Here, we could follow the path suggested by Thierno Guèye and take STM (and to a lesser extent AFM) as paradigmatic, or the construction of certain nanomaterials, maybe nanotubes or graphene, as the scientific achievements in question, which might serve as the foundations for nanoscience. In very abstract terms, this approach may seem to do the job, lending coherence to a dispersed field. But, as we have just suggested, we do not have to look very hard to find projects that are considered nanoresearch but do not rely on these forms of microscopy and do not use these particular nanomaterials. More fundamentally, however, if we consider the examples that Kuhn used himself—Einstein’s theory of relativity, Lavoisier’s oxygen theory or Copernicus’ heliocentric astronomy—we see that our suggestions are not going to give us a science that resembles those Kuhn was thinking about. Even if STM and certain nanomaterials were adequate references for unifying the field, what the nanosciences would offer us is not a very good fit in terms of Kuhnian paradigms. Kuhn’s choice of examples suggests he is thinking about coherent, all-encompassing theories that dictate the scientist’s engagement with the world (both theoretical and practical), and it is hard to find any equivalent in the different areas of nanoresearch. There is no consensus on key theoretical assumptions, no universal (or generalizable) exemplars, nor is there really any well-defined, consensual body of practical nanotechnology that could constitute something like a paradigm.

Furthermore, nanoscience never became a discipline in the academic sense, despite being several decades old and boasting the typical markers of the establishment of a distinct research field such as specialist journals and annual conferences. In terms of scientific identity, when people in a nanoresearch team talk about their colleagues, they will refer to the physicist, the biologist, the chemist or the pharmacist in the team, rather than thinking of themselves as all being nanoscientists together. These older, better established disciplines are where the scientists have received their professional training, and it is where they look for recognition as well as the rules that guide their professional life and their research. Disciplinary affiliations are remarkably resilient and solidly resist the revolutionary claims of a new transversal field like nanoresearch.

One reason that nanoresearch has not (yet?) established itself as a new transdisciplinary domain is that it seems to be a particular kind of approach that can be applied to different types of objects, “enabling” interdisciplinary research projects. For instance, a physicist like Christian Joachim has taken advantage of the technological possibilities offered by new-generation STMs to further explore the quantum properties of atoms, with the vague objective of challenging the foundations of quantum mechanics as proposed by Louis de Broglie. Nanoresearch has also enabled pharmaceutical research to develop targeted drug-delivery techniques in order to improve the therapeutic effect of well-established molecules rather than searching for new miracle molecules. Hence the importance and the success of the military metaphors used to convey the view of increased precision and control rather than a new “arsenal” of drugs (see the contribution by Gry Oftedal to the present issue). More globally, the nano approach broadens the field of potentialities and enhances our capacity to act on molecules. Nanoresearch has certainly encouraged interdisciplinarity but failed to generate the expected transdisciplinarity through the convergence of disciplines. Significantly, the 2018 NSF budget of the US National Nanotechnology Initiative is still distributed among the different disciplines such as the biological sciences, computer and information sciences and technology, engineering, the geosciences, and the mathematical and physical sciences [NNI 2018]. In other words, traditional disciplines have resisted the NBIC offensive remarkably well.

4 A paradigmatic technoscience

If nanotechnology does not fit the Kuhnian paradigm-based model of normal science, it may be because it belongs to a new regime of knowledge production. Following other philosophers, we characterize this regime as technoscience, a term referring primarily to the merging of science and technology, not only in the sense of application-oriented research, but also in the sense that technology drives research projects, as STM has done in the emergence of nanoresearch.2 Nonetheless, nanoresearch is much more than technology-oriented or instrument-driven research. It is above all a painstaking laboratory practice of design of tiny, invisible, unfamiliar objects. Such molecular materials and machines are made neither as potential candidates for technological breakthroughs, nor as samples of physical or biological phenomena used to test theoretical hypotheses. These material realisations are interesting in their own right, and they serve as tools for exploring a new world of surprising and often unpredicted possibilities and opportunities.

2. This reciprocal internalization of science into technology and technology into science is what Gilbert Hottois had in mind when he coined the term technoscience [Hottois 1984].

In his assessment of the impact of nanoresearch on the philosophy of science, Alfred Nordmann emphasizes how it has elicited new responses to familiar epistemological issues such as the respective roles of theory and experiment or observation. However, his epistemological analysis of nanoresearch practices strongly suggests that nanoresearch is also raising new philosophical and ethical issues. For instance, the fact that nano-objects such as carbon nanotubes are constituted through the contingent circumstances of their particular histories raises difficult questions for risk assessment, and at the same time, brings about a new relationship between human subjects and the objects they produce. Nano-objects are so individualized by their genesis and their interactions with their environment that they cannot be treated as representatives of a class of natural phenomena. They are just made to work, individually, project by project, meaning that nanoscientists are often content with publishing “proofs of concept”. In this context, demonstrating the existence and the performance of a nanoresearch object is more important than establishing overarching natural laws. Nanoresearch thus subverts the hierarchy of epistemic values in science. Certain forms of nanoresearch look like a quest for exceptional performance in extreme conditions, like a spectacular sports race or the attempt to establish a new world record. Sacha Loeve’s contribution to this issue presents the International NanoCar Race as an example of the “gamification” of nanoresearch through an international competition. But although this race is presented as a game, it is a serious game. The race was set up with the purpose of defining standards for nanocars through a comparison of their behaviour in the same experimental setting.

5 Enduring ethical challenges

Faced with a proliferating and ramifying domain of nanoresearch that conforms to the classical canons of neither theoretical science nor technological innovation, we need to think carefully about the ethical issues raised by this domain of technoscience. Up until now, a significant proportion of the philosophical literature about the ethics of nanoresearch has been oriented towards the transhumanist potential of a convergence between biological engineering and nanoresearch. Nevertheless, as such applications of nanoresearch still seem a long way off, the ethical discussions around them remain rather speculative [Nordmann 2007]. On the other hand, researchers, policy makers, industrial managers, and civil activists are very concerned about the potential toxic effects of nanomaterials, and these issues will become all the more pressing as nanomaterials proliferate and find their way into greater numbers of everyday items. Such application-related issues have attracted most of the attention in the framework of ELSI programs. Thus, what we might term the practical or technical ethical engagement has focused on the identification of risks (mostly health risks) and attempts to evaluate or to quantify them. Nevertheless, where such risks have been identified, as in the discussion around the potential pulmonary toxicity of carbon nanotubes, the potential health and environmental dangers have so far been largely outweighed by the projected benefits expected from future applications of the materials in question.

To conclude this introduction, we would like to raise the much broader question of how to engage with the ethical questions raised by today’s nanoresearch. As we have just suggested, the current norm consists in anticipating and preventing the adverse effects of innovation in nanoresearch, particularly concerning the environmental effects of new nanoparticles. But this work remains in the mode of a managerial approach: risk assessment, prospective studies and preventive measures. Although undoubtedly wise, this kind of prospective preventive approach does not exhaust the range of ethical issues in a domain like nanoresearch, and it is time to start thinking more broadly about the subject. Let us start from the principle that we will adopt a consequentialist position and judge the rightness of our actions based on their consequences (rather than on the intentions of an agent or the inherent “goodness” of the action). This seems to be the position behind programmes of Responsible Research and Innovation (such as the one funded by the European Union—https://ec.europa.eu/programmes/horizon2020/). The problem here is that the range of consequences taken into account remains too limited and we find ourselves restricted to the domain of the technical fix for what is conceived of as a technical problem. If we are to develop an adequate analysis, we need to open up the scenarios and range of relevant actors much wider. Here, we can learn from the history of other synthetic materials, such as ethylene-based polymers, where the use and subsequent dissemination of the material has been far in excess of what was necessary or reasonable at the time of their introduction, leading to the ubiquitous presence of microparticles of these plastics across the globe.
The distribution and use of these synthetic polymers is the consequence of a whole range of factors, notably industrial and political policies, but also trajectories of individual use that have depended as much on the ready availability of the material as on its adaptation to unforeseen uses. The risk with novel nanomaterials is that we remain in the mode of a technical fix for a well-identified problem rather than conceiving of the introduction of these new materials into a world where industrial and political policies are the principal determinants of use. Thus, to be prepared for the new world of nanos, we need to open up the field instead of limiting it to the classically conceived issue of risk.

In this context, it is important to address a distinctive feature of nanoresearch which is picked up in a number of papers in this issue: the weaving together of commercial and economic goals with cognitive aims. These traditionally distinct goals have become so tightly bound up together in nanoresearch that it is difficult to tease them apart. The claims of economic competitiveness or national prestige that are put forward to legitimize the different national or regional nanoinitiatives are more than just rhetorical discourses. They impact the research practices and the design of nano-objects. Thus, these tiny objects come into being at the intersection of heterogeneous systems of values: epistemic values (proof of concept), technological values (control, efficiency), environmental and health values, as well as political and economic values. As they bring together these potentially conflicting systems of values, they challenge ethical and political categories, obliging philosophers to reconsider how democratic societies can handle such densely value-laden objects. In the face of the variety of actors involved, and the diversity of scientific, industrial, and geopolitical interests at stake, nanoresearch requires a continuous dose of reflexivity concerning both its fundamental research and its commercial applications.

Bibliography

Bensaude-Vincent, Bernadette [2013], Discipline-building in synthetic biology, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 44(2), 122–129, doi: 10.1016/j.shpsc.2013.03.007.

Berube, David M. [2006], Nano-Hype. The Truth behind the Nanotechnology Buzz, Amherst; New York: Prometheus Books.

Bower, Joseph L. & Christensen, Clayton M. [1995], Disruptive technologies: Catching the wave, Harvard Business Review, 73(1), 43–53.

Bubnova, Olga [2018], Stand up to stand out, Nature Nanotechnology, 13(5), 356, doi: 10.1038/s41565-018-0151-x.

Drexler, K. Eric [1986], Engines of Creation. The Coming Era of Nanotechnology, New York: Anchor Books; French translation by M. Macé, revised by Th. Hoquet, Engins de création : l’avènement des nanotechnologies, Paris: Vuibert, 2005.

—— [2004], Nanotechnology: From Feynman to Funding, Bulletin of Science, Technology & Society, 24(1), 21–27, doi: 10.1177/0270467604263113.

Feynman, Richard P. [1959], There’s plenty of room at the bottom, in: Miniaturization, edited by H. D. Gilbert, New York: Reinhold Publishing Co, 282–296, http://www.zyvex.com/nanotech/feynman.html.

Heinrich, Benjamin [2018], Telecom single photons thaw, Nature Nanotechnology, 13(5), 356, doi: 10.1038/s41565-018-0152-9.

Hottois, Gilbert [1984], Le Signe et la Technique. La philosophie à l’épreuve des techniques, Paris: Aubier.

Jones, Richard [2004], Soft Machines, Oxford; New York: Oxford University Press.

Kuhn, Thomas [1962], The Structure of Scientific Revolutions, Chicago: University of Chicago Press, 4th edn., 2012.

Mody, Cyrus [2004a], How probe microscopists became nanotechnologists, in: Discovering the Nanoscale, edited by D. Baird, A. Nordmann, & J. Schummer, Amsterdam: IOS Press, 119–133.

—— [2004b], Small, but determined: Technological determinism in nanoscience, Hyle – International Journal for Philosophy of Chemistry, 10(2), 99–128.

Nordmann, Alfred [2007], If and then: A critique of speculative nanoethics, NanoEthics, 1(1), 31–46, doi: 10.1007/s11569-007-0007-6.

Pastore, Chiara [2018], Sensing cellular cues, Nature Nanotechnology, 13(5), 356, doi: 10.1038/s41565-018-0153-8.

Roco, Mihail C. [2011], The long view of nanotechnology development: the National Nanotechnology Initiative at 10 years, Journal of Nanoparticle Research, 13(2), 427–445, doi: 10.1007/s11051-010-0192-z.

Roco, Mihail C. & Bainbridge, William Sims (eds.) [2003], Converging Technologies for Improving Human Performance. Nanotechnology, Biotechnology, Information Technology and Cognitive Science, Dordrecht: Springer, doi: 10.1007/978-94-017-0359-8.

Smalley, Richard [2001], Of chemistry, love and nanobots, Scientific American, September, 76–77.

Sun, Wenjie [2018], Energy from quiescent water, Nature Nanotechnology, 13(5), 356, doi: 10.1038/s41565-018-0154-7.

Toumey, Chris [2008], Reading Feynman into nanotechnology. A text for a new science, Techné: Research in Philosophy and Technology, 12(3), 133–168, doi: 10.5840/techne20081231.

Toumey, Christopher P. [2005], Apostolic succession: Does nanotechnology descend from Richard Feynman’s 1959 talk?, Engineering & Science, 68(1), 16–23, http://resolver.caltech.edu/CaltechES:68.1.Succession.

Vinck, Dominique & Hubert, Matthieu [2017], Les Nanotechnologies : l’invisible révolution, Paris: Le Cavalier Bleu.

Whitesides, George M. [2001], The once and future nanomachines, Scientific American, September, 78–83.

Nanotechnology, Beyond “Indefinability”

Thierno Guèye, Institut de Technologie Agroalimentaire, QC (Canada)

Résumé : Les premiers spécialistes, tant en sciences humaines qu’en physique et en chimie, se sont vite rendus à l’idée selon laquelle chacun pourrait avoir sa définition des nanotechnologies. Ils s’inscrivent ainsi dans ce que nous considérons comme une culture des « faits alternatifs » plutôt répandue dans les sciences contemporaines. Ce texte est une note critique autour de la difficulté à définir précisément ce qu’il faut entendre par nanosciences et nanotechnologies. Dans la première partie, nous faisons ressortir la multiplication des définitions, liée aux luttes économiques, aux enjeux stratégiques et aux intérêts hétérogènes des acteurs. Puis, nous cherchons à dépasser ce relativisme en nous focalisant sur les outils qui permettent l’accès à l’échelle nanométrique et sur les changements de propriétés inhérents à l’échelle quantique.

Abstract: Specialists in the humanities as well as in physics and chemistry quickly came to accept the idea that anyone could have his or her own definition of nanotechnology. They thus become part of what we see as a culture of “alternative facts” that is rather widespread in the contemporary sciences. This paper is a critical note on the difficulty of defining precisely what is meant by nanoscience and nanotechnology. In the first part, we highlight the proliferation of definitions, linked to economic struggles, strategic stakes and the heterogeneous interests of the actors involved. In the second part, we seek to overcome this relativism by focusing on the tools that allow access to the nanoscale and the changes in properties inherent to the quantum scale.

1 Introduction

Most current work on nanoscience and nanotechnology raises two recurrent problems: that of the diversity of definitions and that of the impossibility of defining this field of the new technosciences, whose contours we seem unable or unwilling to circumscribe. Thus, building on the studies by Céline Lafontaine [Lafontaine 2010] and by Dominique Vinck & Matthieu Hubert [Vinck & Hubert 2017], which bring to light the various reasons that make the question of the definition of nanotechnology so complicated, we propose to embark on the epistemological adventure that will consist, in the present article, in criticizing this de facto “indefinability” against which we inevitably stumble whenever we have to talk about nanos. We will thus work towards the sketch of an objective definition, that is to say one founded on the object, independently of the cacophony of the actors who contribute greatly to muddying the waters as soon as the question of definition arises. We will therefore first state the problem we propose to put into context, before advancing our main argument, which foregrounds not the actors, or more precisely the communities of actors, as it is customary to do in sociology, but the objects on which the activities of these communities are concentrated. In this respect, Loeve opens up some interesting avenues, but declares himself clearly uninterested in a convergent definition when he states:

Sharing nano-objects does not mean seeking a consensus. On the contrary, this paper gives prominence to divergence. [Loeve 2010, 5]

Our aim in this analysis, however, is to explore the capacity of these objects to provide us with relevant guidelines for grounding a project of definition that brackets the actors in order to focus on their targets. We will end by insisting on the fundamental character of the quantum properties induced by the passage to the nanoscale, which no serious definition can afford to ignore. We take up here certain arguments from the book we have devoted precisely to this question, Critique des nanotechnologies [Guèye 2019].

2 Contextualization of the problem

If we are to believe the experts of the International Organization for Standardization (ISO), notably the work of its committee responsible for drawing up the document on “nanotechnologies”, ISO/TC 229 (Nanotechnologies), and the technical committee IEC/TC 113 (Standardization in the field of nanotechnologies for electrical and electronic devices and systems), in their second edition, which cancels and replaces the first (ISO/TS 80004-1:2010): “the applications of nanotechnologies affect practically every aspect of life and allow considerable advances in the fields of communication technologies, health, manufacturing, materials and knowledge-based technologies”. But the ISO’s main concern is not epistemological in nature. Indeed, for this institute “it is necessary to provide industry and researchers with tools to assist in the development, application and communication of nanotechnologies”. And, above all, the ISO considers that “the harmonization of terminology and definitions is a fundamental objective in order to facilitate mutual understanding and coherent usage across all the communities that develop and use nanotechnologies”. ISO technical committee 229, responsible for the ISO/TS 80004 series of standards, considers that “as nanotechnologies are constantly evolving, the terms and definitions intended to facilitate communication have become increasingly specific and precise. For many communities, the meaning of terms such as ‘nanoscale’, ‘nanomaterial’ and ‘nanotechnologies’ follows from the logical application of the scale of SI units” [https://www.iso.org/obp/ui/#iso:std:iso:ts:80004:-1:ed-2:v1:fr, (accessed 21 March 2018)]. According to Marie-Hélène Fries [Fries 2016], the content of the definition proposed by this committee corresponds roughly to that given by the United States National Nanotechnology Initiative (NNI) in its 2010 report.1

In most scientific disciplines, disagreements rarely concern the definition, but rather interpretations or the choice of an approach. In nanoscience and/or nanotechnology, however, indeed “nanotechnoscience”, the community of researchers working in these fields of research and development seems resigned to accepting the idea that nanos are indefinable or, at the very least, difficult to define. We will try to show that the difficulty of this definition is not attributable to the nature of the object, but rather to the nature of the stakes involved. According to Dominique Vinck & Matthieu Hubert:

The situation is complicated, indeed, because the actors – researchers, institutions and companies in particular – do not agree among themselves about what should or should not be included in the domain of nanotechnology. They do not share the same point of view. They also have different interests, and each fights to impose one definition rather than another. As nanotechnologies cover a great diversity of objects, some try to define them broadly or, on the contrary, narrowly, while others limit the definition or pull it towards a subset. Some researchers consider that nanotechnologies should not be defined too broadly, otherwise too many scientists will improperly claim to belong to the field. Others think that an overly restrictive definition should be avoided, one that would limit nanotechnologies to the construction of nano-objects from individual atoms, because very few researchers in the world are interested in that. [Vinck & Hubert 2017, 21]

1. https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/nni_2011_budget_supplement.pdf, p. 3.

This observation leads us to question the wisdom of leaving the question of definition to the exclusive discretion of the actors. One is entitled to wonder whether a potentially objective definition can emerge from the minds of those whose undisguised interests exceed the academic framework. We should also take into account the scale of the sums at stake in the development of nanotechnologies, which amounted to 1.081 billion dollars in 2005 and then 1.775 billion in 2006 in the United States. However, after a few years of leadership by that country, which held the greatest diversity of publications and patent filings on “nanotechnologies”, since at least 2016 China has overtaken its competitors and currently stands as the leading player worldwide. According to Dominique Vinck & Matthieu Hubert, the Middle Kingdom accounts for more than a third of world scientific production (34.4%) on “nanotechnologies” [Vinck & Hubert 2017, 45–46].

In order to approach the problem of definition from another angle, we must ask about the identity of the authors of these definitions. In other words, who is speaking? It turns out that researchers, institutions and companies do not have the same interests, and therefore not the same motivations, where nanos are concerned. While researchers still apply themselves, more or less, to pushing back the frontiers of our ignorance while trying to win the favour of funders, institutions are obliged to take into account their financial possibilities and the economic and societal benefits. Companies, for their part, are motivated only by the optimization of the profits made possible by each nascent innovation. This situation leads to major conflicts of interest between these three parties, which are nevertheless condemned to collaborate. It is this situation that the expert Sylvain C. (one of Lafontaine’s respondents, given a fictitious name for the purposes of anonymity) sums up in the study conducted by Céline Lafontaine, when he states:

I was in the pharmaceutical industry in the United States at the time when boards of directors and company managements composed of scientists, whose mentality was to make discoveries to help people improve their health, were replaced by boards of directors that changed philosophy [...]. What matters is not to treat or cure the sick; what matters is to obtain returns for the shareholders. [Lafontaine 2010, 113–114]

It emerges from these remarks that, in such a configuration of actors, research can be oriented according to the will and the interests of the funders who finance it and who set its objectives. Such a hold of economic power, in addition to having repercussions on the question of definition, poses problems that fall under what we will call “epistemological ethics”. One may suspect that this omnipresence of non-scientific actors at the very heart of science policy allows them to orient and manipulate at will the course of the research they finance according to their own interests. As a result, the science of scientific publications is increasingly giving way to the science of patents.
In this respect, Lafontaine says, speaking of the Quebec model:

While the model of the researcher-entrepreneur put forward by NanoQuébec and the funding agencies raises questions about the scientific training of young researchers, the race for profitability and the growing place of industrial research within the university are, for their part, in contradiction with the mode of dissemination and exchange proper to the academic system. Whereas a researcher’s career rests on the capacity to inscribe his or her work in the international networks of scientific dissemination and exchange through publications and conferences, industrial research presupposes secrecy and the patenting of discoveries. [Lafontaine 2010, 109–110]

The researcher Sandra V., interviewed by Céline Lafontaine’s team, draws the following consequence:

[the] problem is that in industry, in general, secrets are kept; in science, we never keep secrets. [Lafontaine 2010, 110]

Be that as it may, the culture of secrecy can only maintain the vagueness that reigns around the definition of nanos, while jeopardizing the objectivity of the actors interested in the question. In such a context, what can definitions of nanoscience and nanotechnology really be worth? What credit can we give to a definition when we know that it carries within itself a plasticity tied to the self-interested goodwill of whoever does the defining? The problem has been delimited by Vinck & Hubert in their book Les Nanotechnologies : l’invisible révolution [Vinck & Hubert 2017], where they acknowledge, on the one hand, that the great diversity of objects covered by nanotechnology leads some to want to define it broadly or narrowly, while others want to limit it to subsets. On the other hand, according to them, some researchers want to avoid overly restrictive definitions that would reduce nanos to the construction of nano-objects atom by atom. But the reason why, in their view, this type of definition should be avoided does not seem to us pertinent for the attempt at definition we envisage here. Indeed, according to them, the reason why this part of the scientific community thinks we should be wary of overly narrow definitions is that few researchers in the world are interested in such work [Vinck & Hubert 2017, 21]. Even if such an approach may be relevant and justified in the sociology of science, from an epistemological point of view a good definition cannot let itself be distracted by the enthusiasm or otherwise, the competence or otherwise, of the actors. A similar piece of work to that of Vinck & Hubert was carried out by Xavier Guchet in part of his book, La Philosophie des nanotechnologies [Guchet 2014]. The latter goes so far as to consider that the anachronism that would consist in classifying past activities at the nanoscale would be pertinent, since he exhorts “historians of science and technology [to] retrace the construction of these different ways of relating to the nanoscale” [Guchet 2014, 51]. According to this philosopher of nanotechnology, there is “no univocal definition of nanotechnology, and no unity in this approach”. For him, “the only point common to all this research is the scale of intervention”.
The definition we are going to propose here will take account neither of this parameter [Guchet 2014], nor of the skill or otherwise of the researchers [Vinck & Hubert 2017], criteria which, despite their sociological relevance, would only serve to muddy the question further. To this end, let us examine the problems posed by some of the stakes involved in the multiplication of definitions, in order to construct one that is, in our view, more credible. For this purpose, let us refer once again to the text by Vinck & Hubert, where they set out the question clearly. According to them:

In fact, the stakes attached to the definition are important for researchers and industrialists because, behind the definitions, what is actually at issue is the allocation of resources (funding for research or for industrial development, public adhesion and social acceptance of products) and constraints (standardization of products, legislation, organization of research and development programmes). The definition is therefore strategic for the actors. [Vinck & Hubert 2017, 22]

Lafontaine had already maintained that:

The plurality of definitions given to nanotechnologies, their relative and more or less encompassing character, are in fact inseparable from the strategic and financial stakes associated with them. [Lafontaine 2010, 21]

According to her, researchers are fully aware of the political dimensions tied to the definition of their field of research. According to Schmid, Decker et al., these definitions and classifications also influence the way research is organized and the possibility of linking specific research programmes to research funding [Schmid, Decker et al. 2003, 19]. Thus, given this reality of the influence of the stakes underlined by the two authors cited above, what are we to do in the face of the accepted “multi-definability” of nanos, characterized by a multiplication of sometimes incompatible definitions? This situation of vagueness maintained around nanos seems to us contrived and difficult to accept, all the more so as there seems to be a consensus around the difficulty of defining them, to such a point that a researcher in pharmacy interviewed by Lafontaine’s team maintains the following:

I think that the definition of nanotechnology also depends on the field in which one works. [...] Every researcher in applied or fundamental science can find a definition of nanotechnology that corresponds to his or her research activities. [Lafontaine 2010, 21]

This amounts to saying that everyone has “their own physics” and that what matters is not the definition of “physics” as such, but the one that each individual who claims to do physics might give to this discipline. It is as if one were paraphrasing Protagoras, represented by Theodorus in Plato’s Theaetetus, in claiming that “man is the measure of all things”. Such relativism only reinforces the idea that in science one and the same fact can lend itself to opposing interpretations. Let us now see how this situation emerges from the definitions of a few scientists and “metascientists”.

3 Some examples of definitions

Selon Schmid, Decker et al. :

Analyzing what kind of endeavors are summarized under the topic “Nanotechnology” leads, e.g., to the well-known distinction between so-called “top down”- and “bottom up”-approaches. [Schmid, Decker et al. 2003, 13]

Au-delà de cette catégorisation en « top-down » et en « bottom-up » sur laquelle nous reviendrons, au terme de leur analyse, Schmid et ses collègues ont recensé six types de définitions des nanotechnologies. Cependant, alors que nous considérons ces définitions comme parfois incompatibles et souvent inappropriées, eux en tirent une conclusion pragmatique : le recours à l'ordre de grandeur, sans pour autant rejeter le caractère arbitraire de la seule référence à celui-ci. En effet, ils affirment qu'en mettant ensemble ces six définitions, on peut avoir une bonne impression de ce que sont les nanotechnologies [Schmid, Decker et al. 2003, 16]. En ce qui nous concerne, exposons quelques-unes des définitions que nous avons recensées avant d'en livrer une classification que nous empruntons à Vinck & Hubert. Si l'on se réfère à l'analyse de ces derniers :

Une première définition consiste à dire que les nanotechnologies sont l'ensemble des connaissances et des techniques grâce auxquelles on crée, manipule, visualise et utilise des objets (matériaux ou machines) qui sont de l'ordre du nanomètre. Elles concernent la conception, la caractérisation, la production et les applications de matériaux et de systèmes à cette échelle. [Vinck & Hubert 2017, 16]

Quant aux nanosciences, l'ethnologue des sciences et le sociologue nous en proposent la définition suivante :

Les nanosciences désignent l’étude scientifique des phénomènes et des objets à l’échelle nanométrique, dont les propriétés diffèrent parfois par rapport à ce qui est observé à plus grande échelle. [Vinck & Hubert 2017, 16]

Afin de se faire une petite idée de la diversité des définitions, nous pouvons en considérer quelques-unes. Si l’on prend la définition admise par Crichton dans l’introduction de son roman-fiction, La Proie, « elles [les nanotechnologies] s’intéressent à la construction de machines d’une taille infiniment petite, de l’ordre de cent nanomètres, soit cent milliardième de mètre, et donc environ mille fois plus petites que le diamètre d’un cheveu » [Crichton 2003, 11–12]. La Royal Society britannique, quant à elle, estime que « les “nanotechnologies” sont la conception, la caractérisation, la production et l’application de structures, dispositifs et systèmes par contrôle de la forme et de la taille à l’échelle nanométrique ». Wautelet, en revanche, propose de définir les nanosciences et les nanotechnologies « comme étant les sciences et les technologies des systèmes nanoscopiques » [Wautelet 2006, 5]. Pautrat, en ce qui le concerne, considère que la dénomination « nanotechnologie » est complètement galvaudée puisqu’à son avis :

Très logiquement, elle désigne les techniques que développe la microélectronique pour se faire toujours plus miniaturisée et parvenir à fabriquer des objets dont les dimensions s’approchent du nanomètre. Par extension, ce vocable recouvre aussi les nombreuses technologies qui bénéficient des procédés mis au point pour la microélectronique, mais sont exploitées dans d’autres domaines scientifiques et techniques, en biologie et en médecine notamment. [Pautrat 2002, 16]

La multiplication des définitions, contrairement à ce que pensent Schmid, Decker et al., rend bien compte du caractère controversé du concept [Schmid, Decker et al. 2003]. C'est aussi pourquoi, après avoir noté que l'essentiel des désaccords porte sur les frontières du domaine, Vinck & Hubert proposent l'analyse suivante, qui nous conforte dans la conviction qu'on ne peut pas négliger la question qui nous préoccupe ici :

Quand on parle d’un objet nanométrique, de quoi s’agit-il ? Seulement des objets mesurant environ quelques nanomètres ou ceux mesurant maximum quelques nanomètres ? Dans ce cas, il faut considérer que chimistes, physiciens et biologistes font déjà des nanosciences depuis longtemps. Les physiciens, dans le domaine nucléaire, travaillent sur les atomes depuis plus de cinquante ans, les chimistes depuis plus d’un siècle — en catalyse, pharmacochimie et chimie des polymères. De même pour les biologistes qui étudient les virus ou les interactions entre la cellule et son environnement. [Vinck & Hubert 2017, 17] Les nanotechnologies, par-delà l’« indéfinissabilté » 27

Ainsi, nous pouvons dire sans équivoque que si les nanos correspondent à quelque chose qui existait auparavant, alors elles ne sauraient mériter l’intérêt qu’on leur prête aujourd’hui puisqu’elles s’inscriraient dans une continuité presque ordinaire. Et, d’une certaine façon, le diable est dans la taille puisqu’en voulant définir les nanotechnologies par leurs unités de mesure, on se trouve confronté à l’impossibilité de délimiter le champ d’extension des nanotechnologies. Ce qui discrédite les discours qui cherchent à présenter les nanos comme quelque chose d’inouï. D’autres distinguent les nanotechnologies et les nanosciences. Dans cette catégorie nous avons les définitions des Ratner [Ratner & Ratner 2003, 1] qui considèrent que « les nanosciences se situent au point de rencontre des sciences et des techniques traditionnelles, de la mécanique quantique et des processus les plus fondamentaux de la vie ». Alors que « les nanotechnologies, quant à elles, exploitent les connaissances tirées des nanosciences pour créer des matériaux, des machines et des dispositifs qui vont modifier de façon radicale notre manière de voir et de travailler ». Ainsi, de leur point de vue :

Les nanosciences étudient les principes fondamentaux des molécules et des structures [nanostructures] dont la taille est comprise entre 1 et 100 nm. Les nanotechnologies utilisent ces nanostructures pour l'élaboration de dispositifs à l'échelle nanométrique. [Ratner & Ratner 2003, 9]

Mais nous savons, ainsi qu'en attestent fort à propos les conclusions de Lafontaine, que deux choses ressortent de son analyse de la distinction entre nanosciences et nanotechnologies : d'une part, « cette distinction renvoie davantage à des dimensions économiques qu'à des dimensions épistémologiques » [Lafontaine 2010, 32] ; d'autre part, « si la distinction entre nanoscience et nanotechnologie subsiste dans la plupart des manuels et des ouvrages de vulgarisation, elle se montre beaucoup moins claire et rigide dans le discours des chercheurs » [Lafontaine 2010, 84]. Pour ces deux raisons, et une troisième liée à la nature intrinsèque de l'objet, dont la contrôlabilité dépend des possibilités de manipulation et de maîtrise des effets quantiques, sur lesquelles nous reviendrons plus tard, nous optons pour une définition des nanotechnologies qui tient compte de cette difficulté à faire une franche dichotomie entre ces champs des nanos. On peut noter que toutes les définitions citées dans ce texte s'intéressent à la taille de l'objet des nanosciences ou des nanotechnologies, qui constitue un enjeu important. Dans cette optique, Vinck & Hubert [Vinck & Hubert 2017], tout comme Schmid, Decker et al. [Schmid, Decker et al. 2003], remarquent que certains insistent sur toutes les dimensions de l'objet, auquel cas les nanotechnologies se limiteraient aux nano-objets, nanorobots et nanoparticules dont toutes les dimensions seraient inférieures à 100 nanomètres, alors que d'autres considèrent qu'au moins une des dimensions doit s'exprimer en nanomètres. De ce fait, tous les objets qui peuvent mesurer plusieurs microns, millimètres, centimètres ou mètres de long, tels que les nanofils ou les nanotubes, et de large, à l'instar des couches nanométriques, sont concernés. Ensuite, il y a les cas où ce sont les dimensions d'un élément faisant partie de l'ensemble qui déterminent la classification nano. Vinck & Hubert donnent l'exemple des microsystèmes composés de plusieurs pièces dont certaines sont de dimension nanométrique, à l'instar des microprocesseurs de l'industrie électronique qui sont de l'ordre du centimètre, mais comportent des transistors nanométriques. Puis on peut noter un autre type de définition, qui ne tient pas compte des dimensions extérieures des objets, mais plutôt du degré de précision lors de leur fabrication. Dans ce cas, ce qui compte, c'est la précision de l'ordre du nanomètre avec laquelle les dimensions de l'objet ou son maillage interne sont réalisés. Enfin, certaines définitions intègrent la structuration « au hasard » dans la masse d'un matériau, comme l'inclusion de nano-objets ou de nanofils pour le renforcer, notamment dans les raquettes et les pneus. Selon Vinck & Hubert, on parle alors de nanomatériaux, même si « cette définition est également ambiguë car quasiment tous les matériaux (ciment, métaux, bois...) sont, naturellement ou pas, nanostructurés » [Vinck & Hubert 2017, 19]. D'où cette conséquence qui problématise très clairement la question comme suit :

Les nanotechnologies, oui, sont donc bien des technologies de l’infiniment petit, mais derrière cette définition englobante se cachent des choses très différentes les unes des autres. Les experts, soucieux de ne pas tout mélanger, construisent alors des définitions plus précises, mais ils ne s’accordent pas entre eux sur ces définitions, même quand il s’agit de préciser la taille des objets « nanos ». [Vinck & Hubert 2017, 19]

Nous inscrivant résolument dans l’optique de la rigueur et de la précision épistémologique, même si celle-ci doit être à la fois normative et descriptive, nous allons à présent décliner l’approche définitionnelle que nous préconisons.

4 Deux contraintes et quatre paramètres pour une définition

Peut-on se contenter d'un tel relativisme dans la définition d'un champ scientifique ? Nous pensons que la confusion orchestrée par les intérêts hétérogènes des acteurs et les grands enjeux autour des nanos ne sauraient avoir raison des projets de définitions dépassionnées de ce nouvel espace technoscientifique. C'est pourquoi nous avons jugé utile, d'une part, de mettre en cause toute situation favorable à la confusion afin de proposer une définition basée sur l'histoire de l'avènement des nanos. D'autre part, nous pensons qu'une définition crédible doit se démarquer de ce qui existait déjà avant l'avènement des nanos, notamment les microsciences et les microtechnologies qui ont révolutionné la miniaturisation au milieu du XXe siècle. En définitive, on ne devrait pas confondre miniaturisation et nanotechnologie. Car ce qui est inouï, et que ne manquent pas de souligner Schmid, Decker et al. [Schmid, Decker et al. 2003, 15–16] dans les définitions 5–6 (voir note 2) ainsi que les Ratner, c'est qu'« inversement, la nanofabrication ascendante consiste à construire une nanostructure à partir d'atomes individuels » [Ratner & Ratner 2003, 14]. À ce propos, Christian Joachim parlera de « monumentalisation » [Joachim & Plévert 2008]. Nous savons qu'il y a deux éléments déterminants dans l'avènement des nanotechnologies. Le premier est un évènement historico-technologique : la création du microscope à effet tunnel par Gerd Binnig et Heinrich Rohrer en 1981, qui leur a valu le prix Nobel de physique en 1986. Le second est la localisation des propriétés de la matière, à cette échelle, au cœur des lois de la mécanique quantique. De ce fait, toute définition désireuse d'échapper au piège des enjeux doit, à notre sens, tenir compte de ces deux dimensions pour les raisons suivantes, que nous tenons pour des contraintes :

1. La mise au point de ce microscope à effet tunnel nous a permis d'avoir le premier contact matériel qui soit avec des objets à cette échelle, jusqu'ici chasse gardée de la mécanique et de la physique quantique. Ce n'est pas le début de l'observabilité des nano-objets, comme nous allons le voir, mais c'est ce qui ouvre l'ère de leur manipulation plus ou moins directe par l'humain ainsi que de leur contrôlabilité.

2. L'accès à ce niveau de la matière ouvre un nouvel angle de l'univers quantique à la science et à la technologie, avec les effets d'échelle propres à cette dimension, ce qui conforte l'idée selon laquelle « ce qui est à l'échelle nanométrique n'est pas simplement tout petit, c'est aussi et surtout quelque chose de différent dans la manière d'être tout petit » [Ratner & Ratner 2003, 8].

Si nous tenons compte de ces deux points, nous pouvons proposer de définir les « nanotechnologies », ce concept nous paraissant moins approprié que celui de « nanotechnosciences », comme toute activité scientifico-technologique visant, par la manipulation contrôlée, à connaître ou à manufacturer des objets plus ou moins complexes, le moyen exclusif d'y parvenir étant, dans l'état actuel de nos connaissances et de nos moyens technologiques, un microscope de la génération des microscopes à effet tunnel ou les technologies de nanofabrication automatisées en cours de développement. Par conséquent, faire œuvre de nanotechnoscience requiert la réunion de ces quatre paramètres : l'« observabilité », la manipulabilité, la contrôlabilité et le changement de propriétés à l'échelle atomique et/ou moléculaire. En effet, la seule chose qui fait que la taille compte, ce n'est en aucun cas l'observabilité, mais le changement de propriétés à l'échelle nanométrique. Autrement, rien ne distinguerait les nanotechnologies des microtechnologies ni même des macrotechnologies, ce qui les inscrirait de facto dans la dynamique d'évolution de la miniaturisation plutôt que dans une révolution. Il n'y aurait aucune rupture, mais seulement une conversion d'unités de mesure qui pourrait se faire du micro au nano ou au pico, voire à une échelle encore inférieure.

Note 2. Definition 5 : « Object of Nanotechnology is the production and application of structures, molecular materials, internal and external surfaces in critical dimensions or production tolerances of some 10 nm to atomic scales. [...] Aim is the preparation of material dependent properties of solids and their dimensions and new functionalities based on new physical-chemical-biological impact principles, caused by the submicroscopic respectively the atomic or molecular area. [...] Nanotechnology is dealing with systems with new functions and properties which depend solely on nanoscale effects of their components » [Schmid, Decker et al. 2003, 15]. Definition 6 (BMBF 2002) : « Object of Nanotechnology is the production, analysis and application of functional structures whose scales are in the area of below 100 nm. [...] An atom or a molecule does not show the physical properties we are “used to” like electrical conductivity, magnetism, color, mechanical rigidity or a certain melting point. [...] Nanotechnology is taking place in the intermediate area between individual atoms or molecules and larger ensembles of atoms and molecules. In this area new phenomena appear which cannot be detected on macroscopic objects » [Schmid, Decker et al. 2003, 16].

L'histoire du microscope de Savile Bradbury, notamment dans The Evolution of the Microscope [Bradbury 1967], apporte une démonstration très éloquente de l'observabilité à ces échelles dès les années 1960. En effet, on savait déjà accéder aux échelles de l'angström (1 Å = 0,1 nm, soit une échelle voisine du nanomètre) bien avant les années 1960, comme en atteste le tableau ci-dessous, auquel nous avons ajouté la conversion en nanomètres :

Date   Instrument                          Approximate resolving power
1934   Ruska                               ca. 500 Å [50 nm]
1934   Driest and Müller                   ca. 400 Å [40 nm]
1936   Marton et al., E.M.I                better than 1 µ (micron)
1939   Siemens (first commercial model)    100 Å [10 nm]
1940   RCA type “B”                        25 Å [2,5 nm]
1940   Siemens                             25 Å [2,5 nm]
1944   RCA type EMU                        ca. 20 Å or better [2 nm]
1945   Metro-Vick E.M.2                    100 Å or better [10 nm]
1950   Metro-Vick E.M.3                    average 35 Å ; best 25 Å [3,5 nm à 2,5 nm]
1953   Siemens ÜM100e                      10 Å [1 nm]
1954   Siemens Elmiskop I                  ca. 9 Å [0,9 nm]
1956   A.E.I. E.M.6                        prototype 10 Å [1 nm] ; later improved to ca. 5 Å [0,5 nm]
1964   A.E.I. E.M.6B                       3 Å [0,3 nm]

Tableau 1 – Some electron microscopes and their approximate resolving power [Bradbury 1967, 31]

5 L'apport des nouveaux outils qui nous ont ouvert le champ des nanotechs

Parmi les outils les plus connus pour observer et mesurer les nanostructures, les microscopes à balayage figurent au nombre des premiers instruments à avoir été utilisés en nanosciences. La spectroscopie, qui est l'étude du spectre des rayonnements lumineux émis, absorbés ou diffusés par un milieu matériel, est bien plus ancienne que les techniques des nanoscopes à balayage ; il en va de même de l'électrochimie, moyen par lequel on transforme des processus chimiques en courant électrique ou par lequel, inversement, on produit des réactions chimiques à partir d'un courant électrique. À cette énumération, il faut ajouter le microscope électronique, qui remplace la lumière utilisée en optique classique par des flux d'électrons. En vérité, les premiers instruments à balayage réellement efficaces qui ont été développés sont les Scanning Tunneling Microscopes (STM) ou nanoscopes à effet tunnel, dont nous avons parlé plus haut. Cette découverte a permis de déplacer des atomes de xénon un par un et de les arranger sur une surface de nickel pour dessiner le sigle IBM. Afin de lever une équivoque, il n'est pas superflu de préciser que le choix du concept « nanotechnologies » plutôt que celui de « nanosciences » tient au fait que, dans le champ des nanos, on manipule pour comprendre à défaut de voir, tel que le précise Lafontaine :

Contrairement à la conception courante de l'instrument comme simple prolongement des sens, l'instrumentation utilisée en nanotechnologies est indissociable d'une logique de contrôle et de manipulation. Puisque l'observation physique d'un atome est impossible, seules la manipulation des atomes et la modélisation informatique permettent de visualiser le phénomène. Le rapport fusionnel qui s'instaure entre perception et manipulation à l'échelle nanométrique participe d'une épistémologie proprement technoscientifique qu'on peut résumer par l'expression « voir, c'est faire ». [Lafontaine 2010, 82]

Ainsi, les limites des capacités de visualisation et de contrôle des acteurs en salle blanche sont liées à celles des mêmes instruments qui leur permettent d’accéder à l’information à cette échelle. Du coup, toutes les propriétés ne sont effectivement pas contrôlées, puisque de l’aveu de ce chercheur en génie physique, « on ne peut pas les voir » [Lafontaine 2010, 82]. Ainsi, on fait pour connaître et pour connaître il faut faire. Cette exigence liée à la nature même de l’objet nanométrique justifie l’étroite proximité entre science et technologie. Ce qui fait que :

La primauté épistémologique accordée à la manipulation atteste du développement d'un modèle technoscientifique où le faire a préséance sur le connaître. De fait, il n'est pas surprenant de constater que les frontières entre sciences et technologies sont, à l'échelle du nanomètre, fluides et changeantes. [Lafontaine 2010, 83]

C’est pourquoi, en plus d’être enchevêtrées, nanosciences et nanotech- nologies ont des frontières quasi plastiques difficiles, voire impossibles, à circonscrire. Alors, très souvent, le terme nanotechnologie est synonyme de celui de nanoscience, car l’activité nanoscientifique est inséparable du nanotechnologique, le « voir » est toujours assujetti à la manipulation et au « faire ». Dans ce contexte, parler de « nanotechnoscience » ne serait pas qu’un simple jeu de mots. Si une ligne de démarcation devait subsister, on la trouverait en aval de la pratique nanotechnologique, autrement dit, dans les résultats. Ce qui constituerait une forme de conséquentialisme, tel qu’une activité de recherche aboutirait à une application technologique ou une explication scientifique. On pourra donc utiliser ce résultat comme critère de démarcation a posteriori afin de distinguer les deux domaines de recherche et retrouver ainsi, avec une certaine contrariété certes, nos repères traditionnels de clivage entre science et technologie. C’est dans la perspective de cette perturbation que Jean-Louis Pautrat prédisait, dans une plus large mesure, que « la société, elle, sera inévitablement perturbée par l’introduction des technologies de miniaturisation » [Pautrat 2002, 17]. En vérité, depuis leur naissance, les nanos n’ont fait que bousculer nos us scientifico-technologiques, à un tel point que certains parlent de seconde « révolution industrielle » et d’autres évoquent même l’idée d’une « révolution scientifique ». L’examen de ces questions ne constitue pas l’objet du présent article, même si nous pensons qu’elles méritent plus d’attention de la part des chercheurs.

6 Effets d’échelle et changements de propriétés

Nous pouvons, à présent, introduire notre second argument qui consiste à dire que l’une des nouveautés ayant accompagné l’avènement des nanotechnologies est l’entrée dans l’univers quantique, qui obéit à des lois différentes de celles de l’échelle microscopique. De ce fait, tant qu’on ne fait qu’exploiter les propriétés de l’univers quantique dans l’optique des technologies du tout petit, on peut considérer que l’on est toujours dans le cadre de la miniaturisation que nous connaissions déjà avant l’avènement des nanos, ainsi que les microtechnologies savaient déjà le faire avec succès. Afin d’illustrer notre propos, référons-nous de nouveau à l’ouvrage Les Nanotechnologies : la révolution de demain [Ratner & Ratner 2003], où, s’appuyant sur les propriétés de l’or, les auteurs nous fournissent l’illustration suivante :

Imaginons, nous disent-ils, un lingot d'or de 1 m de côté. Découpons-le de moitié en largeur, hauteur et longueur. Cela donne huit petits cubes faisant chacun 50 cm de côté. Les propriétés de ces huit morceaux sont les mêmes que celles du cube initial : ils sont en or, jaunes, brillants et lourds, en métal ductile, conducteur d'électricité et avec le même point de fusion. À présent, découpons de nouveau de moitié chaque nouveau morceau obtenu : cela donne huit petits cubes de 25 cm de côté. Rien n'a encore fondamentalement changé. Continuons l'opération de découpage pour obtenir des sections de l'ordre du centimètre, puis du millimètre, et enfin du micron : chaque minicube obtenu, invisible à l'œil nu (il faut recourir à des instruments d'optique), a toujours les mêmes propriétés chimiques que le cube de 1 m de côté. À l'échelle macroscopique, les propriétés chimiques et physiques d'un matériau, quelle que soit la matière (or, fer, plomb, plastique, glace ou cuivre), ne dépendent pas de sa taille. En revanche, à l'échelle nanométrique, tout change, même la couleur de l'or, son point de fusion et ses propriétés chimiques. Cela tient aux interactions qui, dès lors, s'effectuent entre les atomes constitutifs de l'or. Si celles-ci se neutralisent (en s'équilibrant) et annulent leurs effets au sein d'un morceau massif de métal, il en va différemment pour un morceau de taille nanométrique du même métal [Ratner & Ratner 2003, 14].

La question qui se pose est celle de la légitimité d'une définition des nanotechnologies qui ne tiendrait pas compte des observables évoqués ci-dessus. En tout état de cause, toute définition qui ne prend pas en considération ce dépassement de l'univers de la physique classique ne saurait tirer ses fondements des propriétés intrinsèques de la matière qu'elle prétend étudier. Car, à l'instar de Wautelet, nous savons que :

Du macro au nano on traverse deux limites physiques. Le problème est compliqué par le fait que : 1°) ces limites ne sont pas abruptes ; 2°) elles dépendent de l'effet considéré ; 3°) elles sont fonction du ou des matériaux impliqués. Un prérequis à tout travail sur les nanotechnologies demande donc de comprendre comment les propriétés varient avec l'échelle des dimensions. Pour cela, le recours aux lois d'échelles est d'une grande utilité. [Wautelet 2006, 9]
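Les « lois d'échelle » qu'évoque Wautelet peuvent s'illustrer par un calcul élémentaire : dans un cube comptant n atomes par arête, la fraction d'atomes situés en surface est de l'ordre de 6/n ; négligeable pour un lingot massif, elle devient prépondérante lorsque l'arête approche le nanomètre. L'esquisse suivante, purement indicative (taille atomique approximative et noms de variables supposés, qui ne figurent pas dans les ouvrages cités), rejoue en Python le découpage successif décrit par les Ratner :

# Esquisse illustrative du découpage décrit par les Ratner.
# Hypothèse simplificatrice (non tirée des ouvrages cités) : cube parfait,
# atomes d'or assimilés à des cubes d'environ 0,3 nm d'arête.
A_ATOME = 0.3e-9   # arête « atomique » approximative, en mètres

cote = 1.0         # arête du lingot de départ : 1 m
while cote >= 1e-9:
    n = cote / A_ATOME                    # nombre d'atomes par arête
    fraction_surface = min(1.0, 6.0 / n)  # ~ 6/n lorsque n >> 1
    print(f"arête = {cote:8.1e} m ; fraction d'atomes en surface ≈ {fraction_surface:.1e}")
    cote /= 2.0                           # on coupe chaque cube de moitié

Entre 1 m et 1 nm, cette fraction passe d'environ 2 × 10⁻⁹ à près de 1 : ce basculement, et non la seule petitesse, contribue aux changements de propriétés évoqués (point de fusion, réactivité).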

Une telle assertion rend bien compte de la complexité de la tâche. C'est pourquoi, quand on parle de physique macroscopique, il ne faut pas omettre le fait que quelques-unes des propriétés macroscopiques peuvent être interprétées et expliquées par la mécanique quantique. Mais les microtechnologies n'interviennent pas directement au cœur du « nanomonde », au niveau où les interactions se font selon l'électromagnétisme qui caractérise la taille nano, c'est-à-dire 10⁻⁹ m. Les nanoscopes, qui doivent composer avec les effets d'échelle, le permettent par contre. C'est pourquoi, pour Louis Laurent :

Les lois de la physique s’appliquent à toutes les échelles, mais leur importance relative dépend de la taille des objets considérés. À l’échelle macroscopique, c’est la physique classique qui domine. Les effets de la mécanique quantique, qui gère le comportement des particules élémentaires, sont alors imperceptibles (bien qu’ils existent puisque des phénomènes comme la conductivité électrique en dépendent). À l’échelle sub-atomique, c’est le contraire : la mécanique quantique prend le pas sur la physique classique. Or, les objets nanométriques se situent en quelque sorte à la « frontière » entre ces deux échelles. C’est pourquoi, des effets dus à la mécanique quantique se font parfois très présents... et surprennent notre intuition forgée dans le monde macroscopique. [Laurent 2007, 35–36]
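Pour fixer un ordre de grandeur sur cette « frontière » (illustration ajoutée, qui n'est pas tirée de [Laurent 2007]), on peut rappeler que l'énergie de confinement d'un électron dans une boîte de taille L croît comme 1/L² :

\[
E_{\mathrm{conf}} \sim \frac{\hbar^{2}}{2\, m_{e}\, L^{2}} \approx 0{,}04\ \text{eV pour } L = 1\ \text{nm},
\]

soit l'ordre de grandeur de l'énergie thermique ambiante (environ 0,025 eV), alors qu'elle est environ un million de fois plus faible pour L = 1 µm. C'est en ce sens que des effets quantiques imperceptibles à l'échelle macroscopique deviennent sensibles à l'échelle nanométrique.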

La National Nanotechnology Initiative (NNI), par la voix de Mihail Roco, dit que « le nanomètre (un milliardième de mètre) est un point magique de l'échelle des dimensions. Les nanostructures se situent entre ce que l'homme peut fabriquer de plus petit et les plus grandes molécules du monde vivant » [Ratner & Ratner 2003, 9]. Cependant, la démarche définitionnelle des communautés scientifiques que rapportent, de façon précise et fort à propos, [Lafontaine 2010], [Loeve 2010] et [Vinck & Hubert 2017], nous semble poser un problème épistémologique, voire éthique (aspect non traité ici), qui mérite un peu plus d'attention de la part des spécialistes des questions scientifiques en sciences humaines. Une des conséquences de cet amalgame est qu'experts et dilettantes en arrivent à insérer dans la définition de cette discipline une autre qui, dans le fond, n'a probablement qu'un seul lien avec celle-ci, la miniaturisation. C'est ce que l'on pourrait penser à la lecture de Pautrat, dont nous avons exposé la définition plus haut [Pautrat 2002]. Si l'on en croit ce dernier, « nanotechnologies » rimerait avec « microtechnologies » et les premières ne seraient que l'aboutissement des dernières. Ce qui implique la continuité de nos savoir-faire qui, par une sorte d'évolution naturelle, ont atteint une étape importante dans leur développement, mais non cruciale ni révolutionnaire. Cette vision évolutionniste des nanos, même si elle ne nie pas la manifestation de nouvelles propriétés à cette échelle, ne la considère pas comme « démarquante ». En cela, elle tranche d'avec la vision de ceux qui considèrent ce basculement des technologies de l'infiniment petit vers la physique quantique comme capital, voire essentiel, pour comprendre le caractère radicalement nouveau de ce qui se joue autour des nouvelles technologies du lilliputien. Le problème ici, comme nous pouvons le constater, c'est que l'on accepte de définir les nanotechnologies comme une partie des microtechnologies ou d'en faire, purement et simplement, un synonyme. C'est l'un des mélanges de genres que nous avons voulu mettre en évidence dans ce texte tout en essayant de le juguler en proposant une démarche qui nous paraît plus appropriée, épistémologiquement parlant, afin de contourner le problème des lois sociologiques qui prévalent dans la construction d'une définition des nanotechnologies fondée sur les acteurs.

7 Conclusion

En définitive, la recherche des budgets ainsi que leur multiplication dans certains secteurs de la recherche académico-scientifique sont en train de modifier durablement le paysage académique et d'installer une nébuleuse conceptuelle qu'exploitent bon nombre de chercheurs en mal de financement et suffisamment réactifs pour saisir l'opportunité des finances privées dans les laboratoires. Du coup, l'offre marketing l'emporte sur l'honnêteté et la rigueur des chercheurs. Dominique Vinck parle de « marchandisation », qu'il définit comme l'« appropriation privée de la connaissance par la protection industrielle du dépôt de brevets, contrats spécifiant les termes d'un transfert ou d'un partage de connaissance et son espace de confidentialité » [Vinck 2005, 73–91]. Le financement devient l'une des préoccupations essentielles des laboratoires de recherche, avec son pendant moins péjoratif, la valorisation, qui prend le sens de « création de valeur économique ». Ce qui se joue derrière ces querelles définitionnelles, c'est la réalité d'une mutation de plus en plus perceptible dans nos univers académiques, avec le développement de stratégies de survie ou d'expansion des équipes de recherche, qui ont en ligne de mire la poursuite ou le démarrage de recherches scientifiques qui n'auraient pu se poursuivre ou avoir lieu sans ces stratagèmes des chercheurs, compte tenu de la disponibilité des financements et des projets pour lesquels ceux-ci sont octroyés. Ainsi, les financements ne dépendent plus exclusivement des projets scientifiques ; ce sont au contraire ces derniers qui se mettent au service des projets industriels civils et/ou militaires offrant les meilleurs budgets. D'où ce constat de Lafontaine sur la recherche canadienne :

L'engouement suscité par la maîtrise de l'infiniment petit à l'échelle internationale a donné lieu à un repositionnement des secteurs de recherche afin qu'ils puissent bénéficier de la manne des subventions. Presque exclusivement dépendants des fonds publics, les chercheurs québécois et canadiens participent activement à ce redéploiement afin de s'assurer une place au soleil du nanomètre. [Lafontaine 2010, 23]

Du coup, outre le caractère fédérateur des nanotechnologies, un certain nombre de recompositions des équipes de recherche et/ou la redéfinition de leurs domaines d'activité pourraient avoir un impact sur la configuration intrinsèque des équipes académiques. Cette reconfiguration à l'intérieur des sciences est l'un des aspects importants de la révolution scientifico-technologique annoncée. Pour Lafontaine, « au-delà de leurs multiples applications réelles ou virtuelles, les nanotechnologies annoncent non seulement une nouvelle façon de concevoir et de manipuler la matière, mais aussi un nouveau mode d'organisation de la recherche et du rapport entre science, économie et société » [Lafontaine 2010, 10]. Enfin, contrairement au Monsieur Jourdain du Bourgeois gentilhomme, on ne fait pas de la nanotechnologie sans le savoir. Dans le but d'éviter toute équivocité de notre définition liée aux facteurs que nous avons exposés dans ce texte, nous avons décidé de mettre en relief deux critères plus fiables et plus stricts. Le premier se base sur les instruments spécifiques qui nous permettent d'accéder à l'objet des nanotechnologies (nous n'en parlons pas ici, mais nous pensons que le terme « nanotechnoscience » serait plus approprié) et qui n'existaient pas avant 1981, date de l'invention du microscope à effet tunnel.
Et, le second se base sur les propriétés intrinsèques de l'échelle 10⁻⁹ m dont Roco dit qu'elles se trouvent à un niveau intermédiaire entre la taille des atomes ou des molécules isolées et celle des matériaux macroscopiques. Les nanotechnologies peuvent donc être définies selon ces deux caractères dont le premier permet l'« observabilité » haptique ou l'accessibilité à l'objet nanométrique rendue possible par les nouveaux instruments d'observation et de manipulation. Ce qui implique une prise en compte et une maîtrise des propriétés de la matière à cette échelle, d'où la contrôlabilité indissociable des changements de lois au niveau quantique. Tout compte fait, est-il encore pertinent de parler de nanotechnologie plutôt que de « nanotechnoscience » quand on connaît l'enchevêtrement sui generis entre science et technologie à l'échelle du nanomètre ? Telle est la question qui pourrait faire suite à la présente analyse critique de l'indéfinissabilité des nanotechnologies à laquelle un certain nombre d'acteurs impliqués dans ce domaine semblent se résigner. À ce sujet, Loeve ouvre une perspective tout à fait intéressante qu'il n'exploite pas jusqu'au bout malheureusement, lorsqu'il affirme :

Nanotechnology is a true techno-logy in the sense of knowledge of technical operations. The knowledge developed in its practices cannot be reduced to scientific knowledge. Molecular machines are not simply useful objects, although they might become that in the future. They can certainly generate applications, but they leave open other possibilities of constructing meaning and finality in their eventual uses. [Loeve 2010, 15]

Bibliographie

Bradbury, Savile [1967], The Evolution of the Microscope, Londres : Pergamon Press.

Crichton, Michael [2003], La Proie, Paris : Éditions Robert Laffont.

Fries, Marie-Hélène [2016], Nanomonde et Nouveau Monde. Quelques métaphores clés sur les nanotechnologies aux États-Unis, Grenoble : Ellug.

Guchet, Xavier [2014], Philosophie des nanotechnologies, Paris : Hermann.

Guèye, Thierno [2019], Critique des nanotechnologies, New York : Peter Lang, à paraître.

Joachim, Christian & Plévert, Laurence [2008], Nanosciences : La révolution invisible, Paris : Seuil.

Lafontaine, Céline [2010], Nanotechnologies et société, Montréal : Les Éditions du Boréal.

Laurent, Louis [2007], Les Nanos vont-elles changer notre vie ?, Paris : Spécifique Éditions.

Loeve, Sacha [2010], About a definition of nano : How to articulate nano and technology, HYLE – International Journal for Philosophy of Chemistry, 16(1), 3–18, www.hyle.org/journal/issues/16-1/loeve.htm.

Pautrat, Jean-Louis [2002], Demain le nanomonde : Voyage au cœur du minuscule, Paris : Fayard.

Ratner, Mark & Ratner, Daniel [2003], Nanotechnologies : La révolution de demain, Paris : CampusPress.

Schmid, Günter, Decker, M., et al. [2003], Small Dimensions and Material Properties : A Definition of Nanotechnology, Graue Reihe, vol. 35, Bad Neuenahr-Ahrweiler : Europäische Akademie.

Vinck, Dominique [2005], Ethnographie d'un laboratoire de recherche technologique : analyse de la médiation entre recherche publique et appropriation privée, Sciences de la Société, 66, 73–91.

Vinck, Dominique & Hubert, Matthieu [2017], Les Nanotechnologies : l’invisible révolution, Paris : Le Cavalier Bleu.

Wautelet, Michel [2006], Les Nanotechnologies, Paris : Dunod.

The Role of “Missile” and “Targeting” Metaphors in Nanomedicine

Gry Oftedal Department of Philosophy, Classics, History of Art and Ideas, University of Oslo (Norway)

Résumé : Dans cet article, je soutiens que les métaphores « missile » et « ciblage » dans la recherche sur les médicaments à base de nanoparticules jouent différents rôles. Je soutiens que le « missile » joue un rôle scientifique marginal et un rôle plus central dans la communication, tandis que le « ciblage » est devenu une métaphore organisatrice dans le domaine de la recherche. La raison, je le suggère, est que le ciblage est l'explanandum principal du domaine et que cette métaphore continue d'être pertinente même lorsque la nanomédecine mûrit pour s'attaquer à des problèmes plus complexes. Enfin, je discute des suggestions récentes de nouvelles métaphores axées sur la complexité, et je suggère que le rôle significatif des modèles simplificateurs et le besoin réel de tels modèles peuvent expliquer pourquoi les métaphores de complexité n'ont pas retenu l'attention des chercheurs.

Abstract: In this paper I suggest that the “missile” and the “targeting” metaphors in research on nanoparticle-based drug delivery play different roles. I argue that the “missile” plays a marginal scientific role and a more important role in communication, while “targeting” has become an organizing metaphor for the research field. The reason, I suggest, is that targeting is the main explanandum of the field and that this metaphor continues to be relevant even as nanomedicine matures towards tackling more complex problems. Finally, I discuss recent suggestions of new complexity-oriented metaphors, and I suggest that the significant role of simplifying models and the genuine need for such models may explain why complexity metaphors have not caught on among researchers.

Philosophia Scientiæ, 23(1), 2019, 39–55.

1 Introduction

If a nanocarrier is a ship in your body, it is a ship far out at sea, caught in a powerful storm. Objects are floating and flying around everywhere, constantly hitting it, and pushing it off course. What are the chances of arriving at the planned destination? Rather slim, it would seem. A nanocarrier is a nanoparticle developed to transport and deliver a substance, such as a drug, to specific sites in the body [Peer, Karp et al. 2007], [Pérez-Herrero & Fernández-Medarde 2015]. Many a carried substance, however, goes down with its carrier or is stranded somewhere along the way. One goal of targeted nanomedicine is to construct nanoparticles that do reach their target and release the drug they are carrying at that site, typically in cancerous tissues, so that the cancer can be treated without there being severe systemic side-effects [Singh & Lillard 2009]. Nanoparticle-based drug delivery has the potential of increasing the dose and targeted toxicity of drugs reaching the cancer while protecting healthy cells from their toxic effects [Landesman-Milo, Qassem et al. 2016]. We often imagine what is going on inside our bodies at the nanoscale as a shrunken version of the world as we experience it (at least approximately). The nanoworld is, however, much messier and more chaotic than the macroworld. Extracellular space and cells are packed with rapidly moving proteins and other molecules constantly bumping into each other, ready for interaction [Morelli, Allen et al. 2011]. The order produced is underwritten by random and chaotic processes [Elowitz, Levine et al. 2002], [Balázsi, van Oudenaarden et al. 2011]. Despite the messiness of the nanoscale, nanomedicine literature and communications are dominated by precision metaphors, typically in terms of war and weapons [Bensaude-Vincent & Loeve 2014], [Loeve, Bensaude-Vincent et al. 2013]. Nano-carriers have been conceptualized as “therapeutic missiles” armed with poison that will seek out and strike the enemy in the war against disease [McCarron & Faheem 2010], [Yan, Liao et al. 2015]. This use of the missile metaphor has some similarities to P. Ehrlich’s vision in the early 20th century of a “magic bullet” drug that would be able to kill specific pathogens with chemical precision and without harming the human body [Strebhardt & Ullrich 2008]. The image of nanoparticles as precision weapons shooting through the space of our bodies to reach and destroy their cancerous targets has been criticized in recent literature [Bensaude-Vincent & Loeve 2014], [Loeve, Bensaude-Vincent et al. 2013]. The precision, control and efficiency suggested by these metaphors are a far cry from what is going on in nanoenvironments, where nanoparticles continuously interact with innumerable other substances. Bensaude-Vincent & Loeve suggest that the metaphors of missiles and targets do not provide a constructive framework for nanomedicine and that we should search for new metaphors that better portray the significant complexity of living organisms [Bensaude-Vincent & Loeve 2014]. According to these authors, nanomedicine needs to take better account of the interactive environment of the body in which the nanocarriers find themselves, and we need a switch of metaphors from those of precision missiles to those of an oikos, where we are to work with complexity, rather than fighting disease in a one-dimensional war [Bensaude-Vincent & Loeve 2014, 13].
But, one might ask, why should we consider introducing any new metaphors? If metaphors can become problematic for science, why not minimize the use of metaphors or try to eliminate them altogether? Many philosophers, myself included, hold that a minimization strategy would not serve science at all. Since the 1960s, philosophers of science have argued that many scientific metaphors cannot be substituted by literal language without significant loss of scientific understanding [Black 1962], [Hesse 1966, 1988], [Brown 2003], [Levy 2011]. Metaphors can play important epistemological roles by allowing the meaning of scientific concepts to be formed in the light of concepts drawn from different areas. In addition to conveying understanding, metaphors may also inspire scientific ideas and serve as the basis for developing scientific models [Stegmann 2016]. Because of the vital roles metaphors can play in science, we should instead investigate metaphors of nanomedicine further. The following discussion will, however, not hinge on a view of the indispensability of metaphors in nanomedicine. The “missile” and “targeting” metaphors have played interesting roles in this field of research independently of whether they could have been dispensed with or not. In this paper, I will be taking a step back and looking at the role that these metaphors have played in nanomedicine. Have they contributed to scientific development, or have they trapped nanomedicine in an unconstructive language of precision warfare? Indeed, the biological nanoscale provides great complexity, but simplifying concepts and models can still prove very useful [Sober 2015]. I will take stock regarding the role of “missile” and “targeting” metaphors in the development of nanomedicine and ask whether conceptualizing nanoparticle-based drug delivery in terms of these metaphors has been useful. More specifically, how have they contributed to guiding this research? And last but not least, do metaphors of missiles and targets have any role in the future developments of nanomedicine? The paper is organized as follows. I first differentiate the roles of “the missile” and “the target” metaphors, arguing that these two metaphors have played different roles in nanomedicine: the first more as a rhetorical device in science communication and the other as an epistemologically effective metaphor guiding nanomedical research and models. In the light of relevant discussions in the philosophy of science, I then suggest how the “target” and “targeting” metaphors have played vital roles in nanoparticle-based drug delivery development, as well as suggesting a marginal role for “the missile” in the trial-and-error process of this research. Towards the end I will argue that the need for simple models partly explains how metaphors can play useful roles in research even though they present simplistic pictures of the processes under investigation.

2 The missile and the target

In the scientific literature on nanomedicine, nanoparticles used in drug delivery are sometimes likened to missiles. For example, according to Yan, Liao et al. “just like an active radar guides a missile to its target, nanocarriers equipped with homing devices as a specific recognition mechanism for malignant cells have been studied in various cancers [...]” [Yan, Liao et al. 2015, 56]. Similarly, and more prominently, the “missile” figures in science communication. For instance, nanoparticles are pictured as “smart missiles” (internetmedicine.com 2016), and headlines from internet popular science pieces proclaim that “targeted ‘nano-missiles’ in your bloodstream make you tumor-proof” (io9.gizmodo.com 2008) and that a “nano-missile fights cancer with green tea” (healtcareasia.org 2014). I will ultimately argue that the missile metaphor has played only a marginal scientific role in nanomedicine compared to the targeting metaphor, but I do nevertheless recognize that the missile has played a role, at least as a motivation as well as being a powerful tool for communication. I will also discuss a possible heuristic role for “the missile” in the design of both simpler and more complex nanoparticles. A simple Google Scholar search gives an indication of the difference in significance between the “missile” and the “targeting” metaphor. A search for “nanomedicine and missile” brought up 335 hits, while a search for “nanomedicine and targeting” received 97 500 hits. What can we make of these numbers? Although they are crude measures, they seem to reflect a central role of the “target” and “targeting” metaphors and a rather marginal role of the missile metaphor in the nanomedicine literature. Doing the same search with the general Google search engine, however, gave the numbers 143 000 and 540 000, respectively. Thus, in all searchable internet documents, there is still an overrepresentation of “targeting” compared to “missile”, but the difference is not that pronounced. And there is no doubt that talking about missiles in the same breath as nanoparticles is much more common in non-academic documents than in academic ones. The pronounced use of “targeting” in scientific articles I take to indicate that this metaphor is allowed to play a significant conceptual role in nanomedicine. The fact that the “missile” is not mentioned a lot in scientific papers, however, does not exclude the possibility that it plays a role in the conceptual apparatus of scientists. Still, the difference in frequency of use in scientific documents is quite striking and suggests a more prominent role for “the target” than for “the missile”. Why is the missile metaphor so common in popular science and yet marginalized in scientific literature? The missile is a powerful image capturing a simplistic version of the aim and process of nanoparticle-based drug delivery, probably considered simple enough for scientists and journalists to use as a tool for communication in order to explain the bare basics of nanomedicine to the lay person. Bensaude-Vincent & Loeve significantly argue that the “therapeutic missile” does not offer much for researchers to work with when approaching the complexities of the nanoparticles’ journey inside the body [Bensaude-Vincent & Loeve 2014].
Most current work on nanoparticles and drug delivery typically addresses the complexities of interactions between nanoparticles and their environment, and for this work the missile may not reflect relevant questions and hypotheses. As a tool for communication, “the missile” also carries considerable rhetorical power. Similar to many political metaphors, it evokes emotions and may be effective in persuading the public and policymakers that nanomedicine is worth funding as it provides a way to “win the war against cancer” with increasingly smart and precise weapons. In M. Black’s seminal work on models and metaphors [Black 1962], he suggested how describing war in terms of the metaphor of a game of chess may change people’s attitudes: “[...] to describe a battle as if it were a game of chess is accordingly to exclude, by the choice of language, all the more emotionally disturbing aspects of warfare” [Black 1962, 42]. In nanomedicine, the opposite strategy seems to have been adopted. By introducing war metaphors, emotions linked to harsh conflicts may be triggered and could have a motivating effect for allocating resources to this area of research. This is not necessarily problematic, but in the cases where metaphors communicate simple and clear-cut solutions when researchers know there is a long road ahead, this mode of science communication may need to be reconsidered.

3 Metaphors as filters: How to connect nanoparticles and missiles

Several authors have contributed accounts of the role of metaphors in science [Black 1962], [Hesse 1966, 1988], [Harré 1970], [Van Fraassen 1980], [Lakoff & Johnson 1980], [Searle 1993], [Keller 2002], [Brown 2003], [Levy 2011]. The logical positivists were very much against the use of scientific metaphors [Carnap 1959], [Hempel 1965]. “[L]ogical generalizations, laws, and literal deductive relations” [Hoffman 1980, 394] were supposed to secure the objectivity of scientific theories and explanations, leaving no room for the ambiguities of metaphors. The positivist campaign against metaphors in science was opposed by M. Black’s [Black 1962] interaction perspective, challenging the reductive view that metaphors could be substituted by literal language. Black argued that metaphors provide new, modulated meanings by emphasizing some details and downplaying others. According to this interaction view, metaphors can give new insights not conveyed by literal language. A metaphor delivers a new meaning in the given context “which is not quite its meaning in literal use, nor quite the meaning which any literal substitution would have” [Black 1962]. A suitable example in our context is “a nanoparticle is a missile”. Here the meaning of the missile metaphor is, in the light of Black’s view [Black 1962, 39], extended via the connection between two ideas, the nanoparticle and the missile. The metaphor, however, “selects, emphasizes, suppresses, and organizes features of the principal subject by implying statements about it that normally apply to the subsidiary subject” [Black 1962, 44]. Thus, the metaphor works like a filter that emphasizes certain selected features. Black also suggests how this connection between ideas takes place. For words like “missile”, there is a “system of associated commonplaces” [Black 1962, 40], which are what ordinary men and women hold to be true about missiles. This could be, for instance, that missiles are dangerous, they are fast, they are weapons, they can explode, and they are destructive. Imprecise and wrong statements could also be part of the system of associated commonplaces, as long as these mistaken beliefs are held to be true by the man and woman in the street. This system of associations is activated when “missile” is used as a metaphor, but how it is activated depends on the context the metaphor is placed in. When a nanoparticle is referred to as a missile, other associations may be emphasized, compared to when, for example, a hurtful comment is termed a missile. There are, in Black’s view, no general rules for how much weight or emphasis is to be put on different parts of the system of associations triggered by a metaphor. This is also why a literal substitution constitutes a cognitive loss or a “loss of insight”. A literal substitution would typically be too crude or simply say too much, because the delicate and context-dependent weighing of associations will be lost. Although aware that metaphors can be “dangerous”, Black defends the significance of metaphors in science, holding that “[...] a prohibition against their use would be a willful and harmful restriction upon our powers of inquiry” [Black 1962, 47]. As suggested in recent accounts [Bensaude-Vincent & Loeve 2014], [Loeve, Bensaude-Vincent et al. 2013],
the “missile” metaphor can be dangerous in the sense that it may emphasize associations that are found unfruitful both when it comes to how nanomedicine is perceived by the public (as a game of war rather than an activity of care) and concerning how it may affect the scientific process: the emphasis on precision and control may result in a science that is less focused on complexity and interaction than it should be. Going through scientific papers on nanoparticle-based drug delivery published in recent years gives the impression that the latter worry is not necessarily warranted. While it is true that many papers focus on the development of more efficient nanoparticles giving increased control over the delivery process, this “increased control” is sought via addressing the complexity of the delivery process and trying to come up with the means to work with this complexity [Hillaireau & Couvreur 2009], [Skotland, Iversen et al. 2014], [Shi, Kantoff et al. 2016], [Kakkar, Traverso et al. 2017], [Liu, Li et al. 2017]. There is a worry that the complexity may be just too large, and that research on nanoparticles in drug delivery may not be able to handle this [Leroux 2017]. Many are still optimistic, though, that better drug delivery with nanoparticles will be achieved in the future. A frequently suggested solution is more complex designs of nanoparticles that can tackle complex environments. The first generation of particles were typically very simple and did not wear protective molecules to prevent interactions. Soon it was discovered that these particles had a very short lifespan in a living system. A strategy was then developed to add particular chemical compounds, so-called PEGs (polyethylene glycol), to the surface of the particles in order to increase their lifetime. The adding of PEGs to nanoparticles (pegylation) is now the standard to reduce unwanted interaction and cellular uptake of nanocarriers [Schöttler, Becker et al. 2016]. Yet new generations of nanoparticles are modified in various ways to cope with the challenge of interactions. This process depends on the strategy of trial and error. Because the cellular environment is so complex, it is impossible to predict what will happen when introducing a new nanoparticle into a living system. Thus, the challenges and the way to overcome them are discovered in a trial-and-error fashion along the experimental road. Accordingly, although first generation nanoparticles were not prepared to travel through a complex environment, the reason for their shortcomings was probably not that the researchers did not foresee that extra- and intracellular environments could interact with these particles. Researchers had to start somewhere and face problems when they appeared. It seems that a frequently used strategy has been to start with a simple picture, and then employ a trial-and-error process to figure out the next steps towards more complex and better performing nanoparticles. In some early research, the “missile” metaphor played such a simplifying role, focusing attention on some selected associations of the “missile”: the ability to strike a selected target [Yokoyama, Inoue et al. 1989], [Gref, Miralles et al. 1999]. What is the extended meaning created when the idea of the missile connects to the idea of the nanoparticle in the way M. Black suggests?
Since what experts and lay people know about nanoparticles (and missiles) is generally very different, the interaction between the meaning of nanoparticle and missile could be expected to differ between experts and non-experts. The woman in the street would probably not know a lot about nanoparticles but would have some superficial knowledge about missiles as indicated above. A nanoparticle researcher, on the other hand, who has knowledge about the latest scientific findings on nanocarriers, will put weight on different associations triggered by the missile metaphor in a much more sophisticated manner than the lay person. She would, for instance, know that the nanoparticles are not steered through the body towards the target. The passage through the body is rather random, but if and when the particle reaches cancerous tissues, various mechanisms, such as the EPR effect (note 1) or active targeting of receptors, are employed for the nanoparticles to stay there and release the drug at this site. Thus, the researcher may give more weight to associations of the bombing of a specific target compared to associations of a device with a navigation system. A lay person would probably not know the details about how the nanoparticle travels through the body or how it ends up in cancerous tissues and may give equal weight to associations of the bombing of a specific target and steering via a navigation system. If such differential emphasis of associations exists between experts and non-experts, this means that a rather simplistic metaphor, like the missile, may still contribute to understanding in the conceptualization of nanoparticles. It also means that communicating science via metaphors may be a trickier business than using them as tools for understanding and conceptual change within science. Recently, the missile metaphor has been invoked anew in the context of nanoparticle-based drug delivery. Petrenko & Gillespie suggest that there is an on-going “paradigm shift in bacterio-phage mediated delivery of anticancer drugs: from targeted ‘magic bullets’ to self-navigated ‘magic missiles’ ” [Petrenko & Gillespie 2017]. Criticisms of the missile metaphor are partly based on the metaphor’s misrepresentation of precision and navigation; smart missiles have navigation systems, nanoparticles do not. The point in [Petrenko & Gillespie 2017] is, however, that future approaches will develop delivery systems that do have navigation systems. Through adding several functional ligands (note 2) to the surface of bacterio-phages (note 3), the particles will navigate toward the cancer cells “by a variety of ligands binding different components of tumor vasculature and its microenvironment” [Petrenko & Gillespie 2017, 379]. Still, even though future nanoparticles may have more controlled journeys, the smart missile metaphor continues to communicate the image that the nanoparticles search for and seek out their target, while the journey of the nanoparticles will still be passive and partial. Nanoparticles will not seek out their target, but by adding several compounds to the surface of particles, they may be able to react with several selected compounds in their environment that will direct a larger number of them towards the cancer cells. So even though future nanoparticles may be “smarter” than before, their journey will still be much less precise compared to a smart missile. The existing criticism of the missile metaphor will thus still be relevant. As my examples show, even though “the missile” metaphor may have its main role in science communication and not in research, an epistemological role for “the missile” is not excluded. Even though metaphors may only hold to a certain degree before they break down, the way they work in communication and understanding adds to the scientific epistemological apparatus.

1. The EPR effect (Enhanced Permeability and Retention) is when nanoparticles and other molecules of certain sizes end up and stay in tumor tissues more easily than in other tissues due to anatomical differences between diseased and healthy tissues. For example, tumor tissue tends to have high vascular density and large gaps between endothelial cells in blood vessels [Fang, Nakamura et al. 2011].
2. Ligand: a small molecule that may bind to proteins or DNA, often taking part in cell signaling processes.
3. Bacterio-phage: a virus that may infect bacteria and that may be used in constructing nanoparticles for drug delivery.

4 Targeting

Unlike the missile metaphor, “target” and “targeting” metaphors are found in most scientific literature on nanoparticle-based drug delivery. A couple of examples among the thousands of papers that exist are “Active targeting of brain tumors using nanocarriers” [Béduneau, Saulnier et al. 2007] and “Diagnostic nanoparticle targeting of the EGF-receptor in complex biological conditions using single-domain antibodies” [Zarschler, Prapainop et al. 2014]. In these two, and in the majority of papers on nanoparticle-based drug delivery, “targeting” is a key concept. In what follows, I discuss and suggest why the “targeting” metaphor seems to be more helpful and useful to researchers than the “missile”.

First, some thoughts on the military connotations of “target” and “targeting”. When mentioned in the same sentence as missile, “target” is obviously interpreted as what the missile is aimed at: a target for a weapon of war. And “targeting” is the aiming or directing of the missile towards its target. When disconnected from the missile, however, which is the case for most uses of “targeting” in scientific papers on nanomedicine, the “target” metaphor need not trigger strong associations with war. Even though the literal meaning of “target” is “an aim for shooting”, the word “target” is so often used figuratively in different contexts that it may well trigger various associations besides “war” or “weapon”. It is for example used a lot in various sports. And if you sell cars, you may have a sales target (or you may be the target of ridicule). More generally, targets can be indicators that orient you in some way or the other. “Targeting” is also so deeply incorporated into the scientific language on nanoparticles that researchers in this field may not even think of it as a metaphor. For some researchers, “targeting” may literally concern what they try to make nanoparticles do.

“Target” and “targeting” can also be understood as two of many associations with the word “missile”. As suggested in the previous section, the bombing of a specific target may be an association with “missile” that will receive more emphasis from a nanoparticle researcher than, for example, the association of a device with a navigation system. Thus the “target” and “targeting” metaphors may correspond to those associations of “missile” that fit best with current nanoparticle research. In one respect, the target is a more specific metaphor than the missile, as it can be viewed as concerning one aspect of the missile. In another respect, it is a more general metaphor, since the total system of associations connected to “target” is larger than for the “missile”.

“Targeting”, I suggest, is an organizing metaphor for research on nanoparticle-based drug delivery. Importantly, it organizes the research in the sense that it defines the goal of the scientific work. The research on nanocarriers has succeeded when effective targeting is achieved. “Targeting” is more specific than the more general goal of cancer medicine, which is simply to kill cancer cells and cure or improve the health of affected individuals. “Targeting” embodies the goal of a certain branch of cancer medicine that searches for ways to reach cancer cells more specifically through exploiting knowledge about characteristics specific to cancer cells or cancerous tissues. Research on nanoparticle-based drug delivery is an important part of this branch of cancer medicine.
Although it may not be impossible, explaining the goal of this research without using the “targeting” metaphor is very difficult. As argued by Black, literal translations are typically too crude, either in being too general (to kill cancer cells) or too specific (to use receptors on the surface of cancer cells for nanocarriers to specifically latch on to) [Black 1962]. The use of “targeting” allows for a better understanding of the goal of research without the need for describing specific mechanisms that may be many and diverse (and new ones can still be discovered).

For effective targeting to be achieved, one needs to discover mechanisms through which this is possible and design particles that exploit these mechanisms. Thus, to achieve targeting, targeting needs to be understood and explained. “Targeting” is therefore the explanandum and not part of the explanations of how to achieve the goal of research. Since “targeting” is not found in the models and the explanations, but rather is what is to be explained, it plays a somewhat different role compared to many other scientific metaphors. For instance, when computer metaphors are used about the brain, these metaphors often figure in explanations of how the brain works and are then parts of the explanatory framework. The same goes for the “code” metaphor in genetics. “The genetic code” suggests how genes work, how genetic “information” may be “transferred” from genes to RNA and proteins. “Target” or “targeting” in research on nanocarriers, on the other hand, is something one wants to obtain in one way or the other. And this “one way or the other” is the explanatory part of the story.

When we establish “targeting” as the explanandum, an implication is that these metaphors remain open to both simple and complex models for how targeting is achieved. “Targeting” is therefore not as simplistic a metaphor as the “missile” can be, and can therefore work as a constructive framework, leaving room for complexity-oriented approaches to nanocarriers. While research is focused on achieving effective targeting, it can simultaneously focus on nanoparticles’ “ability to interact with complex cellular functions in new ways” [Singh & Lillard 2009, 215]. Although what it means to reach the target is in many cases rather well-defined, the processes through which the target is reached can still be intricate and messy. Complexity-oriented approaches may be necessary to achieve effective targeting. Since “targeting” says more about the end-goal of research than about the process of reaching it, chances are good that “targeting” will continue to play the role of an organizing metaphor in future nanomedicine.

Still, there are discussions within the field of nanomedicine about whether targeting should define the goal of nanocarrier research [Cheng, Tietjen et al. 2015], [Lammers, Kiessling et al. 2016]. Lammers et al. ask “is targeting our target?” [Lammers, Kiessling et al. 2016]. Part of this discussion problematizes the fact that targeting efforts in nanomedicine have only led to very small increases in the percentage of a drug ending up in cancerous tissues, and thus we may say that targeting efforts have not really led to effective targeting as understood within the science. Still, small increases in the percentage of a drug reaching tumors seem to have clinically relevant effects, even though the actual targeting of the drug is minuscule (0.7 percent on average). The question is therefore asked whether such a small percentage deserves the label targeting and whether we need to worry too much about targeting when such small increases in drugs reaching tumors still have substantial effects. These are relevant questions, and although it may be argued that such ineffective processes do not deserve the label “targeting” at all, it seems that achieving these small increases in the tumor specificity of drugs may still have depended on a conceptual framework of targeting. These results have been achieved partly because “targeting” has organized the research around certain goals.
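To give a rough sense of the magnitudes at stake in this debate, the following sketch works through the arithmetic. The 0.7 percent average figure comes from the discussion above; the administered dose and the untargeted baseline are hypothetical values chosen purely for illustration, not data from the cited studies.

```python
# Illustrative arithmetic only: apart from the ~0.7% average delivery figure
# discussed in the text, all numbers below are hypothetical.

def tumor_dose(administered_mg: float, delivered_fraction: float) -> float:
    """Amount of drug (mg) ending up in tumor tissue."""
    return administered_mg * delivered_fraction

dose_mg = 100.0                          # hypothetical administered dose
passive = tumor_dose(dose_mg, 0.005)     # assumed 0.5% baseline without targeting
targeted = tumor_dose(dose_mg, 0.007)    # ~0.7% average reported for nanocarriers

print(f"passive delivery:  {passive:.2f} mg in tumor")
print(f"targeted delivery: {targeted:.2f} mg in tumor")
print(f"relative increase: {targeted / passive:.2f}x")
```

On these assumed numbers, the absolute gain is a fraction of a milligram, while the relative increase is still a factor of 1.4, which is one way to see how a “minuscule” targeting efficiency can nevertheless translate into a clinically relevant difference.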

5 The need for simple models of a messy world

Bensaude-Vincent & Loeve have suggested that we need the metaphor of an oikos in nanomedicine to guide researchers towards approaching complexity and to work with complexity rather than seeing it as an obstacle to be overcome [Bensaude-Vincent & Loeve 2014]. Correspondingly, in the field of genetics, D. Noble has suggested viewing the interactions between genes and the environment as a symphony or as an orchestra rather than viewing the genome as a control unit [Noble 2008]. I am very sympathetic to suggestions of introducing metaphors that reflect complexity, but such metaphors do not seem to catch on as organizing metaphors for research. I think an important reason for this is that researchers already appreciate the presence of complexity and that living organisms are extremely messy, but in addressing such intricate systems, simplifications are needed. When a research field matures, however, more and more complexity can be addressed and accounted for in models. A consequence may be that some metaphors lose their relevance or acquire new meanings within the field. In this light we can see the continued importance of the “targeting” metaphor as evidence for its ability to stay relevant as the field matures, because it does not work against a stronger focus on complexity. It may well be that how researchers approach complexity could have been different and perhaps more constructive within the framework of an oikos metaphor, but the field of nanomedicine does not seem to need it in order to address the complexities of nanoparticle-based drug delivery.

A significant point is also that a system does not have to be simple for simple models to help us grasp important aspects of it. A nice example from the field of genetics is the metaphor of the “genetic code”. Although the code metaphor is too simple to give us a full understanding of how genes contribute to protein production, it has had a large influence on models and explanations through the history of genetics. Stegmann shows how the “genetic code” metaphor, in the initial phases of research on the gene-protein relation, was used to suggest different mechanism schemas, providing researchers with hypotheses about the nature of this relationship [Stegmann 2016].

Even in a context where researchers are obtaining more tools and knowledge to be able to work with complexity, they will not stop looking for patterns, principles and causes. The goal is still to find or create some order in the mess. There is not much to work with if systems are generally unpredictable. Thus, researchers look for pockets of tidiness that can be exploited for intervention, or they may even create their own pockets, as in synthetic biology approaches [Oftedal & Parkkinen 2013]. Accordingly, there need not be an opposition between the quest for precision and control and a more holistic approach to nanomedicine. Although systems are complex, there is still precision and order that can be exploited. For example, active targeting in nanomedicine can be understood as exploiting biological mechanisms for specificity. Specificity is ubiquitous in living systems and is based on recognition, affinity and chemical interaction between particular substances. Via these mechanisms, certain interactions are excluded while others are enabled, and thereby certain outcomes are ensured. In active targeting, knowledge about such mechanisms is used in getting nanoparticles to latch onto certain receptors. Living systems have evolved functionality partly through developing processes that ensure that certain inputs give certain outputs, and when modeling living systems researchers may look for this kind of order to find some handles they can use as a basis for interventions. Metaphors that help researchers in this quest for order in the mess seem to be those that catch on. An oikos is not without order but may not provide the desired simplification.
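As a minimal illustration of how affinity provides such a “handle”, the sketch below uses a simple equilibrium binding model (bound fraction = [L] / ([L] + Kd)). The receptor labels and all numerical values are invented for illustration and are not drawn from the nanomedicine literature cited here.

```python
# Minimal equilibrium binding model: fraction of receptors occupied by a ligand.
# All names and numbers are illustrative assumptions.

def bound_fraction(ligand_conc_nM: float, kd_nM: float) -> float:
    """Equilibrium fraction of receptors bound: [L] / ([L] + Kd)."""
    return ligand_conc_nM / (ligand_conc_nM + kd_nM)

ligand_conc = 10.0  # nM, hypothetical concentration of the targeting ligand

# A high-affinity (low Kd) receptor overexpressed on cancer cells versus a
# low-affinity off-target receptor: the same ligand concentration yields very
# different occupancies, which is the kind of "pocket of tidiness" exploited
# in active targeting.
for receptor, kd in [("target receptor (Kd = 1 nM)", 1.0),
                     ("off-target receptor (Kd = 1000 nM)", 1000.0)]:
    print(f"{receptor}: {bound_fraction(ligand_conc, kd):.1%} occupied")
```

The point of the toy model is only that a difference in affinity, an ordered and measurable quantity, is enough to produce a large difference in outcome even inside an otherwise messy system.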

6 Conclusion

Being a nanocarrier in the messy environment of the body is a struggle. The large majority never complete their assignment, and their functionality is effectively hampered by bombardment from interacting molecules. I have in this paper defended the thesis that, although nanomedicine needs to approach complexity in an appropriate manner, precision metaphors continue to play valuable roles in conceptualizing nanomedicine and organizing the direction of research. Employing an interaction view of metaphors, I have suggested that even though a “missile” might be a simplistic metaphor for nanoparticle-based drug delivery, it may still play some epistemological role and be of value in conveying understanding. Nevertheless, the weighting of the associations of “missile” may differ substantially between researchers and lay persons, making communication of science to the public using this metaphor a difficult task. “Missile” is still a marginal metaphor in nanomedicine compared with “targeting”, which I have argued is an organizing metaphor for nanocarrier research and will continue to be so in the future. “Targeting” is the main explanandum and can accommodate the change in focus from simple first-generation nanoparticles to more intricate designs and approaches addressing the complex interactions with the biological environment. In future work on this topic, it would be interesting to address the difference in the role of metaphors as parts of two different toolboxes: the toolbox of research and the toolbox of communication to the public.

Acknowledgements

I would like to thank Siv H. Oftedal for commenting on an earlier version of this manuscript and an anonymous referee for useful suggestions. I am grateful to participants in the NANOCAN project for giving me insight into research on nanoparticle-based drug delivery and for inspiring this paper. This work is funded by the Norwegian Research Council and the University of Oslo.

Bibliography

Balázsi, Gábor, van Oudenaarden, Alexander, et al. [2011], Cellular decision making and biological noise: From microbes to mammals, Cell, 144(6), 910–925, doi: 10.1016/j.cell.2011.01.030.

Béduneau, Arnaud, Saulnier, Patrick, et al. [2007], Active targeting of brain tumors using nanocarriers, Biomaterials, 28(33), 4947–4967, doi: 10.1016/j.biomaterials.2007.06.011.

Bensaude-Vincent, Bernadette & Loeve, Sacha [2014], Metaphors in nanomedicine: The case of targeted drug delivery, NanoEthics, 8(1), 1–17, doi: 10.1007/s11569-013-0183-5.

Black, Max [1962], Models and Metaphors: Studies in Language and Philosophy, Ithaca: Cornell University Press, 1972.

Brown, Theodore L. [2003], Making Truth: Metaphor in Science, Urbana: University of Illinois Press.

Carnap, Rudolf [1959], Psychology in physical language, in: Logical Positivism, edited by A. J. Ayer, New York: Free Press, 165–198.

Cheng, Christopher J., Tietjen, Gregory T., et al. [2015], A holistic approach to targeting disease with polymeric nanoparticles, Nature Reviews Drug Discovery, 14, 239–247, doi: 10.1038/nrd4503.

Elowitz, Michael B., Levine, Arnold J., et al. [2002], Stochastic gene expression in a single cell, Science, 297(5584), 1183–1186, doi: 10.1126/science.1070919.

Fang, Jun, Nakamura, Hideaki, et al. [2011], The EPR effect: Unique features of tumor blood vessels for drug delivery, factors involved, and limitations and augmentation of the effect, Advanced Drug Delivery Reviews, 63(3), 136–151, doi: 10.1016/j.addr.2010.04.009.

Gref, Ruxandra, Miralles, Grégory, et al. [1999], Polyoxyethylene-coated nanospheres: effect of coating on zeta potential and phagocytosis, Polymer International, 48(4), 251–256, doi: 10.1002/(SICI)1097-0126(199904)48:4<251::AID-PI104>3.0.CO;2-4.

Harré, Rom [1970], The Principles of Scientific Thinking, London: Macmillan.

Hempel, Carl G. [1965], Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, New York: The Free Press.

Hesse, Mary B. [1966], Models and Analogies in Science, Notre Dame: Notre Dame University Press.

—— [1988], The cognitive claims of metaphor, The Journal of Speculative Philosophy, 2(1), 1–16, doi: 10.2307/25668224.

Hillaireau, Hervé & Couvreur, Patrick [2009], Nanocarriers’ entry into the cell: Relevance to drug delivery, Cellular and Molecular Life Sciences, 66(17), 2873–2896, doi: 10.1007/s00018-009-0053-z.

Hoffman, Robert R. [1980], Metaphor in science, in: Cognition and Figurative Language, edited by R. P. Honeck & R. R. Hoffman, Hillsdale: Lawrence Erlbaum Associates, 393–423.

Kakkar, Ashok, Traverso, Giovanni, et al. [2017], Evolution of macromolecular complexity in drug delivery systems, Nature Reviews Chemistry, 1, 63, doi: 10.1038/s41570-017-0063.

Keller, Evelyn Fox [2002], Making Sense of Life: Explaining Biological Development with Models, Metaphors and Machines, Cambridge, Mass.: Harvard University Press.

Lakoff, George & Johnson, Mark [1980], Metaphors We Live By, Chicago; London: University of Chicago Press.

Lammers, Twan, Kiessling, Fabian, et al. [2016], Cancer nanomedicine: Is targeting our target?, Nature Reviews Materials, 1(9), 16069, doi: 10.1038/natrevmats.2016.69.

Landesman-Milo, Dalit, Qassem, Shahd, et al. [2016], Targeting cancer using nanocarriers, in: Nanomedicine, edited by K. A. Howard, T. Vorup-Jensen, & D. Peer, New York: Springer, 131–155, doi: 10.1007/978-1-4939-3634-2_7.

Leroux, Jean-Christophe [2017], Editorial: Drug delivery: Too much complexity, not enough reproducibility?, Angewandte Chemie International Edition, 56(48), 15170–15171, doi: 10.1002/anie.201709002.

Levy, Arnon [2011], Information in biology: A fictionalist account, Noûs, 45(4), 640–657, doi: 10.1111/j.1468-0068.2010.00792.x.

Liu, Junjie, Li, Menghuan, et al. [2017], Design of nanocarriers based on complex biological barriers in vivo for tumor therapy, Nano Today, 15, 56–90, doi: 10.1016/j.nantod.2017.06.010.

Loeve, Sacha, Bensaude-Vincent, Bernadette, et al. [2013], Nanomedicine metaphors: From war to care. Emergence of an oecological approach, Nano Today, 8(6), 560–565, doi: 10.1016/j.nantod.2013.08.003.

McCarron, Paul A. & Faheem, Ahmed Moustafa [2010], Nanomedicine-based cancer targeting: A new weapon in an old war, Nanomedicine, 5(1), 3–5, doi: 10.2217/nnm.09.89.

Morelli, Marco J., Allen, Rosalind J., et al. [2011], Effects of macromolecular crowding on genetic networks, Biophysical Journal, 101(12), 2882–2891, doi: 10.1016/j.bpj.2011.10.053.

Noble, Denis [2008], The Music of Life: Biology Beyond Genes, Oxford: Oxford University Press.

Oftedal, Gry & Parkkinen, Veli-Pekka [2013], Synthetic biology and genetic causation, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 44(2), 208–216, doi: 10.1016/j.shpsc.2013.03.016.

Peer, Dan, Karp, Jeffrey M., et al. [2007], Nanocarriers as an emerging platform for cancer therapy, Nature Nanotechnology, 2, 751–760, doi: 10.1038/nnano.2007.387.

Pérez-Herrero, Edgar & Fernández-Medarde, Alberto [2015], Advanced targeted therapies in cancer: Drug nanocarriers, the future of chemotherapy, European Journal of Pharmaceutics and Biopharmaceutics, 93, 52–79, doi: 10.1016/j.ejpb.2015.03.018.

Petrenko, Valery A. & Gillespie, James W. [2017], Paradigm shift in bacteriophage-mediated delivery of anticancer drugs: From targeted “magic bullets” to self-navigated “magic missiles”, Expert Opinion on Drug Delivery, 14(3), 373–384, doi: 10.1080/17425247.2016.1218463.

Schöttler, Susanne, Becker, Greta, et al. [2016], Protein adsorption is required for stealth effect of poly(ethylene glycol)- and poly(phosphoester)-coated nanocarriers, Nature Nanotechnology, 11, 372–377, doi: 10.1038/nnano.2015.330.

Searle, John R. [1993], Metaphor, in: Metaphor and Thought, edited by A. Ortony, Cambridge: Cambridge University Press, 2nd edn., 83–111, doi: 10.1017/CBO9781139173865.008, 1979.

Shi, Jinjun, Kantoff, Philip W., et al. [2016], Cancer nanomedicine: Progress, challenges and opportunities, Nature Reviews Cancer, 17, 20–37, doi: 10.1038/nrc.2016.108.

Singh, Rajesh & Lillard, James W. [2009], Nanoparticle-based targeted drug delivery, Experimental and Molecular Pathology, 86(3), 215–223, doi: 10.1016/j.yexmp.2008.12.004, Special Issue: Structural Biology.

Skotland, Tore, Iversen, Tore-Geir, et al. [2014], Development of nanoparticles for clinical use, Nanomedicine, 9(9), 1295–1299, doi: 10.2217/nnm.14.81.

Sober, Elliott [2015], Ockham’s Razors: A User’s Manual, Cambridge: Cambridge University Press.

Stegmann, Ulrich E. [2016], “Genetic Coding” reconsidered: An analysis of actual usage, The British Journal for the Philosophy of Science, 67(3), 707–730, doi: 10.1093/bjps/axv007.

Strebhardt, Klaus & Ullrich, Axel [2008], Paul Ehrlich’s magic bullet concept: 100 years of progress, Nature Reviews Cancer, 8, 473–480, doi: 10.1038/nrc2394.

Van Fraassen, Bas [1980], The Scientific Image, Oxford: Oxford University Press.

Yan, Jing-Jun, Liao, Jia-Zhi, et al. [2015], Active radar guides missile to its target: Receptor-based targeted treatment of hepatocellular carcinoma by nanoparticulate systems, Tumor Biology, 36(1), 55–67, doi: 10.1007/s13277-014-2855-3.

Yokoyama, Masayuki, Inoue, Shohei, et al. [1989], Molecular design for missile drug: Synthesis of adriamycin conjugated with immunoglobulin G using poly(ethylene glycol)-block-poly(aspartic acid) as intermediate carrier, Die Makromolekulare Chemie, 190(9), 2041–2054, doi: 10.1002/macp.1989.021900904.

Zarschler, Kristof, Prapainop, Kanlaya, et al. [2014], Diagnostic nanoparticle targeting of the EGF-receptor in complex biological conditions using single-domain antibodies, Nanoscale, 6, 6046–6056, doi: 10.1039/C4NR00595C.

Nanotechnology and Synthetic Biology: The Ambiguity of the Nano-Bio Convergence

Louis Ujéda Paris (France)

Résumé : Cet article étudie l’étendue de la convergence réelle entre les nanotechnologies et la biologie de synthèse, symbole des technosciences biologiques. Pour traiter la question de la dichotomie entre le niveau des objets, auquel on observe un processus de pluralisation plutôt qu’une convergence, et le niveau des discours, où le scénario de la convergence semble rester l’explication dominante, nous développons une analyse des disciplines comme dispositifs au sens de Foucault. Cela permet de décrire précisément les différentes strates composant les dispositifs et leurs dynamiques. La convergence des Nanotechnologies, des Biotechnologies, des technologies de l’Information et des sciences Cognitives (convergence NBIC) apparaît alors comme l’interprétation réductrice d’un contexte scientifique et technologique complexe.

Abstract: This article studies the extent of the concrete convergence between nanotechnology and synthetic biology, emblematic of the biological technosciences. To address the issue of the dichotomy between the level of the objects, where we observe a pluralization process more than a convergence, and the level of the discourses, where the convergence scenario seems to remain the dominant one, we develop an analysis of the disciplines as apparatuses (“dispositifs”) in Foucault’s sense. This enables a precise description of the different strata composing the apparatuses, and of their dynamics. The convergence of Nanotechnology, Biotechnology, Information technology and Cognitive science (NBIC convergence) then appears as a simplified interpretation of a complex scientific and technological context.

Philosophia Scientiæ, 23(1), 2019, 57–72.

1 Introduction

Fifteen years after Roco & Bainbridge’s foundational report on the convergence of Nanotechnology, Biotechnology, Information technology and Cognitive science (NBIC convergence) [Roco & Bainbridge 2003], we may start assessing to what extent these scientific and technological fields are effectively converging. In 2003, when the report was released, most of the fields were considered emergent, and the convergence was thus a projection based on potential developments and on clues about the direction of these developments. Fifteen years later, even if the developments of the different disciplines have been slower than predicted, we have sufficient elements to confront the scenario of the convergence of the emerging technologies with the reality of what is happening. We will focus, in this article, on the convergence of nanotechnology and biotechnology.

When it was proposed in 2003, the NBIC convergence was not formulated on the basis of any observable reality. It was a projection of current tendencies and potentials into the future. Furthermore, it was an ideological program, putting forward the utopian vision of “a fantastic new wave of innovations” [Est, Stemerding et al. 2014, 13] leading automatically to the “improvement of human performance” [Roco & Bainbridge 2003, ix]. Guchet underlines the two presuppositions of this program [Guchet 2014, 166]: first, the sciences and technologies taking part in this convergence are only considered through the applications they will be able to deliver, and second, it is possible to mobilise an entire society towards one ultimate goal—in this case developing applications that improve human performance. Thus, the NBIC convergence is not another term for interdisciplinarity or for the usual process of the integration of technology; it is a transhumanist utopia based on capitalism: in the framework given by Roco & Bainbridge, the ultimate meaning and direction of the convergence is to be found in ideology.

The epistemological translation of the transhumanist program is the idea that matter manipulation at the nanoscale (i.e., manipulation of atoms, molecules or groups of them) will give birth to a transdisciplinary technoscientific culture. In this article, we will investigate the reality of such an epistemological culture, and ask if there is a convergence of the sciences and technologies beyond this kind of ideological reconstruction. To answer this question, we will focus on the methods and objects of the different disciplines, as well as on their epistemic cultures.

2 Apparatus analysis

Xavier Guchet explains that it is possible to take a step aside from the ideological program of the NBIC convergence to understand what is really at work within the field of nanotechnology. According to this analysis, nanotechnology is not so much a convergence as a “proliferation, a pluralisation of the strategies of intervention on matter1” [Guchet 2014, 167]. This observation is largely valid for the other dimensions of the NBIC convergence as well, and especially for biotechnology. Nevertheless, merely presenting the pluralisation process at work leaves us with no possibility for discussing the idea of the NBIC convergence. To avoid this conceptual trap, Guchet proposes talking about apparatus (dispositif, in the sense given by Deleuze to Foucault’s concept) [Guchet 2014, 169]. As an apparatus is composed of heterogeneous elements (from discourses or social entities to concrete technical objects), it integrates both the concrete scientific and technical reality and the values mobilized by the actors. For Guchet, the major advantage of this concept is that it unites the process of organisation (“territorialization”, “stratification”) and the resistance to this process, or the tendency towards another organisation (“deterritorialization”, “line of flight”)2; it unites the actuality, even the materiality, of a discipline and the potentiality still contained in the field, as well as in the concrete objects, to develop in other directions (other scientific approaches, other modes of action on the world, etc.).

The question we should ask then is: to what extent is there a convergence between the NBIC apparatuses? The NBIC convergence program even postulated the realisation of an NBIC apparatus in the near future. Its foundation, its first stratum, should now be observable, but leaving aside the transhumanist reactivation of the NBIC convergence discourse, very few scientific and technical results explicitly place themselves under the NBIC label. There may be, nonetheless, an underlying converging process in day-to-day scientific cultures and practices. We will investigate this hypothesis by focusing on the convergence between the nanotechnology apparatus and the biotechnology apparatus, and more precisely on the apparatus of synthetic biology.

Synthetic biology is indeed the perfect candidate for analysing the state of the convergence of biotechnology with nanotechnology. Synthetic biology was defined during the “Synthetic biology 1.0 conference” (SB 1.0) held in June 2004 at the Massachusetts Institute of Technology (MIT). While the conference did not refer to the NBIC convergence, it took place soon after the release of Roco & Bainbridge’s report. Moreover, the conference was held in the very “application-oriented” context of MIT, and claimed that synthetic biology is both “scientific and engineering research” [Endy, Knight et al. 2004]. By its focus on applications, its research in computationally assisted rational design, and its claim that it will technically master life, synthetic biology—in its MIT definition—displays a number of characteristics that qualify it as part of the NBIC convergence.

1. All translations are mine, unless otherwise indicated.
2. Territorialization and deterritorialization are two key concepts developed by Deleuze & Guattari [Deleuze & Guattari 1980]. They express antagonistic dynamics: the first is a dynamic of organisation, or “stratification”, the second of disorganisation, of creation of potential. But they are also circular dynamics, because any deterritorialization tends to stratify (either by reterritorializing or by creating a new stratum), while any stratum produces deterritorialization processes at its margins.

Moreover, as is the case for the other emergent scientific and technological fields of the early 2000s, synthetic biology (SB) can best be understood as a program, or an ambition, rather than as a well-defined field. The presentation of SB 1.0 admits that the scientific and technical objectives (“to design and build biological parts, devices and integrated biological systems” [Endy, Knight et al. 2004]) need further technological support to be possible. Synthetic biology, as a discipline, structures itself around hypotheses and objectives, and not around a paradigmatic theory or shared methods. Bensaude-Vincent explains that the epistemic diversity in the discipline is possible because of an epistemic opportunism, grounded in a “hard-rock optimism”, shared by the researchers promoting synthetic biology [Bensaude-Vincent 2013a]. The scientists engaged in synthetic biology never seem to be discouraged in their claims by negative results: they adapt and pursue their program. In consequence, synthetic biology has developed in many directions, encompassing research previously included in different disciplines (chemistry, genetic engineering, molecular biology, bioinformatics, etc.). Nevertheless, the organisation of synthetic biology is similar to that of traditional disciplines [Bensaude-Vincent 2013b, 122]: it is organised around specific journals and international conferences, it has its own academic courses and degrees, etc. The annual conferences (SB x.0) and the international Genetically Engineered Machine (iGEM) competition for students have done much for the international visibility and the rapid organisation of the discipline, as well as for the diffusion of epistemic optimism as a shared culture.

As with nanotechnology, which subsumed very diverse scientific and technological domains of research, synthetic biology is not reducible to any simple definition. It was indeed rapidly used as a label for many research programs, such as gene editing, gene regulation or gene deletion (all of which are now rather simply obtained with CRISPR-Cas9)3 [Tremblay 2015], complete genome editing [Dymond & Boeke 2012], complete genome synthesis [Venter 2013], metabolic pathway engineering [Cameron, Bashor et al. 2014], [Del Vecchio 2015], as well as attempts at cell synthesis [Wu & Tan 2014] or research programs in xeno-genetics [Malyshev, Dhami et al. 2014], [Taylor, Pinheiro et al. 2014]. These research programs have few aspects in common, save the recourse to strategies of synthesis. The concept of apparatus is therefore essential for expressing this special form of very tenuous unity (which is not thereby pure diversity either) and for encompassing a range of research objects as well as epistemic positions.

3. CRISPR-Cas9 is a molecular complex formed by a specific structure of ribonucleic acid (CRISPR meaning Clustered Regularly-Interspaced Short Palindromic Repeats) and by the Cas9 enzyme.

3 The convergence towards the molecular scale

Taking a step sideways from the programmatic stratum of the NBIC convergence, we would like to evaluate the concrete convergence of the apparatuses of nanotechnology and of synthetic biology, and, for this, our starting point will be at the level of objects. We now have fifteen years of experiments in the different fields that we can use to measure a potential overlapping of the research strategies and a potentially increasing intertwining of the objects. One of the claims of the NBIC convergence program is indeed the transversal manipulation of matter at the atomic level. In nanotechnology, this may mean quite literally moving atoms one at a time using a Scanning Tunneling Microscope (STM). Nevertheless, as Guchet demonstrates, scale is problematic even within the domain of nanotechnology and nanoscience [Guchet 2014, 38], and most of the experiments in nanotechnology as in synthetic biology involve a large number of atoms, in particular when dealing with proteins. Can we nevertheless identify converging capacities to act on matter?

When we look at the two ends of the spectrum, action on matter in nanotechnology seems radically different from what it is in synthetic biology. Nanotechnology, as we pointed out, can move individual atoms with an STM, while synthetic biology can use the metabolic processes of bacteria or yeasts to assemble large fragments of synthetic DNA (tens of thousands of genetic base-pairs) via the very efficient homologous recombination processes of the cell [Venter 2013, 93], [Annaluru, Muller et al. 2014, 55]. The middle of the spectrum of scale might well be more useful for understanding why there may be a convergence, but the middle is located in the domain of neither physics nor biology; instead, it concerns chemistry. The “nanomachines” of nanotechnology, as well as the molecular blocks (monomers) that constitute the polymers (proteins, ribonucleic acids (RNA), deoxyribonucleic acids (DNA), etc.) with which synthetic biology works, are molecules of the order of magnitude of the nanometre. Many experiments in nanoscience and nanotechnology take advantage of chemical dynamics to assemble artificial molecular structures that can then use a specific chemical reaction (or reaction cycle) to produce movement. The work of Jean-Pierre Sauvage, who shared the Nobel prize in chemistry in 2016, illustrates this: [2]catenane, for example, is composed of two intertwined rings that are assembled using the spatial organization property of a copper atom. Once assembled, one of the rings can move relative to the other using reduction-oxidation reactions [Sauvage 2017, 38]. Similarly, DNA and RNA are synthesized in vitro using a repeated four-step chemical protocol [Ma, Tang et al. 2012]. The technique on which synthetic biology is based is thus a form of nanotechnology, even if it is very rarely described this way, in the sense that it takes advantage of the assembling properties of chemical dynamics.
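The reference to a repeated chemical cycle invites a simple back-of-the-envelope illustration of why chemically synthesised fragments remain short while large constructs are assembled by cellular recombination. The sketch below assumes a fixed per-cycle coupling efficiency (the 99% figure is a typical order of magnitude, assumed here for illustration and not taken from the cited work); the yield of full-length strands then decays exponentially with length.

```python
# Rough sketch of why chemically synthesised DNA fragments stay short.
# Assumes a fixed per-cycle coupling efficiency (~99% is a typical figure,
# assumed here); the full-length yield after n cycles is efficiency ** n.

def full_length_yield(n_bases: int, coupling_efficiency: float = 0.99) -> float:
    """Fraction of strands that acquired every base over n synthesis cycles."""
    return coupling_efficiency ** n_bases

for n in (20, 100, 1000, 10_000):
    print(f"{n:>6} bases: full-length fraction ~ {full_length_yield(n):.3g}")
```

On this admittedly simplified picture, direct chemical synthesis is practical only for short oligonucleotides, which fits the division of labour described above: chemistry supplies the molecular building blocks, while the homologous recombination machinery of yeasts or bacteria assembles them into large fragments.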

Chemistry can thus be seen as the real scale of the convergence, although this thesis would require a redefinition of the epistemological core of the NBIC convergence. While the NBIC convergence postulates a reductionist and atomistic approach centered on the idea of incremental engineering at the scale of the atom (as symbolised by the application of the STM), Jean-Pierre Sauvage’s chemically synthesised “nanomachines”, as well as much of the synthesis and modification techniques used in synthetic biology, take advantage of the dynamics of quite complex chemical systems. This convergence is thus less a mastery over matter in a mechanical sense, and more a “patient negotiation” with the forces active at the nanoscale [Guchet 2014, 167].

Other techniques link, at the molecular level, atomic organisation of matter to biological applications, substantiating the convergence in its more usual sense. Microarrays are a good example of nanostructured materials used in a biological context. The microarray developed by the NanoBioSystèmes (NBS) group of the Laboratoire d’Analyse et d’Architecture des Systèmes (LAAS, CNRS, France) combines nanostructuring, molecular chemistry and optical properties of the system for biomedical diagnosis [Cau, Lalo et al. 2009]. The microarray itself is a glass slide on which nanogrooves of molecules, which serve as probes, are “printed”. The idea is that the molecules to be detected in the biosample will attach themselves to the molecular probes. Once they are linked, the new molecular structure on the microarray modifies the way light is diffracted, thus enabling the diagnosis by simple optical observation. More generally, nanostructured surfaces are powerful and versatile tools for biology: they can serve not only for detection, but also for catalysis, or as a biomimetic surface for starting the synthesis of biomolecules or biomimetic molecular complexes.

Biomimetics is an important symbol of the nano-bio convergence, and provides ready arguments in favour of the NBIC convergence hypothesis. However, studying an example more closely leads us to qualify the storytelling behind the NBIC convergence. The NBS group of the LAAS developed a research program aimed at the synthesis of a molecular motor identical to the one in E. coli’s flagellum. It was a purely biomimetic approach but with important scientific and technological significance in nanotechnology, because it would have been a methodological proof of concept of the synthesis of a very complex biological structure. It would have been a significant step forward in the research on nano-engines, as the motor in the flagellum is indeed a powerful source of mechanical work (at the cellular scale) that uses an electrochemical gradient to function. The experimental protocol uses surfaces with a nano-printed molecular layer mimicking the cell membrane [Chalmeau, Dagkessamanskaia et al. 2009]. The proteins, obtained through a biochemical reaction inside vesicle-bioreactors, auto-assemble on the surface as if on a cell membrane. Synthetic biology plays an important role in the experiment, while remaining quite separate: another laboratory, specialised in synthetic biology, delivered the materials (the proteins) that were assembled. This collaboration between nanotechnology and synthetic biology enabled the analysis of some aspects of the flagellar motor, but fell short of delivering the complete synthesis that was promised.
A quick search of the literature seems to confirm that the project of a complete synthesis of such a complex protein structure remains excessively ambitious: scientists are still figuring out the structure and the mechanisms of such ensembles. With this example, we see that there is a difference between what is announced at the programmatic level and what can actually be done in the laboratory: the “nanomotor” was never synthesised, the experiments just used the technical tools available to progress in the comprehension of a complex protein-based structure. While the research program put forward the application (the synthesis of a powerful nanomotor), the research results were in reality fundamental, and more significant as proofs of concept, on one hand for the use of protocell bioreactors, and on the other for the use of biomimetic nanosurfaces. It was thus a rather classic scientific program, closer to fundamental biochemistry than to nanotechnology.

Moreover, it is difficult to call this collaboration between nanotechnology and synthetic biology a convergence in any strong sense, because there was neither a convergence of the research topics, nor a sharing of competences. In contradiction with what the NBIC convergence suggests, beneath the surface of the general branding terms (“nano” or “synthetic biology”) there is still a strong resistance from the traditional epistemic cultures. Despite diplomas in nanotechnology or in synthetic biology, the researchers still define themselves as molecular biologists, chemists, physicists, bioinformaticians, engineers, etc., who do nanotechnology or synthetic biology [Kastenhofer 2013, 133], [Klein 2010, 53]. The complexity of objects in these disciplines requires a level of specialisation from the scientists, engineers and technicians that is incompatible with a transdisciplinary professional identity. It is nonetheless possible to argue that the reason interdisciplinary collaboration has become so frequent in contemporary science is that the “emergent” disciplines are converging, at least at the programmatic level. This (inter-)disciplinary paradox points to the complexity of the dynamics within and between the apparatuses in the nanosciences. We observe what we may call a “converging territorialization” at the programmatic level, producing examples of effective collaborations between laboratories from different disciplines, confirming the idea of the transdisciplinary value of epistemic optimism. But, at the same time, we observe a multiplication of objects, involving a plurality of scientific and technical skills, as well as a plurality of methods, strengthening the divergent territorialization of the more traditional disciplines.

4 Converging engineering ambitions

Following Bensaude-Vincent, who explains that the unity of synthetic biology has more to do with a general epistemic attitude than with any well-established paradigm, homogeneous objects or methods, we have to evaluate the epistemic dimension(s) of the nanotechnology and synthetic biology apparatuses. The nano-bio convergence at the material level of the objects may indeed be ambiguous, but it might still be strongly driven by a shared perception of the world and of our modalities of action on it, forming a common epistemic background for the plurality of the research programs.

According to Guchet, the core epistemic attitude of the nanotechnology apparatus is defined by the ambition to invent and develop engineering modalities and processes at the nanoscale [Guchet 2014, 95], where classic engineering approaches are inoperative. It is thus a technology-oriented epistemic vision, grounded in the belief that engineering is possible at the nanoscale despite the quantum properties of matter, and especially its probabilistic behaviour (in opposition to the deterministic behaviour of matter at the macroscale). The hypothesis of bottom-up engineering—atom by atom—has proven difficult to implement concretely, but nanotechnology has invented more complex processes mixing bottom-up techniques (nano-structured surfaces for example) with top-down approaches (mimicking biochemical reactions, or using the auto-assembling properties of molecules). The idea of an opportunist epistemic attitude, requiring reasoning and imagination [Bensaude-Vincent 2013a], can thus be applied to nanotechnology as well as to synthetic biology.

Conversely, the engineering culture is not only characteristic of nanotechnology, it is also a prominent feature of some branches of synthetic biology. The engineering culture was decisive in structuring synthetic biology: its original distinction from the rest of biology was supported by scientists with engineering backgrounds (Drew Endy, Tom Knight, Robert Carlson, Roger Brent), who would even have preferred the more explicit name “intentional biology” [Campos 2010, 17]. A number of experiments seek to prove the pertinence of the analogy between cellular processes and micro-electronics, and between genetic regulation and computing. For example, in 2013, the implementation of several genetic logic gates was reported: scientists managed to condition the expression of the gene coding for Green Fluorescent Protein (GFP) on the presence of two chemical inputs according to their logical relations (AND, OR, NOR) [Cameron, Bashor et al. 2014, 6]. In the same line of work, the title of one article is explicit concerning the computing and electronic analogy: “Robust multicellular computing using genetically encoded NOR gates and chemical ‘wires’ ” [Tamsir, Tabor et al. 2010]. These examples give weight to the heuristic image of genetically programming a biological chassis.

Engineering also plays a more fundamental epistemological role in determining directions of research, such as the pursuit of modularity and orthogonality in genetic design. Modularity—usually illustrated by the image of Lego bricks—is typically a requirement of traditional engineering: it is indeed essential that the functional modules (in the case of synthetic biology, the genetic building blocks, such as genes and regulators) can be connected in the desired order, and universally recognised.
The entire BioBrick program4 developed around MIT and the iGEM competition is based on the modularity principle (BioBrick Foundation). The orthogonality principle stipulates that independent modules should not have any effect on each other: put simply, one program should not change the execution of another. Once again, the computer analogy is very present. We have to note here that these two research orientations continue to encounter numerous obstacles [Vilanova, Tanner et al. 2015], [Del Vecchio 2015, 117], practical as well as theoretical, especially due to the chemical specificity of the different organisms, and due to Brownian chemical interactions in the cell. The chemical materiality of the genetic “software” interacts permanently with the non-universal cellular “hardware”. Nevertheless, because of the epistemic optimism of synthetic biologists, the computer analogy remains a powerful intuition in the discipline.

The engineering epistemology is indubitably a force of convergence between the different apparatuses. It also clearly reflects the claim of the NBIC convergence that it will be possible to act, at both the atomic and the molecular levels, on life as well as on matter. This dynamic of convergence thus conveys a strong atomistic metaphysics in the field of biology. It is especially visible when considering the engineering strategies: modularity and orthogonality both require and produce atomistic interaction in the cell. These research strategies aim at breaking the concrete synergetic system of the cell down into independent, rationally designed building blocks, in order to make the bottom-up engineering of life not only possible, but similar to the traditional engineering approach using mechanical parts. More generally, cells, in the field of industrial synthetic biology (including most of genetic engineering), are considered abstractly as independent production units: a given cell must realise the entire metabolic process, and when the chemical transformation process is divided into several steps, these steps are usually distributed between different types of organisms, often separated into different milieux. The atomistic epistemology presiding over most engineering-oriented branches of synthetic biology thus encompasses the regulation processes in the cell, as well as the cells themselves. This atomistic epistemology is reinforced by the classical use of “domesticated” yeasts and bacteria, which live well in homogeneous colonies, and consequently display fewer synergistic behaviours. By contrast, studies of microbial life in natural environments give us a completely different picture of “normal” microbial existence and metabolic activity: in these contexts, cooperation, specialization and continued association are very frequent [Dupré 2014, 221].

Engineering thus constitutes a force pushing towards strong converging territorialization, catalysing the epistemological program of the NBIC convergence to master nature from atoms to cells. Conversely, we have seen previously that the concrete scientific and technological results produce a pluralisation of the objects and of the modes of interaction with the forces of nature. All these processes induce contradictory dynamics in the epistemic dimensions of the apparatuses, because the pluralisation process has consequences for the theoretical models and for the implicit metaphysics behind the epistemic positions. While some proofs of concept in nanotechnology as in synthetic biology confirm the epistemic validity of atomist engineering of matter and of life, experimental failures and alternative strategies simultaneously support competing epistemic attitudes (emerging structures from complex interactions, ecological synergy, etc.) grounded in divergent metaphysical perspectives (emergentism, holism).

4. https://biobricks.org/.
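To make the computing analogy invoked above more tangible, here is a minimal toy model of a genetic AND gate, in which a reporter (labelled GFP) is expressed only when two chemical inducers are both present. The Hill-function form and every parameter value are illustrative assumptions; the sketch does not reproduce the constructs reported in [Cameron, Bashor et al. 2014] or [Tamsir, Tabor et al. 2010].

```python
# Toy model of a genetic AND gate: reporter expression requires both chemical
# inducers. The Hill-function form and all parameters are illustrative
# assumptions, not values taken from the cited experiments.

def hill(conc: float, k: float = 1.0, n: float = 2.0) -> float:
    """Fractional activation of a promoter by an inducer (Hill function)."""
    return conc**n / (k**n + conc**n)

def gfp_and_gate(inducer_a: float, inducer_b: float) -> float:
    """Reporter output: high only when both inducers activate their promoters."""
    return hill(inducer_a) * hill(inducer_b)

for a, b in [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]:
    print(f"inducer A={a:>4}, inducer B={b:>4} -> GFP ~ {gfp_and_gate(a, b):.2f}")
```

In this toy picture, modularity corresponds to the requirement that the same activation function can be reused in any gate, and orthogonality to the requirement that each inducer acts only on its own promoter; as noted above, the chemical reality of the cell satisfies neither requirement for free.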

5 Epistemic metastability: a plurality of potential re-compositions

The apparatuses seem to be best understood as being in an ambivalent relationship to the NBIC convergence. Some strata are territorializing in the sense of a convergence, while other competing territorializations, as well as lines of flight, tend to produce a dynamics of divergence. Our analysis has enabled us to understand more precisely to what extent the nanotechnology apparatus and the synthetic biology apparatus converge, but we are no closer to understanding the general validity of the NBIC convergence theory. For that, it is necessary to take another step aside and consider that the apparatuses are intrinsically under-determined. To put it differently, as they are constantly evolving, the description we make of these apparatuses is always partially inadequate, asynchronous. The NBIC convergence is one way to describe the apparatuses, but there are also other ways to make sense out of them. As they are intrinsically plural and proliferating, they hardly qualify as “normal science”, but they are neither abnormal nor in crisis. Using the concept of apparatus to describe nanotechnology or synthetic biology draws our attention to their own normality: being complex, dynamic, evolving, unpredictable, laden with potential. Simondon’s concept of metastability characterises such a state, where a system is, so to speak, more than one: metastability defines a system so saturated in potential and in internal tensions that a perturbation of the system would produce a transformation towards a new equilibrium [Simondon 2005, 26]. While metastability might be the normal state of the “emergent” sciences and technologies, it could also be a transition phase in science. From this perspective, the NBIC convergence appears as a strong performative reduction, a means to impose direction and meaning on the relative indeterminacy of contemporary science. Thus, it is important to reinstate the apparatuses in their plurality of open potentials.

The technical dimensions of both the nanotechnology and the synthetic biology apparatuses are clearly convergent, but the meaning of this convergence is still to be determined. As we have said previously, specific experiments in one field or the other may use elements produced by the other field: synthetic biology may be used to produce specific biomolecules for nanotechnological applications, and nanotechnology may produce nanomaterials or nanosurfaces for synthetic biology to use as a catalyst, observation device, etc. The technical continuity is thus unquestionable, and constitutes a territorialization around the molecular scale common to both apparatuses. But, for now, the technical culture strata remain heterogeneous because of the specialization required to master these complex techniques. Even within the domain of synthetic biology, each modification technique, each model organism, almost each genetic design strategy requires a specific competence, and a dose of tinkering, of “kludging” [O’Malley 2009, 382], that participates in producing technical and scientific cultures specific to a laboratory and/or a team, sometimes even specific to a single researcher. The complexity of the technical protocols forces engineers and technicians to be highly specialised, thus rendering a transversal technical culture utopian.
In this context, the NBIC convergence program nonetheless postulates an actualisation of the epistemic uncertainty around transdisciplinary engineering, or in other words it anticipates the reinforcement of the territorialization dynamic. But the intrinsic diversity of the research programs in synthetic biology tends to oppose this program of convergence, especially because it opens numerous potentials for alternative territorializations. The recent and spectacular efficiency increase and cost decrease of genetic synthesis, genetic sequencing and genetic modification have indeed accelerated the externalization process of these technical aspects of synthetic biology: the technical manipulations are performed today by biology technicians and specialised bio-industries. While the existence of such industries confirms the convergent territorialization, the externalization of numerous technical aspects by the scientific laboratories acts as a competing deterritorialization process. Synthetic biologists, who have worked for around three decades to invent, perfect and master techniques enabling them to act on the genome, to read it and to reproduce it, can now focus on designing new experiments for “fundamental” research, such as using genetically modified cells to explore the spatial configuration of the genome, especially during specific phases of cellular development [Mercy, Mozziconacci et al. 2017], or on engineering metabolic processes of industrial or environmental interest. In other words, the more mature the technique is, the less important it is in the daily focus of the scientific fields. The technique stops being a research object in itself, and it becomes the background for action and for other research projects. The creation of new potential by the techniques thus opens up new development paths for the sciences, and contradicts the simplifying idea of a unidirectional convergence of the sciences and technologies.

The epistemological dimensions of the nanotechnology and synthetic biology apparatuses are also more contrasted than the NBIC convergence program would have us believe. There are in particular other territorialization tendencies, and other potential epistemic reconfigurations than those of the NBIC convergence. For example, synthetic biology is often associated with system biology. System biology can also be described as an emergent science, in the sense that it has developed rapidly following the increase of computational power, and especially the increased capacity to model complex systems, linked to the exponential capacity to collect huge amounts of biological data. The point of convergence between synthetic biology and system biology, justifying their institutional association, is the importance of modelling, and thus the central place of bioinformatics. Naïvely, system biology could be included in the NBIC convergence, as part of the bio-info convergence, but an interesting feature of system biology is its strongly analytical epistemic position [Kastenhofer 2013, 136]. From this perspective, then, the process of territorialization is very different from that of the NBIC program: it is not driven by technical action goals, but by knowledge goals. Compared to the NBIC program, system biology represents a deterritorialization of the bioinformatics synergy towards complex interactions and interdependencies between the multiple levels of organisation inside the cell, and more generally in the living world.
Dupré points out the renewed interest in the interactions between organisms and the environment, notably through epigenetics, and in mutualistic collaborations and interdependency situations, as exemplified by the numerous studies of metagenomics about the microbiota—the microbial ecosystems that live in symbiosis with every macroscopic organism [Dupré 2014]. Dupré explains how this research profoundly transforms the paradigm of biology, from a theory structured by individual genetic identity and Darwinian selection (including competition for survival), to a theory structured by dynamic genetic interactions and vital interdependency [Dupré 2014, 203]. The pluralisation processes at the epistemological level present a challenge different from the pluralisation of the objects or even of the modes of action: how may a discipline—in this specific case, biology—continue to make sense as a whole, if its theories are pluralising? The picture of biology presented by metagenomics seems quite different from the picture presented by the engineering-oriented branches of synthetic biology, yet they represent two possible developments of contemporary biology in convergence with other disciplines.

The major difficulty in articulating the engineering-oriented branches of synthetic biology and the branches of microbiology oriented towards complexity analysis is the divergent ontological furniture used in each field. On one hand, the biological entities are expressed in terms of atomistic technological elements (program, chassis, logic gates, regulation circuits, etc.), and on the other, these same entities are expressed in terms of holistic ecological parameters (chemical conditions of the milieu, messages, mutualistic interactions, etc.). To make sense of this ontological divergence resulting from the different dynamics at play in an apparatus, Quine’s philosophy may be of help. Quine explains that the ontological furniture of a discipline, its system of objects, is relative to its particular system of word relations, i.e., its general theory [Quine 1969, 50]. Based on a holistic and pragmatic theory of language, the relativity of ontology to epistemology implies that translation is always possible between theories, on the express condition that it respects the relations organising the ontological system [Quine 1969, 50]. The possibility of translation explains why all epistemic positions included in a metastable epistemic system remain commensurable. In the example of contemporary biology, it would indeed be absurd to think that two sources of knowledge such as synthetic biology and metagenomics, or epigenetics, are oblivious of each other. There is knowledge circulation between these approaches, thanks to translation processes.

We can generalise this idea by considering all territorialization and deterritorialization processes in situations of epistemic metastability as consequences of translation choices. The NBIC convergence relies on translating the different ontologies in terms of mechanical engineering. One example in nanotechnology of this choice of translation is the description of dynamic molecular structures as “machines” or “motors”: most of the phenomena could just as well be described in terms of chemical reactions and configurations as in terms of the transformation of an energy impulse into mechanical work.
In the case of synthetic biology, the example of the “genetically engineered machine” (the “GEM” in iGEM) is similar: talking about implementing a genetic program in a biological chassis, sometimes even using the terms software and hardware, includes the objects of synthetic biology in the field of electronic and computational engineering, while it is also possible to talk about gene expression, metabolic functions, etc., and indeed these two forms of discourse coexist in the field of synthetic biology. Since both apparatuses are complex mixes of dynamics, some of which territorialize them as part of the NBIC convergence while others, on the contrary, open up deterritorializing potential, the NBIC convergence appears to be only one possible system of translation: it overlaps parts of both apparatuses, but exhausts the potential of neither.

6 Conclusion

Evaluating the reality of the nano-bio convergence by comparing the nan- otechnology apparatus and the synthetic biology apparatus enables us to see the NBIC convergence for what it is: one of the processes that the two apparatuses have in common. On one hand, the NBIC convergence is thus a powerful performative territorialization process, based on concretely converging technical and epistemic tendencies, and motivated by an ideological vision, on the other hand, it is only one way to make sense of the reality of the apparatuses by translating compatible variables and objects into the same discourse. The NBIC convergence, and more generally, the interpretation of the metastable state in which contemporary sciences and technologies are as a convergence, is a simplification. It presents the epistemic plurality and the proliferation of the modes of interaction with the world as a unidimensional dynamic. The pertinence of the idea of convergence is 70 Louis Ujéda thus questionable, and since it clearly is an ideologically oriented program, it should be presented as such. Another way to make sense out of this complexity is therefore to analyse the translation processes of ontologies that happen at the epistemic level, the knowledge circulation, the decompartmentalization of traditional disciplines, and the actual modes of existence of the objects and beings produced by the disciplines. As Guchet explains clearly in his study of nanotechnology, finding the right angle to make sense out of emergent technosciences is difficult, but it is an ethical necessity.

Bibliography

Annaluru, Narayana, Muller, Héloïse, et al. [2014], Total synthesis of a functional designer eukaryotic chromosome, Science, 344(6179), 55–58, doi: 10.1126/science.1249252.

Bensaude-Vincent, Bernadette [2013a], Between the possible and the actual: Philosophical perspectives on the design of synthetic organisms, Futures, 48, 23–31, doi: 10.1016/j.futures.2013.02.006.

—— [2013b], Discipline-building in synthetic biology, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 44(2), 122–129, doi: 10.1016/j.shpsc.2013.03.007.

Cameron, D. Ewen, Bashor, Caleb J., et al. [2014], A brief history of synthetic biology, Nature Reviews Microbiology, 12, 381–390, doi: 10.1038/nrmicro3239.

Campos, Luis [2010], That was the synthetic biology that was, in: Synthetic Biology: The technoscience and its societal consequences, edited by M. Schmidt, A. Kelle, A. Ganguli-Mitra, & H. Vriend, Dordrecht: Springer Netherlands, 5–21, doi: 10.1007/978-90-481-2678-1_2.

Cau, Jean-Christophe, Lalo, Hélène, et al. [2009], Molecular analysis for medicine: A new technological platform based on nanopatterning and label-free optical detection, Oncologie, 11(1), 148–152, doi: 10.1007/s10269-009-1825-7.

Chalmeau, Jérôme, Dagkessamanskaia, Adilia, et al. [2009], Contribution to the elucidation of the structure of the bacterial flagellum nano-motor through AFM imaging of the M-Ring, Ultramicroscopy, 109(8), 845–853, doi: 10.1016/j.ultramic.2009.03.035.

Del Vecchio, Domitilla [2015], Modularity, context-dependence, and insulation in engineered biological circuits, Trends in Biotechnology, 33(2), 111–119, doi: 10.1016/j.tibtech.2014.11.009.

Deleuze, Gilles & Guattari, Félix [1980], Mille plateaux, Paris: Les Éditions de Minuit, 2009.

Dupré, John [2014], Processes of Life: Essays in the Philosophy of Biology, Oxford: Oxford University Press.

Dymond, Jessica & Boeke, Jef [2012], The Saccharomyces cerevisiae SCRaMbLE system and genome minimization, Bioengineered, 3(3), 170–173, doi: 10.4161/bbug.19543.

Endy, Drew, Knight, Tom, et al. [2004], Synthetic Biology 1.0 Overview, Tech. rep., https://openwetware.org/wiki/Synthetic_Biology:Synthetic_Biology_1.0.

Est, Rinie van, Stemerding, Dirk, et al. [2014], From Bio to NBIC convergence – From Medical Practice to Daily Life. Report written for the Council of Europe, Committee on Bioethics, The Hague: Rathenau Instituut.

Guchet, Xavier [2014], Philosophie des nanotechnologies, Paris: Hermann.

Kastenhofer, Karen [2013], Two sides of the same coin? The (techno)epistemic cultures of systems and synthetic biology, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 44(2), 130–140, doi: 10.1016/j.shpsc.2013.03.008.

Klein, Étienne [2010], Le Small Bang des nanotechnologies, Paris: Odile Jacob.

Ma, Siying, Tang, Nicholas, et al. [2012], DNA synthesis, assembly and applications in synthetic biology, Current Opinion in Chemical Biology, 16(3), 260–267, doi: 10.1016/j.cbpa.2012.05.001.

Malyshev, Denis A., Dhami, Kirandeep, et al. [2014], A semi-synthetic organism with an expanded genetic alphabet, Nature, 509, 385–388, doi: 10.1038/nature13314.

Mercy, Guillaume, Mozziconacci, Julien, et al. [2017], 3D organization of synthetic and scrambled chromosomes, Science, 355(6329), 1050, doi: 10. 1126/science.aaf4597.

O’Malley, Maureen A. [2009], Making knowledge in synthetic biology: Design meets kludge, Biological Theory, 4(4), 378–389, doi: 10.1162/BIOT_a_00006.

Quine, Willard Van Orman [1969], Ontological Relativity and Other Essays, New York: Columbia University Press.

Roco, Mihail C. & Bainbridge, William Sims (eds.) [2003], Converging Technologies for Improving Human Performance. Nanotechnology, Biotechnology, Information Technology and Cognitive Science, Dordrecht: Springer, doi: 10.1007/978-94-017-0359-8.

Sauvage, Jean-Pierre [2017], Entretien à La Recherche, La Recherche, 523, 36–68.

Simondon, Gilbert [2005], L’Individuation à la lumière des notions de forme et d’information, Krisis, Grenoble: Jérôme Millon.

Tamsir, Alvin, Tabor, Jeffrey J., et al. [2010], Robust multicellular computing using genetically encoded NOR gates and chemical “wires”, Nature, 469, 212–215, doi: 10.1038/nature09565.

Taylor, Alexander I., Pinheiro, Vitor B., et al. [2014], Catalysts from synthetic genetic polymers, Nature, 518, 427–430, doi: 10.1038/nature13982.

Tremblay, Jacques P. [2015], CRISPR, un système qui permet de corriger ou de modifier l’expression de gènes responsables de maladies héréditaires, Med Sci (Paris), 31(11), 1014–1022, doi: 10.1051/medsci/20153111016.

Venter, J. Craig [2013], Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life, New York: Viking.

Vilanova, Cristina, Tanner, Kristie, et al. [2015], Standards not that standard, Journal of Biological Engineering, 9(1), 17, doi: 10.1186/s13036-015-0017-9.

Wu, Fan & Tan, Cheemeng [2014], The engineering of artificial cellular nanosystems using synthetic biology approaches, Wiley Interdisciplinary Reviews: Nanomedicine and Nanobiotechnology, 6(4), 369–383, doi: 10.1002/wnan.1265.

Quoi de neuf chez les molécules-machines ? L’incroyable aventure des nanovoitures

Sacha Loeve IRPhiL, Institut de recherches philosophiques de Lyon, Université Jean-Moulin, Lyon III (France)

Résumé : Cet article s’intéresse à l’évolution récente d’une thématique particulière des nanosciences et nanotechnologies, les nanomachines moléculaires, au prisme d’un événement singulier dont la préparation a mobilisé les efforts des chercheurs avec une intensité particulière : la première course internationale de nanovoitures en avril 2017. Il retrace la genèse de ces objets, explique les motifs de l’organisation de la course, en raconte quelques épisodes et s’interroge sur la signification de cet événement : a-t-on affaire à un processus de gamification d’une technoscience en mal de légitimité socioéconomique et de pertinence scientifique ? Il montre qu’autre chose est en jeu, qui n’exclut pas la production de connaissances mais l’inscrit dans une nouvelle manière – technophanique – de réaliser des expériences.

Abstract: This article focuses on the recent evolution of a particular field of nanosciences and nanotechnology, molecular nanomachines, through the prism of an exceptional event whose preparation mobilized the efforts of researchers with particular intensity: the first international nanocar race in April 2017. It traces the genesis of these objects, explains the reasons for the organization of the race, presents a few episodes and reflects on the significance of this event: are we dealing with a process of “gamification” of a technoscience suffering from a lack of socio-economic legitimacy and scientific relevance? Here we argue that something else is at stake, which does not exclude knowledge production, but inscribes it in a new—technophanic—way of carrying out experiments.

Philosophia Scientiæ, 23(1), 2019, 73–98.

1 Quoi de neuf chez les nanomachines ?

Le point de vue proposé ici porte moins sur le développement général des nanosciences et nanotechnologies (NST) que sur les récentes péripéties survenues dans une thématique particulière des NST, les nanomachines moléculaires artificielles (machines moléculaires ou molécules-machines1). En comparaison des innombrables laboratoires travaillant sur les nanoparticules, les nanotubes ou le graphène, les nanomachines ne mobilisent tout au plus qu’une douzaine d’équipes de recherche à l’échelon international. Il s’agit donc d’un domaine de recherche très circonscrit, mais son imagerie « technomorphique » [Jimenez-Bueno & Rapenne 2003], [Schummer 2006] et son recours central à l’instrument fétiche des NST, le microscope à effet tunnel (STM2), en font une thématique emblématique des NST. C’est à l’étude philosophique de ce domaine, de ses pratiques et de ses objets que j’ai consacré mes années de thèse [Loeve 2009], en dialogue avec les scientifiques qui m’en ont fait partager la passion tout en me laissant la liberté d’exprimer quelques nanodésaccords avec eux [Loeve 2008], [Joachim 2008]. Au motif que les molécules-machines ne sont pas directement engagées dans des développements applicatifs, les chercheurs les présentaient souvent comme des objets de science fondamentale en jetant un regard dépréciatif sur la technologie, identifiée à la poursuite d’objectifs utilitaires et commerciaux. La distinction entre « nanosciences » et « nanotechnologies » était généralement utilisée à cette fin. Ma thèse affirmait au contraire que, même en l’absence de finalités applicatives, il convenait de considérer les nanomachines avant tout comme des objets techniques – ou si l’on veut comme des objets technoscientifiques [Loeve 2016], [Bensaude-Vincent & Loeve 2018] – pour comprendre les pratiques épistémiques qu’elles engagent ; que ces pratiques ne relèvent ni d’une recherche fondamentale mue par la résolution de problèmes théoriques, ni d’une science appliquée et finalisée par des objectifs pratiques, mais d’une recherche technologique consistant à la fois et indissociablement en un travail d’individuation et de connaissance de l’individuation de nano-objets techniques dans leurs milieux d’opération [Simondon 1958]3. Il devenait alors possible de reconsidérer les NST au prisme d’un concept de technologie non soluble dans le schéma linéaire de la « science-pure-puis-appliquée », et plus proche de certains des sens anciens de la technologie comme logos des technai, discours sur les techniques et connaissance opérative des techniques [Carnino & Hilaire-Pérez 2017]. Je proposais enfin de montrer à l’œuvre dans ces pratiques et leurs discours d’accompagnement un logos narratif, irréductible aux différentes « -logies » des sciences théoriques et expérimentales aussi bien qu’aux logiques instrumentales du génie industriel : les molécules-machines, comme bien d’autres nano-objets, fonctionnent en racontant des histoires et racontent des histoires en fonctionnant.

Quoi de neuf chez les nanomachines depuis ces années ? Deux événements tout en contrastes. Le premier est le prix Nobel de Chimie décerné en 2016 à Jean-Pierre Sauvage, Sir James Fraser Stoddart et Bernard Lucas Feringa « pour la conception et la synthèse de machines moléculaires4 ». Le récit qu’inspire ce prix Nobel, du moins autour du lauréat français, célèbre une chimie fondamentale et hautement créative menée par des universitaires « classiques » qui prennent le temps de la recherche sans se préoccuper des applications à court terme. Le second événement auquel cet article est consacré rompt avec cette image sage et académique. Il s’agit de l’organisation, portée par Christian Joachim, d’une course internationale de « nanovoitures » pilotées par microscope à effet tunnel (STM) [Joachim, Launay et al. 2016]. La première édition de cette NanoCar Race s’est tenue à Toulouse en avril 2017 au PicoLab du Centre d’Élaboration de Matériaux et d’Études Structurales (CEMES) ; la prochaine est prévue en 2021.

Comment les chercheurs en sont-ils venus à organiser cette course de petites voitures ? Que révèle-t-elle de ce domaine de recherche que n’aurait pas capturé le concept susmentionné de technologie ? Un esprit de compétition à tout crin ? Un style ludique et débridé ? Il m’a fallu du temps pour prendre cette course au sérieux. Au départ, je pensais placer le questionnement sous l’angle du jeu : la NanoCar Race relèverait d’un processus de gamification (ou ludification) de la recherche [Deterding, Khaled et al. 2011]. Après tout, me disais-je, rien d’étonnant à ce que des recherches ne visant ni des objectifs théoriques ni des objectifs pratiques puissent prendre la forme d’un jeu. Les machines moléculaires sont parfois évoquées sous le nom de tinkertoys par leurs concepteurs. Certains comparent la NanoCar Race au jeu vidéo MarioKart5. Si donc il reste encore de la place en bas, pourquoi ne serait-ce pas pour jouer ? « Pour le fun », disait Feynman... [Feynman 1959]. Désormais, je soutiendrai plutôt que la NanoCar Race appartient au genre des « technophanies » définies par Gilbert Simondon comme des « ritualisations médiatrices entre la culture et la technicité » [Simondon 1960, 43]. Mais avant d’expliquer pourquoi, place au récit, à l’incroyable aventure des nanovoitures.

1. Voir la distinction opérée par Christian Joachim entre ces deux concepts dans sa contribution à ce numéro, 151–159. Cette distinction n’est pas au cœur de notre propos ici. 2. Scanning tunnelling microscope. Le STM a joué un rôle symbolique très fort en diffusant des images du « nanomonde » bien au-delà de la seule sphère scientifique. Mais il n’est en fait que très marginalement utilisé dans la grande majorité des domaines de recherche des NST, où prédomine une instrumentation plus classique dite en « champ lointain », comme les microscopies électroniques et les différentes variétés de spectroscopie. 3. Ce travail technologique d’individuation n’exclut ni les « remontées » fondamentales (en théorie quantique par exemple [Joachim 2006]) ni les « retombées » applicatives (limitées pour l’heure à quelques brevets spéculatifs en nanoélectronique [Heath, Kuekes et al. 2000], [Aono, Hasegawa et al. 2006]), mais il requiert de changer de logiciel conceptuel en délaissant la sémantique de « l’amont » et de « l’aval » propre au modèle linéaire. Il faudrait davantage parler de pratiques de décontextualisation et recontextualisation d’objets génériques [Marcovich & Shinn 2011], [Coutellec 2015] impliquant à chaque fois un travail de réindividuation dans de nouveaux milieux associés de fonctionnement [Loeve 2009]. 4. C’est au cours des années 1990 que les premières machines moléculaires ont été synthétisées [Anelli, Spencer et al. 1991] puis mises en fonctionnement par électrochimie en phase liquide par les groupes de Stoddart et de Sauvage [Bissell, Córdova et al. 1994], [Livoreil, Dietrich-Buchecker et al. 1994]. Le groupe de Feringa a introduit des moteurs moléculaires activables par la lumière dans des cristaux liquides [Koumura, Zijlstra et al. 1999].

2 Coup de rétroviseur sur l’histoire des nanovoitures

En octobre 1995, un petit workshop, Nano-age Mechanics, est organisé par James Gimzewski et Christopher Gerber6 à Loch Lomond en Écosse, sur un financement « blanc » de l’OTAN. Selon Christian Joachim, qui y participait, l’initiative partait d’une idée émise par Heinrich Rohrer : la mécanique moléculaire serait plus adaptée que l’électronique pour réaliser des nano-composants fonctionnels. En effet, les limites de l’électronique moléculaire basée sur l’analogie avec les circuits connus (une molécule = un composant) avaient déjà été mises en évidence : trop de bande passante, pertes en fréquence, etc. La trentaine de participants de Nano-age Mechanics se répartit en trois groupes, chacun devant faire une proposition de nanomé- canique. Il est alors question notamment de « molécules-camions » [molecule- trucks]. Don Eigler propose une locomotive à atomes, Joachim et Gimzewski une roue moléculaire. À l’époque, ces derniers travaillent avec le STM sur des molécules plates à quatre pattes [Jung, Schlittler et al. 1996]7, dont certaines sont baptisées landers à partir de 1997 par analogie avec les véhicules d’exploration martienne. Ces landers servent plus tard de fil amovible pour mesurer des contacts électriques entre une molécule et un pas de la surface [Langlais, Schlittler et al. 1999]. Lors d’une discussion avec le microscopiste Sébastien Gauthier fraîchement arrivé au CEMES pour développer l’activité STM8, ils évoquent sur le ton de la plaisanterie la possibilité de remplacer les pattes du lander par des roues pour créer une brouette moléculaire.

5. Frank Eisenhut https://www.youtube.com/watch?v=VStpU0VNN40&t=7s, 3’46. 6. L’un des co-inventeurs du STM avec Heinrich Rohrer, Gerd Binnig et Eddie Weibel et de l’AFM (Atomic Force Microscope) avec Binnig. 7. Il est indispensable de surélever les molécules de la surface pour pouvoir les imager au STM, sans cela leurs états électroniques se mélangent à ceux de la surface. 8. Sébastien Gauthier est le premier chercheur à avoir construit un STM en France dans les années 1980.

La nanovoiture apparaît à peu près à la même période sous la forme d’un pastiche confectionné par le chercheur en physique computationnelle Marek Michalewicz à la cinquième Foresight Institute Conference on Molecular Nanotechnology9 à Palo Alto, en novembre 1997. Sa présentation s’intitule « Nano-cars : Feynman’s Dream Fulfilled or the Ultimate Challenge to Automotive Industry ? » [Michalewicz 1997a]. Dans la version publiée aux Annals of Improbable Research, il rappelle que L’idée d’une minuscule voiture [tiny little car] est apparue la première fois dans un papier de Richard Feynman. Cependant, Feynman n’imaginait qu’une micro-voiture10, approximativement 4000 fois plus petite qu’une automobile. Moi, par comparaison, je repousse les limites aux frontières du « moléculairement possible », en proposant la construction d’une nano-voiture. [Michalewicz 1998, 18, je traduis] L’un des défis lancés par Feynman en 1959 est en effet de « construire une automobile plus petite que ce point → .11 ». Feynman propose même d’organiser une compétition entre laboratoires, une sorte de « concours de lycéens ». Mais à quelle fin ? Qui sait ? Bien sûr, une automobile miniature pourrait ne servir qu’aux acariens à circuler tout autour de nous, et je suppose que nos intérêts chrétiens ne vont pas aussi loin. Nous avons toutefois évoqué le potentiel de la fabrication de petits éléments pour les ordinateurs dans des usines complètement automatiques. [...] Ce serait intéressant si en chirurgie vous pouviez avaler le chirurgien. Vous mettez le chirurgien mécanique dans les vaisseaux sanguins et il va jusqu’au cœur [...]. Il trouve quelle valve est fautive, prend son petit couteau et la découpe. D’autres petites machines pourraient être incorporées de manière permanente pour assister un organe défectueux. [...]

9. Les conférences du Foresight Institute, fondé par Christine Peterson et K. Eric Drexler en 1986, rassemblent futuristes, scientifiques et hommes d’affaires autour de la version drexlérienne des nanotechnologies, et délivrent tous les ans depuis 1993 le Feynman Prize in Nanotechnology. 10. Cette allusion suggère que les défis lancés par Feynman concernaient surtout des problèmes de micro-usinage et visaient à repousser les limites de la précision de fabrication. Si ces enjeux devinrent centraux pour l’industrie de la microélectronique, ils diffèrent qualitativement de ceux posés par la construction bottom-up à partir des atomes et des molécules. 11. Titre retenu pour le digest du discours de Feynman paru l’année suivante dans Popular Science [Feynman 1960]. Plus précisément, le premier challenge est d’écrire les 24 volumes de l’Encyclopaedia Britannica sur une tête d’épingle, le second de fabriquer le plus petit moteur électrique possible. Voir aussi la version réactualisée prononcée au Esalen Institute à Big Sur, Californie, « Tiny machines », [Feynman 1984].

Utiles ou pas, elles seraient sûrement amusantes à faire. [...] J’ai signalé quelques applications d’intérêt économique, mais bon, je sais que la raison pour laquelle vous le feriez pourrait être juste pour le fun [just for fun]. Quant à Michalewicz, voici comment il justifie l’utilité des nanovoitures :

Un des objectifs les plus intéressants et les plus difficiles de la nano-technologie dans les années à venir sera la construction de pyramides de buckyballons12. [...] D’un point de vue technologique, le transport d’un grand nombre de buckyballons pose de sérieux problèmes. Je propose donc la construction de la plus petite voiture possible, une nanovoiture. [Michalewicz 1998, 18, je traduis]

Si le besoin de pyramides de buckyballons appelle le besoin de nanovoitures, on peut se demander : pourquoi a-t-on besoin de pyramides de buckyballons ? C’est d’abord, explique Michalewicz, une question de symbole : chaque civilisation laisse derrière elle des pyramides à la mesure de sa grandeur ; la construction de gigantesques structures étant désormais bien maîtrisée, la civilisation du futur se distinguera par la construction de toutes petites pyramides. Elles pourront être mises à profit par le SETI13 comme « preuves de l’ supérieure de notre civilisation et de sa capacité à contrôler la matière ». Elles auront aussi des usages scientifiques, pour tester avec précision les hypothèses sur l’énergie « paranormale » des pyramides en physique du solide. De par leur taille, elles seront démocratiques, accessibles à tous à travers une large gamme d’usages. Comme cadeaux de Noël, elles raviront les collectionneurs. Mais les nanovoitures seront aussi des technologies capaci- tantes [enabling] pour des usages autres que la construction de nanopyramides, comme modéliser le trafic routier de Mexico, tester de nouveaux codes de la route, prévenir les accidents, prototyper des véhicules intelligents, délivrer des médicaments sur des cibles précises, et distribuer du courrier à l’échelle nanométrique [Michalewicz 1997b]. Michalewicz tourne ainsi en dérision la reprise du discours de Feynman par des nanoprophètes comme Eric Drexler. Il parodie la manière dont les nanotechnologies survendent leurs bénéfices potentiels et inventent de toutes pièces des problèmes ajustés sur mesure à leurs « solutions14 ». Toutefois, Michalewicz fournit une liste des « ingrédients moléculaires » requis pour synthétiser sa nanovoiture : des roues ferriques, un

12. Les buckyballs, ou fullerènes, molécules sphériques de soixante atomes de carbone nommées d’après l’architecte Buckminster Fuller sont des icônes des nano- technologies [Loeve 2017]. 13. Search for Extra-Terrestrial Intelligence. 14. Michalewicz va même jusqu’à pasticher l’appel à étudier leurs implications « philosophiques et sociales » : « Les implications philosophiques et sociales de cette invention sont difficiles à prédire. Nous pouvons seulement spéculer sur le fait que ses dimensions et son échelle de production en masse rendront la civilisation globale plus égalitaire et plus juste. Peut-être verrons-nous arriver un jour où il y aura L’incroyable aventure des nanovoitures 79 plateau graphitique et des nanotubes de carbone ; le système de propulsion est intégré dans les roues15. Or il s’agissait de composés chimiques connus et disponibles, contrairement aux structures diamondoïdes imaginées par Eric Drexler et , jamais synthétisées, et peut-être infaisables... Pendant ce temps, Joachim et Gimzewski surprennent une molécule en train de tournoyer comme une girouette sous la pointe du STM [Gimzewski, Joachim et al. 1998]. Ce mouvement moléculaire spontané est non-directionnel, équiprobable dans toutes les directions. Pour le contrôler, c’est-à-dire pour le rendre directionnel, il faut obtenir la rotation à partir d’un mouvement de translation. Or le dispositif le plus simple pour convertir une translation en rotation n’est autre que la brouette ! À partir de 2000, l’équipe toulousaine du CEMES s’embarque dans un vaste programme de mécanique moléculaire16. Joachim confie la synthèse de la molécule-brouette à un chimiste talentueux, formé aux machines moléculaires chez Sauvage, Gwénaël Rapenne. La synthèse prend trois ans [Joachim, Tang et al. 2002], [Jimenez-Bueno & Rapenne 2003]. Une fois réalisée, la molécule- brouette est déposée sur une surface de cuivre afin d’être manipulée sous la pointe du STM [Grill, Rieder et al. 2005]. Mais les tentatives de manipulation échouent : les roues avant sont trop chimisorbées, la barrière de diffusion est trop grande ; la molécule se déforme, mais ne roule pas. En 2005, Michalewicz voit parader dans les pages de Nano Letters une nanovoiture semblable, à quelques modifications près, à celle qu’il avait imaginée, les buckyballons y jouant le rôle des roues [Shirai, Osgood et al. 2005]. L’article affirme que la nanovoiture est capable de rouler sur une surface d’atomes d’or de manière directionnelle (d’avancer tout droit ou de tourner) avec une vitesse variant en fonction de la température. Sa synthèse est dirigée par James Tour, de Rice University (Houston, Texas), qui avait assisté à la conférence du Foresight Institute de 199717. Huit ans plus tard, la nanoblague devient donc réalité, mais elle fait tellement mousser le nanomonde qu’elle ne fait plus rire son auteur qui, depuis, s’est rebiffé et en réclame la paternité. La nanovoiture de Tour provoque aussi un certain agacement chez les chercheurs français qui ironisent (« il faut mettre des gens dedans, [...] ça tombe bien, James Tour a aussi fait des bonhommes moléculaires, des nanokids, [...] du point de vue éthique il faudrait un nano- code de la route18 ! »), moquent les engagements idéologiques de Tour19, et une de buckyballons et une nano-voiture pour chaque personne sur terre » [Michalewicz 1997b, 4, je traduis]. 15. Antiferromagnétiques, censées interagir magnétiquement avec le substrat. 16. J’ai raconté ailleurs les péripéties de cette histoire [Loeve 2011]. 17. 
Il y présentait un papier sur l’électronique moléculaire [Reinerth, Jones et al. 1998]. 18. Entretien dans un laboratoire de chimie grenoblois, 2006. 19. Sympathisant des thèses de l’intelligent design, il a signé la pétition « A Scientific Dissent from Darwinism » lancée en 2001. Il donne des conférences sur les mystérieuses origines de la vie ou encore sur « Jésus Christ et les nanotechnologies ». 80 Sacha Loeve trouvent sa voiture « survendue » d’un point de vue chimique. En effet, les images qui la représentent dans la première version omettent volontairement de faire figurer les longs fils moléculaires (C10H21 et C12H25) greffés sur son châssis pour en rigidifier la structure, ce qui l’aurait fait ressembler à un hérisson. D’une certaine manière, on fait mentir la formule chimique pour la faire ressembler à une voiture. Du côté des physiciens, les critiques visent l’instrumentation et la méthode utilisée pour démontrer le mouvement. L’équipe de Tour n’étant composée que de chimistes, ceux-ci doivent externaliser leurs expériences en STM. Les premières étaient confiées à une autre équipe de son université, or « ils n’ont pas un microscope à sonde locale très précis à Rice20 ». Aussi l’affirmation selon laquelle la nanovoiture roulait de manière directionnelle reposait sur le crédit accordé à une impression visuelle et non sur une identification précise de signatures de manipulation permettant de discriminer, sous forme de signaux électriques du STM, un mouvement de rotation d’un mouvement de diffusion21. Pour améliorer cet aspect, Tour collabore désormais avec le physicien STMiste Leonhard Grill qui dirige depuis 2013 le groupe de Single Molecule Chemistry NanoGraz à l’université de Graz en Autriche. Les molécules de Tour traversent l’Atlantique et les Alpes et sont manipulées sous la pointe du STM de Leo Grill22. De leur côté, les chercheurs toulousains parviennent à contrôler la rota- tion moléculaire, non pas avec la brouette mais avec son essieu provenant d’un précédent dépôt de brouettes qui s’étaient décomposées sur la surface [Rapenne, Grill et al. 2006]. Ils réinventent la roue ! Ils mesurent ses signatures électriques de rotation [Grill, Rieder et al. 2007] et fournissent aussi une preuve visuelle en taguant chimiquement une branche d’une molécule-roue engrenée par la pointe du STM le long d’une « crémaillère » (le bord d’un cristal monocouche) [Chiaravalloti, Gross et al. 2006]. Au CEMES, la nanovoiture n’est pas la priorité mais un axe au sein d’un programme plus large de mécanique moléculaire dont est aussi issu un moteur moléculaire électrique

« La vie ne devrait pas exister. C’est tout ce que nous savons de la chimie » [Tour 2017, je traduis]. Tour énumère les difficultés de la synthèse des nanomachines pour soutenir que si les chimistes peinent encore à effectuer la synthèse de « simples » nanovoitures, alors il y a peu de chances qu’ils puissent un jour réaliser et comprendre la synthèse du plus simple être vivant [Tour 2016]. Son argument s’appuie sur le postulat implicite que synthétiser X = valider une compréhension de X. 20. https://www.youtube.com/watch?v=Q0TEKb2zx5Q&t=2133s, 20’37. 21. En mode imagerie, le STM ne produit que des images fixes, mais il permet aussi une sorte de spectroscopie du mouvement : différents mouvements moléculaires ont différentes signatures de courant tunnel. Ces méthodes sont développées par l’équipe toulousaine à grands renforts de calculs et de simulations dès la fin des années 1990 [Bouju, Girard et al. 1997], [Moresco, Meyer et al. 2001], [Alemani, Gross et al. 2005]. 22. Leo Grill était masterant puis thésard de Francesca Moresco au laboratoire de Karl-Heinz Rieder (un des pionniers du STM et collaborateur de H. Rohrer) à la Freie Universität Berlin ; il a beaucoup collaboré avec Joachim pendant sa thèse. L’incroyable aventure des nanovoitures 81 rotatif fixé entre deux électrodes [Perera, Ample et al. 2012]23. Ils synthétisent cependant un nanobuggy reprenant le design de la brouette avec quatre roues au lieu de deux [Joachim & Rapenne 2013]. C’est cette molécule qui concourra à la NanoCar Race de 2017. De son côté, le groupe de Tour livre une série de molécules à roulettes : demi-camions, tricyles, nanocamions, nanodragster, nanowagons... [Shirai, Morin et al. 2006], [Sasaki, Morin et al. 2007], [Vives, Kang et al. 2009], [Villagómez, Sasaki et al. 2010]. Certaines ont des parties chirales « moteur » sensibles à la lumière ultraviolette [Morin, Shirai et al. 2006]24, mais leur fréquence de rotation est bien trop lente : 1,8 tour par heure. Ils proposent aussi un moteur chimique [Godoy, Vives et al. 2011], mais le problème est qu’à cette échelle les déchets sont gros comme des camions, du même ordre de grandeur que le véhicule ! Ils confectionnent finalement un moteur capable de tourner à 3 millions de rotations par seconde [Chiang, Mielke et al. 2012], mais, nouveau problème : une fois excité, le moteur transfère l’énergie d’activation aux roues au lieu de l’utiliser pour tourner. Il faut donc changer les roues : ils remplacent les fullerènes par des carboranes25, insensibles à l’énergie de la lumière. Mais le problème des roues carboranes est leur faible adhérence à la surface métallique : les molécules glissent toutes seules comme sur de la glace ; quand on les photo-excite pour activer le moteur, elles disparaissaient du radar, trop mobiles. Changer de surface pour stabiliser les nanovoitures ? On substitue le verre à l’or [Khatua, Godoy et al. 2010]. Mais le verre ne permet pas d’imager au STM26. Il faut alors taguer les nanomachines pour les suivre en spectroscopie par fluorescence... [Vives, Guerrero et al. 2010]. En dix ans, les chercheurs sont donc passés de la poussette à la propulsion. Mais à peine une solution est-elle trouvée qu’elle génère à son tour moult problèmes inédits ! Surtout, les nanocars sont toutes singulières. Elles ne fonctionnent qu’insérées dans un dispositif qui comprend la surface et le STM. 
Chaque molécule est testée sur un STM différent27 avec des pointes différentes, un câblage, des pré-amplis pour remonter les signaux, des filtres et des bandes passantes qui ne sont jamais les mêmes. Comment, dès lors, comparer les performances des nanovoitures, dépasser leur singularité ? En les plaçant sur un même terrain d’expérience – une unique surface dans le même STM. Telle est la finalité première de la course : la mise en situation expérimentale équivalente et simultanée de plusieurs nanocars.

23. Celui-ci, imaginé en même temps que la brouette, nécessita six ans de synthèse et trois années supplémentaires pour sa mise en fonctionnement. 24. Ces parties « moteur » n’entraînent pas les roues via un mécanisme de transmission du mouvement mais constituent plutôt une roue motrice supplémentaire. 25. Une molécule polyédrique constituée d’atomes de bore et de carbone. 26. Le STM ne peut fonctionner que sur des surfaces métalliques ou semi-conductrices, car il doit former un circuit électrique comportant une jonction tunnel avec l’échantillon. 27. Les STM de recherche ne sont jamais complètement standardisés.

Or cela est devenu possible grâce à l’apparition d’une nouvelle génération de STM à quatre pointes dans les années 2010 : le LT-UHV 4-STM Nanoprobe28 fabriqué par l’équipementier ScientaOmicron. En 2013, quand Joachim & Rapenne [Joachim & Rapenne 2013] annoncent la course, ils ont commandé ce microscope et se préparent à l’installer.

3 Une curieuse cérémonie

La cérémonie officielle de lancement a lieu à Futurapolis à Toulouse en novembre 2015. C’est à une autre cérémonie que j’ai pu assister, dont je ferai un récit situé, fondé sur mes propres impressions et celles que j’ai pu recueillir des autres participants. Organisé en mars 2016, lors du dix-neuvième Forum des microscopies à sonde locale, l’événement visait à officialiser un partenariat entre les organisateurs de la NanoCar Race et PSA29 Peugeot Citroën, sponsor de l’équipe française. Le Forum 2016 se tient d’ailleurs au Musée de l’Aventure Peugeot à Sochaux, fief historique de la marque au lion. Parmi les événements conviviaux ponctuant une semaine de conférences scientifiques, l’annonce du sponsoring de la course intervient à la fin du forum en guise de bouquet final, avant le traditionnel repas de gala. Curieux, les chercheurs sont aussi un brin anxieux. Non pas sceptiques, mais soucieux de l’image de leur recherche devant les industriels et les poli- tiques qui participent à la financer (en particulier FEMTO-ST, la grosse UMR locale organisatrice du forum). Que vont-ils en penser ? Tout cela est-il réellement sérieux ? Joachim prend la parole. Sur un ton enjoué il explique les règles, décrit le dispositif de la piste avec le STM à quatre pointes, présente les équipes, leurs sponsors, et les caractéristiques de leurs nanocars respectives, sans rien promettre. Sur le mode du clin d’œil, il multiplie les allusions à un certain Claude : « Claude, désolé, on a dû y aller au marteau pour changer la géométrie de tes porte-pointes. » Présent au fond de l’auditorium, Claude Viguier, ingénieur pour ScientaOmicron, est le constructeur de l’incroyable 4-STM multipointes dans lequel aura lieu la course. Or son instrument a dû être optimisé et poussé au-delà de ce qui avait été prévu par le constructeur. En effet, l’appareil, dénommé « Nanoprobe », était initialement dédié à la mesure de caractéristiques électriques à l’échelle nano, avec ses pointes servant d’électrodes. Sa préparation pour la course de nanovoitures relève donc d’un détournement d’usage, le transformant en un véritable multi-STM à têtes indépendantes. Même Claude en a été étonné ! Le député président du Pays de l’agglomération de Montbéliard, Charles Demouge, prend ensuite la parole. Il commence par féliciter les chercheurs pour

28. LT-UHV : Low Temperature Ultra-High Vacuum. 29. Peugeot Société Anonyme.

« la maîtrise de l’infiniment petit » dont témoignent ces « petites voitures » tout en ajoutant « je sais aussi que vous savez faire autre chose » (sous-entendu : des choses « plus sérieuses »), puis déclare : « Nous sommes dans l’ir-ra-tio- nnel ! » Un frisson parcourt la salle. Les chercheurs sont blêmes. Bref silence, puis le président poursuit : « [...] Mais c’est bien ! Parce que la science est rationnelle, mais la technologie elle, a besoin d’un grain de folie pour avancer ! » Ouf ! Cette clarification met tout le monde à l’aise. Si c’est la technologie qui est en cause, l’honneur de la science est sauf. Et le Président conclut sur la fierté qu’il éprouve à voir Peugeot, fleuron du pays de Montbéliard, sponsor de l’équipe française. Vient ensuite une allocution de Sylvain Allano, directeur de la division Science & Technologies futures de PSA Peugeot Citroën. Contrairement au politicien, il rationalise l’affaire en justifiant le soutien financier de PSA par trois arguments. 1. Historiquement, deux choses « boostent la technologie » : les guerres et les compétitions. Mieux vaut par conséquent une « compétition sans risques ». Les compétitions mobilisent les ingénieurs et leur permettent de prendre des « virages industriels » décisifs. Tout le monde acquiesce. 2. PSA mène des recherches sur les matériaux. La question des matériaux est capitale pour le « véhicule du futur » : les matériaux actifs, multifonctionnels, qui pourront à la fois « convertir de l’énergie et stocker de l’information » exigent une approche multi-échelles. Avec les nanovoitures on est « à l’autre bout de l’échelle » et il lui paraît très important d’explorer le nanomonde comme on a exploré l’espace30. 3. PSA mène des recherches en tribologie – la science des frottements et de la friction – pour minimiser les pertes de chaleur. Les nanovoitures qui se déplacent sur une surface d’or et résistent au frottement semblent constituer « une manière radicale d’aborder ce problème aux petites échelles ». Joachim s’empresse de répondre à ce dernier argument en évoquant « un exemple d’il y a quelques années », la crémaillère moléculaire [Chiaravalloti, Gross et al. 2006]. Il fouille dans son ordinateur et affiche l’image STM de la molécule qui s’engrène comme une roue dentée sur le rebord en crémaillère d’un cristal monocouche de ces mêmes molécules. Puis il affirme : « sur ma voiture (macroscopique), si je mets pas d’huile dans mon moteur, combien d’atomes je vais perdre à la minute ? [...] Moi, je n’ai perdu aucun atome ! » De cette curieuse cérémonie, on retient l’hétérogénéité des significations et des valeurs projetées sur la course : les appréhensions des chercheurs quant à 30. On retrouve les deux premiers arguments chez Nicolas Seeboth, directeur de la Recherche Polymères & Additifs chimiques pour le groupe Michelin, qui sponsorise aussi la course. Seeboth ajoute un troisième argument : « l’allégement et le poids est souvent un critère clé pour la performance du pneu, [...] Michelin cherche toujours à mettre le moins d’atomes pour aller le plus vite possible. » https://www.youtube. com/watch?v=v7phvu5F_8s&t=203s, 18’20. 
84 Sacha Loeve l’image – cocasse ou spectaculaire – qu’ils donnent de leur science ; les valeurs projetées par les politiciens soucieux du rayonnement et de la compétitivité du territoire ; celles des industriels qui s’efforcent de rationaliser un futur nanotechnologique plus ou moins crédible en le réinscrivant dans les enjeux d’ingénierie de leur secteur ; les valeurs attachées aux performances et aux nouveaux possibles ouverts par l’instrument, dans un jeu d’émulation entre l’initiateur de la course et l’équipementier. Enfin Joachim évoque « la lumière dans les yeux, par exemple, des ados du musée des sciences de Tokyo et la lettre de remerciement d’une prof qui enfin a réussi à intéresser ses élèves à la chimie31 ». Mais une telle révélation requiert bien des efforts en amont.

4 De longs préparatifs

Le LT-UHV 4-STM arrive au CEMES à Toulouse en 2014, l’équipementier ayant accepté de modifier la matrice logicielle pour rendre les pointes indépendantes et contrôlables par quatre ordinateurs distincts. Les chercheurs travaillent ensuite à modifier la mécanique des supports de pointes de manière à ce que quatre balayages en parallèle puissent être effectués avec des pointes plus longues sans qu’elles ne se touchent. La surface sur laquelle se déroule la course est une petite pastille d’or Au (111) de 8 millimètres de diamètre produite par MaTecK. Pour accueillir les nanovoitures et leurs pistes respectives (une piste par véhicule et par pointe STM), la pastille doit être préparée avant d’être insérée dans la chambre ultravide en inox du STM. Elle est nettoyée par bombardement d’ions argon de 10 minutes et cuisson à 500° C pendant 20 minutes. Au bout de quatre cycles, on obtient une surface d’or « atomiquement propre », c’est-à-dire presque pure de défauts et présentant le motif de plissements en chevrons caractéristique du cristal Au (111). Les angles à 45° de ces chevrons fournissent les « tournants » des pistes. Leurs parcours suivent ainsi les lignes de la structure intime du matériau. Les organisateurs avaient initialement prévu que chaque équipe dispose quelques atomes individuels d’or en guise de plots de départs et d’arrivée avant d’entamer la course, mais attendre que les quatre compétiteurs aient réalisé cette opération aurait considérablement retardé le moment du départ. La course a nécessité deux années d’entraînement. Neuf équipes en tout se sont portées candidates. L’une d’elles abandonne fin 2015. Une autre, l’équipe suisse, propose sa candidature au dernier moment, à la suite de la rencontre de Sochaux [Pawlak & Meier 2017]. Six équipes sont finalement sélectionnées par les organisateurs en mai 2016. Le STM de Toulouse comportant quatre pointes, il est prévu que deux équipes contrôlent leur propre STM à distance depuis Toulouse, l’un à Athens (Ohio, USA), l’autre à Graz (Autriche).

31. Communication personnelle, 20 janvier 2018.

Chaque équipe devait préciser les conditions de dépôt des molécules- véhicules sur la surface lors de sa candidature. Lancer et faire atterrir une molécule sur une surface en ultravide est déjà en soi une opération délicate, les basses pressions ayant pour effet de projeter violemment les molécules contre la surface au risque de les disloquer, sans parler du risque d’introduction de molécules adventices présentes dans l’air ambiant. Mais déposer plusieurs molécules différentes sur un même substrat relève du défi, car chacune requiert des procédés et des conditions de dépôt qui lui sont propres. Elles se présentent généralement en poudre, dont il faut sublimer quelques milligrammes dans des évaporateurs de conception « maison », qui doivent être adaptés pour une introduction dans la chambre de préparation du STM à l’aide d’un système de cannes. Les températures d’évaporation requises sont spécifiques à chaque espèce moléculaire. Certaines sont chauffées dans un petit creuset monté sur un orifice de la chambre de préparation avec une valve de contrôle au niveau de sa buse. D’autres sont mises en solution et déposées le long d’un filament électrisé pour être évaporées sur la surface32. L’une des molécules, le nanomoulin [nano- windmill] de la Technische Universität de Dresde, doit même être introduite en pièces détachées et assemblée au STM sur la surface car c’est un composé supramoléculaire avec quatre « ailes » reliées par des liaisons faibles [Nickel, Ohmann et al. 2013]. C’est en fait la première fois que plusieurs dépôts moléculaires spécifiques sont réalisés sur un même substrat en ultravide. Dans cette situation, la moindre erreur est susceptible de ruiner l’échantillon et oblige à recommencer les dépôts des autres. Un véritable casse-tête... D’autant plus que passer de nombreuses fois par la chambre d’évaporation chauffe un peu l’échantillon au risque de déstabiliser les dépôts (les très basses températures « gèlent » le mouvement moléculaire spontané). Même le fait d’imager la surface pour vérifier un dépôt est susceptible d’abîmer l’échantillon. Enfin toutes ces opérations cumulées exigent de maintenir le STM à basse température sur des durées exceptionnellement longues en venant remplir une nuit sur deux le cryostat d’hélium liquide, week-end compris. Le cryostat du STM pèse plus de 200 kg, il a été démonté quatre à cinq fois par an à la clef de 13 et rempli plus de 90 fois avant la course. À cela s’ajoute l’attaque, paraît- il, du système de contrôle par des pirates informatiques en septembre 2016... Chercheurs et équipements n’avaient jamais été soumis à si rude épreuve sur une si longue durée. Pour rendre possible les dépôts successifs sans endommager les précé- dents, les Toulousains confectionnent un système de masques avec un mini ascenseur et un carrousel pour orienter ces masques dans la chambre en ultravide. Malgré ces précautions, certaines molécules s’évaporent, diffusent et se déplacent, ou changent de conformation lorsqu’ils en déposent une autre. Le nanomoulin, par exemple, perd fréquemment une de ses ailes...

32. Voir l’explication de ces procédés par David Martrou au CEMES : https://www.youtube.com/watch?v=YyqqKLC9W6Q&list=PLqt5l1ay4ZQ7-TdD_NjjDTnOVVR5cKEN6.

Une grande partie de la préparation a donc consisté à travailler sur les procédés de dépôt avec les autres équipes invitées à Toulouse afin de déterminer la bonne séquence d’évaporation à reproduire sans faille dans les deux semaines33 précédant le jour J. Deux semaines pendant lesquelles il a fallu négocier des repas pour six équipes et où le labo, affirme Joachim « ressemblait à la tour de contrôle de la mission Apollo 1334 ». Pendant deux ans, les chercheurs des autres équipes sont venus s’entraîner. L’entraînement [Eisenhut, Durand et al. 2016] consiste à apprendre à mani- puler le LT-UHV 4-STM, à utiliser son logiciel, à reconnaître sa molécule, à déblayer les autres molécules dispersées sur l’échantillon pour se tailler une piste, à piloter la molécule-machine en déterminant les bonnes valeurs de courant tunnel pour l’impulser sans la disloquer, à identifier les accidents et les possibles méthodes de réparation (comment par exemple, un pulse bien ciblé peut redonner à la molécule sa conformation fonctionnelle). Enfin, il a fallu préparer le dispositif médiatique, et transformer le labo en studio TV avec régie de communication internet. La course est en effet retransmise en direct sur les réseaux sociaux. Elle est ponctuée par des flashs, des interviews, des mini-documentaires et des points d’étapes. Un service de communication est mobilisé pour répondre en temps réel aux tweets et aux SMS. Mais le montage d’un dispositif médiatique posait une question cruciale : que montrer ? Impossible de montrer le microscope pendant la course. Il est installé dans une salle au sous-sol, à l’abri des vibrations, des ondes et des humains. Quant à montrer les « bolides » en action, cela supposerait des choix techniques et esthétiques qui, au final, n’ont pas été assumés par l’équipe de communication. Le règlement stipulait que chaque équipe devait fournir au moins une image STM entre trois et cinq minutes. Comme ce temps pouvait être utilisé pour avancer au lieu d’imager, cela revenait à ralentir la progression moyenne des véhicules. Après concertation, les équipes ont décidé que chacune choisirait sa propre stratégie pour produire des images : soit attendre plusieurs mouvements soit enregistrer une image à chaque déplacement de la molécule. Ce choix permettait de collecter un grand nombre de données expérimentales à analyser et de disposer d’assez images pour réaliser des films en stop motion35 montés au fur et à mesure par une équipe de postproduction située dans une autre salle. Il était initialement prévu de diffuser ces films toutes les heures et de les accompagner des signaux sonores de manipulation (« brzjscht, brzjschhht »). Mais à ce spectacle un peu martien la direction de l’équipe de communication a préféré mettre en scène les chercheurs, pour accentuer « l’aspect humain ». Or il faut bien dire que le spectacle de chercheurs penchés sur des écrans où défilent des images s’affichant ligne par ligne (un peu comme au temps

33. Incluant le temps nécessaire d’imagerie et de confection des pointes en tungstène (une nuit de cuisson sous vide et une après-midi en chambre de préparation). 34. Communication personnelle, 24 janvier 2018. 35. Voir par exemple https://www.youtube.com/watch?v=HoNPuu4iKw0. L’incroyable aventure des nanovoitures 87 du Minitel) ou en train de discuter tout en mangeant un sandwich peine à tenir les spectateurs en haleine. Au milieu de la nuit les chercheurs sont arrivés à « pirater » le dispositif médiatique et à convaincre les ingénieurs de retransmission de diffuser quelques sons de manipulation. Joachim avait aussi imaginé des animations 3D immersives réalisées à partir d’images STM expérimentales, et une start-up toulousaine était prête à tenter l’expérience, malheureusement trop chère pour la direction de la communication du CNRS. Au lieu de cela, une animation en images de synthèse est montrée périodiquement pour afficher les résultats temporaires de la course à chaque rapport d’étape36. Visuellement assez pauvre, elle figure la piste et ses six parcours au milieu d’une arène aux gradins vides, mais qui ressemble aussi à une roulette de casino... Sauf à suggérer que la course obéit avant tout à la chance, son rapport avec le dispositif expérimental est plus que lointain.

5 La course

La course a lieu au PicoLab du CEMES à Toulouse, les 28 et 29 avril 2017. Elle dure 38 heures (temps maximal d’autonomie des réserves d’hélium liquide servant à refroidir la piste). Les règles sont les suivantes :

1. Chaque molécule-voiture concourt sur sa propre piste. Les pistes doivent toutes mesurer 100 nanomètres de longueur et comporter deux tournants. Une fois le compte à rebours enclenché, chaque équipe doit déblayer sa piste, la cartographier, et positionner une nanovoiture sur la ligne start (ce qui prend plusieurs heures), avant le départ. 2. Les nanovoitures doivent être propulsées par pulses d’électrons inélas- tiques (électrons perdant de l’énergie qu’ils transfèrent à la molécule) en évitant tout contact avec la pointe du STM37. Il est donc interdit de les pousser mécaniquement (sauf au début pour les amener sur la ligne de départ). Le règlement ne stipule pas que le véhicule comporte des roues, certaines ont des ailes, d’autres des pattes... [Soe, Shirai et al. 2017] 3. Il est interdit de remplacer une pointe endommagée mais il est permis de la réparer en pratiquant une indentation dans la surface d’or. Mais si une équipe brise sa pointe en tungstène, elle sera de fait disqualifiée, car il est impossible de rouvrir à nouveau la chambre à vide du microscope pour en changer.

36. https://www.youtube.com/watch?v=3srR7rKFIvA. 37. Une « stratégie d’excitation » (d’activation) que Jean-Pierre Launay, arbitre de la course et ancien directeur du CEMES, compare à celle des auto-tamponneuses dans les fêtes foraines : « Vous avez une alimentation par le haut avec une perche, [...] un courant qui passe verticalement, et ceci doit se transformer en un mouvement horizontal dans une direction déterminée » https://www.youtube.com/watch?v=Q0TEKb2zx5Q&t=1669s, 31’14.

4. Il est permis de mettre de côté des nanovoitures pour en changer si l’une d’elle est endommagée ou désintégrée. L’équipe gagnante est celle dont la molécule-voiture passe la ligne d’arrivée la première ou se trouve la plus avancée sur sa piste à la fin du temps imparti. La première règle a dû être modifiée. Premièrement, l’équipe américano- autrichienne de Tour et Grill a concouru sur une surface d’argent au lieu d’une surface d’or, car leur molécule comporte des roues carboranes dont l’adhérence à l’or est si faible que la mobilité de la voiture empêche de l’imager. En échange, ils ont dû accepter une pénalité, décidée par le comité de la course : parcourir une distance totale de 150 nm au lieu de 100 pour les autres. Deuxièmement, les chercheurs n’ont pas réussi à fabriquer des pistes de configuration strictement équivalente sur la surface d’or38, le cristal ayant imposé à certaines équipes des trajets plus longs, plus compliqués, avec plus de tournants. La piste des Suisses, en particulier, ressemble à un véritable slalom. Il a donc été décidé que le classement final se ferait en fonction du pourcentage de la distance parcourue. 1h15 après le départ de la course, l’équipe japonaise a eu un crash logiciel. Le pilote ayant chargé trop d’instructions de manipulations dans la mémoire cache du programme, son ordinateur a cessé de communiquer avec le microscope et il a fallu une heure pour redémarrer le système. Tous devaient rétracter leur pointe mais le pilote allemand l’a fait descendre et l’a crashée sur la surface ; le pilote français a perdu son véhicule. Ses molécules étant très dispersées sur la surface (une par micron carré), il a dû rechercher un autre véhicule sur une autre zone pour reprendre – en vain – la course. L’équipe japonaise a passé la nuit à préparer et à positionner à nouveau quelques-unes de ses molécules-chenilles à pattes de naphtalène sur la ligne de départ. Elle a causé un nouveau crash logiciel le matin du 29 avril et a décidé de mettre fin à ses tentatives pour ne pas bloquer toute la course. Elle a parcouru une distance totale de 1 nanomètre et il lui a été attribué le prix du Fair Play. L’équipe française concourt avec le Green Buggy. Quand la molécule a disparu suite au crash logiciel, les chercheurs sont allés en chercher une autre. En cherchant à l’amener sur la ligne de départ ils ont endommagé leur pointe, ce qui eut comme conséquence pour les pilotes de « voir double », comme s’ils conduisaient en état d’ébriété39. Ils ont parcouru une distance nulle, mais ont pu réaliser de « belles images », c’est-à-dire des images intéressantes (permettant de mieux comprendre la contribution de chaque état quantique de la molécule à l’entrée d’énergie inélastique). Ils se sont donc attribué (avec l’accord des autres équipes) le prix de la plus Belle Nanovoiture. L’équipe allemande a brisé son moulin à vent supramoléculaire à deux reprises. La troisième fois, le nanomoulin est resté bloqué contre le rebord

38. J.-P. Launay le précise 15 min après le départ de la course https://www. youtube.com/watch?v=Q0TEKb2zx5Q&t=1669s, 29’43. 39. Quand une pointe STM se fendille, elle présente deux extrémités monoato- miques et produit une double image. L’incroyable aventure des nanovoitures 89 d’un pas atomique de la surface d’or ; il a parcouru 11 nanomètres mais n’a pas pu terminer la course : pas moyen de le décoincer. Mais les chercheurs ont essayé jusqu’au bout, aussi leur a-t-on attribué le prix de la Persévérance. L’équipe de l’Ohio a d’abord réussi à parcourir 20 nanomètres avec son Bobcat Nano-Wagon sous le STM situé de l’autre côté de l’Atlantique. Puis, au milieu de la nuit, le véhicule a refait 20 nanomètres... mais en sens inverse – personne ne sait pourquoi ! Les chercheurs sont donc sortis de la course avec une question de recherche : pourquoi le nanowagon s’est-il soudainement mis en marche arrière sans changement dans la « stratégie d’excitation » ? Y aurait-il un boîtier de vitesse caché dans la molécule ? Quoi qu’il en soit, leur véhicule, un imposant 4-rotaxane, était la molécule la plus grosse de la course : C252H240N120O56. Aussi leur a-t-on décerné le prix Texan. La seule équipe dont le véhicule a franchi la ligne d’arrivée sur la surface d’or de Toulouse est l’équipe suisse, de l’université de Bâle, avec son NanoDragster, après 6,5 heures de course pour une vitesse moyenne de 20 nm/h. De forme phalloïde, c’est la plus petite molécule de la course (C22H17N3). Elle servait initialement de composant photo-sensibilisateur dans un projet d’énergie solaire et a été détournée pour la course [Pawlak & Meier 2017]. Selon Rémy Pawlak, son pilote, son mouvement « est en principe quasiment sans friction » ou « superlubrique », grâce à l’interaction faible entre la structure carbonée de la molécule et la surface de la piste40. Autre avantage, la stratégie d’excitation qui préside à son mode de pilotage est relativement claire : la molécule inclut une terpyridine à la base de sa queue, comportant 3 cycles aromatiques avec chacun un atome d’azote (N) : un devant, un à gauche, un à droite. Selon la partie excitée par le STM, la molécule se déplace dans des directions différentes [Pawlak, Meier et al. 2017]. Cela lui a permis de négocier adroitement les multiples virages de sa piste, qui était la plus sinueuse. Enfin, selon les membres de l’équipe, ce véhicule superlubrique n’ayant pas de roues, il peut être considéré comme un aéroglisseur et non comme une voiture. Enfin la Dipolar Racer de l’équipe américano-autrichienne de Tour et Grill a explosé le chronomètre sur sa piste d’argent à Graz. La formule de leur molécule n’a été dévoilée que pendant la course : deux roues carboranes reliées par un essieu comportant en son centre un dipôle magnétique jouant le rôle du moteur en interagissant avec le courant tunnel. Le véhicule, qui s’apparente donc plutôt à un segway41, a négocié ses deux tournants avec succès et franchi la ligne d’arrivée sans incident notable en 1h30 seulement à une vitesse moyenne de 90 nm/h. Comme elle avait terminé peu après le début de la course, l’équipe a continué à conduire sur plus d’un micron – au-delà du nanomètre !

40. R. Pawlak est spécialiste de la manipulation des rubans de graphène glissant sur des surfaces par « friction évanescente » [vanishing friction], un phénomène appelé « superlubricité » [Kawai, Benassi et al. 2016], [Nordmann 2017].
41. J. Tour, https://www.youtube.com/watch?v=Ghq8f_88MlM&t=22s, 17'14.

Les équipes suisse et américano-autrichiennes ont été classées premières ex-aequo.

6 Technophanies

Au terme de ce récit, comment caractériser la course de nanovoitures ? J'avais initialement pensé la décrire comme un jeu hybride, mi-matériel, mi-virtuel, empruntant tout autant au jeu vidéo à la MarioKart qu'au jeu sérieux [edugame ou serious game]. Son contexte – un laboratoire de recherche – pouvait la rapprocher des jeux universitaires, ces obscurs précurseurs des jeux d'ordinateur avant leur diffusion comme produits culturels de masse [Triclot 2012]. Ces comparaisons étaient encouragées par le constat que les technosciences s'épanouissent souvent dans un régime ouvertement ludique plutôt que théorique ou applicatif [Bensaude-Vincent 2016, 92–97], [Milburn 2011, 2017], et par l'aspect plaisant voire « gamin » du thème des nanovoitures.

Or, la comparaison avec le jeu vidéo ne résiste pas à l'examen. Entre l'action et la visualisation du résultat il faut au moins trois minutes, bien trop pour provoquer un plaisir de type vidéoludique. L'expérience de pilotage des molécules-voitures est donc très éloignée du plaisir sensori-moteur et de la qualité immersive que procurent les jeux vidéo actuels. Quant aux serious games, la course de nanovoitures en diffère en ce qu'elle n'éduque pas à une pratique préexistante qu'elle simule, mais vise à la faire émerger. Enfin, dans le cas des anciens jeux universitaires, le laboratoire était un espace fermé et protecteur. Ici, la course a une dimension immédiatement et directement publique. Elle transforme temporairement le laboratoire en antenne médiatique.

Même avec le jeu en général, la comparaison ne fait pas sens pour les chercheurs, tant la préparation de la course fut longue, compliquée, pénible même (deux ans sur le pont pratiquement nuit et jour). Même si jouer peut s'avérer – surtout pour les enfants – une activité très sérieuse, le prisme ludique ne rend pas justice aux efforts et aux valeurs que les chercheurs attachent à l'expérience. Enfin l'expérience, contrairement au jeu, est ici unique et ne peut être rejouée à l'identique.

Car il s'agit bien d'une expérience, même si elle est peu « classique » : il ne s'agit pas de tester une théorie, une application, ni même un instrument. Il s'agit de comparer directement plusieurs modèles de molécules-machines dans une même situation expérimentale afin d'espérer dépasser leur singularité pour définir des normes de fonctionnement communes, voire des standards techniques42. En cela, l'expérience a une fonction analogue à celle des halls de machines dans les premières expositions universelles avant la mise en place de standards industriels dans les années 1870-1880.

Sa dimension publique et médiatique rappelle aussi les lâchers d'aérostats du siècle des Lumières [Thébaud-Sorger 2009] ou, plus proche de nous, les lancements spatiaux. Ces expériences ont toutes une dimension technique mise au premier plan et même amplifiée par le dispositif médiatique. Mais elles n'excluent pas pour autant la production de connaissances.

Dans le cas de l'édition 2017 de la NanoCar Race s'affrontaient divers concepts technologiques de ce qui constitue le bon schème de fonctionnement d'une molécule-machine : la « vision classique », où le mouvement est créé par la rotation ou le basculement d'un groupement chimique et la « vision quantique », où le mouvement est induit par les transitions entre différents états électroniques (fondamental ou excités) de la molécule avec une certaine probabilité. Outre le fait que ce sont les modèles les moins « mécanistes » et les plus « quantiques » qui ont remporté la course, la question reste de savoir pourquoi ces objets quantiques se comportent comme des objets quasi-classiques lorsqu'ils sont déposés sur une surface. Si la réponse passe par la théorie de la décohérence, celle-ci n'est pas bien comprise à l'échelle des molécules individuelles. Pour cela, les chercheurs attendent beaucoup de la comparaison du grand nombre d'images de différentes molécules produites pour la première fois dans les mêmes conditions expérimentales. Il s'agit en particulier de comprendre pourquoi l'image en microscopie tunnel garantit la reconstruction des orbitales moléculaires et non un mélange d'états électroniques superposés43.

Enfin par le mode de propulsion choisi – l'effet inélastique (non la mécanique, les interactions répulsives ou la lumière) – l'édition 2017 se focalise avant tout sur l'analyse des phénomènes inélastiques, qui concernent quelque 0,01 % des électrons qui passent au travers de la molécule par effet tunnel et lui transfèrent un peu d'énergie. Les molécules-machines sont conçues pour renforcer cet effet dont on ne comprend pas comment – sur le papier – il propulse la molécule. En faisant tourner plusieurs moteurs quantiques de ce type, on cherche à comprendre comment ils peuvent sélectionner des états quantiques pour tourner dans un seul sens, et comment on peut mesurer et calculer leur puissance motrice. On cherche à leur faire manifester leur fonctionnement.

La qualité et la quantité des images produites ont donc un rôle capital tant pour la compréhension et la contrôlabilité des molécules-voitures que pour la popularisation de ces objets et des pratiques qui leur sont associées. C'est ce qui en fait une expérience et justifie le label de technophanie – un concept proposé par Gilbert Simondon dans Psychosociologie de la technicité [Simondon 1960]. Par les technophanies, écrit Simondon, « l'objet technique reconquiert une place dans une culture qui l'ostracise : l'objet rentre à nouveau dans la citadelle de la culture par le biais d'une ritualisation, riche en images et en symboles » [Simondon 1960, 39].

42. Voir les spécifications techniques se voulant génériques proposées à la communauté des nanomachinistes après la course par l'équipe de Tour [Simpson, García-López et al. 2017].

Ainsi la course peut-elle être vue comme une manière de rebrancher le nanomonde des objets sur la culture populaire, de relier les petites échelles et le grand public. Dans une technophanie, l'objet technique n'est pas partout visible mais il est « le nœud de l'action », comme la fusée dans Objectif Lune, l'album de Tintin que Simondon donne en exemple. Outre les planches de l'Encyclopédie, le cabinet de physique des Lumières, les lancements de fusées, de satellites, voire de bombes atomiques (car les technophanies n'ont rien d'innocent), Simondon donne en exemple d'objet technophanique le jouet, qui est bien autre chose qu'un modèle réduit :

le jouet représentant une locomotive n'est pas seulement l'objet locomotive, mais l'image et le symbole d'une catégorie entière d'êtres techniques susceptibles de développement. Le jouet est archétypal, il contient une image. [Simondon 1960, 41]

Simondon définit les technophanies comme des « ritualisations médiatrices entre la culture et la technicité » qui « ont pouvoir d'instituer l'échange dans les deux sens » [Simondon 1960, 43]. Cette notion rappelle que l'intégration d'une technologie dans la culture ne se fait jamais sans récits ni sans rites.

Les technophanies comportent une tension particulière qui leur donne valeur de culture, et qui n'est pas sans une certaine parenté avec le climat épique. [Simondon 1960, 42]

Au départ, les NST ont alimenté une bulle spéculative de grands discours, amplifiée par la profusion des discussions d'éthique focalisées sur le transhumanisme et l'homme augmenté. Les nanos ont agité un temps l'imaginaire populaire et inspiré de grands récits, puis le feu de paille s'est éteint, restant à peu près sans effet sur la dynamique de développement des nano-objets dans les laboratoires. Les chercheurs se sont sentis peu concernés, certains allant même jusqu'à rejeter le label « nano » jugé galvaudé44.

La course de nanovoitures permet aux chercheurs d'aborder d'un même geste des questions de recherche qu'ils ne pourraient pas poser autrement et de constituer un public par le seul fait de montrer leur activité. Même si le dispositif n'a rien en soi de démocratique ou de participatif (les questions viennent des chercheurs et non du public), il rompt avec la séquence traditionnelle production puis communication du savoir. Les chercheurs mettent en place une expérience leur permettant d'exposer publiquement le travail passionné en rendant visibles leurs valeurs et leurs efforts.

Mais à la différence d'une expérience classique, l'expérience technophanique doit impérativement réussir, car c'est à travers elle que le halo social d'une technologie se construit ou s'éteint.

43. « Pourquoi le processus de mesure tunnel fouille dans la structure électronique d'une molécule pour aller y chercher les orbitales moléculaires qui ne sont qu'une manière parmi d'autres d'expliquer la structure électronique d'une molécule ? » C. Joachim, communication personnelle, 20 janvier 2018.

44. C'est le cas de Joachim qui se démarque désormais du « nano » mainstream en mettant en avant le « pico » !

Un échec du geste technique – la fusée qui retombe près de sa base ou qui échappe au contrôle – crée un effet collectif aussi gênant que lorsque, chez les Romains, les poulets sacrés ne voulaient pas manger ou lorsque le taureau sacrifié s'enfuyait de l'autel en emportant, dans une horrible blessure, la hache du sacrificateur. [Simondon 1960, 119]

Comme l'alunissage d'Apollo 11 retransmis en Mondovision, la NanoCar Race retransmise sur YouTube avait toutes les chances de rater. Et le risque augmentait l'intensité dramatique. Puisse donc la prochaine édition de la course de nanovoitures avoir lieu, et ses concepteurs et pilotes rester de grands enfants !

Remerciements

À Christian Joachim, pour sa patience et sa passion.

Bibliographie

Alemani, Micol, Gross, Leo et al. [2005], Recording the intramolecular deformation of a 4-legs molecule during its STM manipulation on a Cu (211) surface, Chemical Physics Letters, 402(1), 180–185, doi : 10.1016/j.cplett.2004.12.026.

Anelli, Pier Lucio, Spencer, Neil et al. [1991], A molecular shuttle, Journal of the American Chemical Society, 113(13), 5131–5133, doi : 10.1021/ja00013a096.

Aono, Masakazu, Hasegawa, Tsuyoshi et al. [2006], Point contact array, not circuit, and electronic circuit comprising the same, Patent no US 7473982B2.

Bensaude-Vincent, Bernadette [2016], The moral economy of synthetic biology, dans : Synthetic Biology : Metaphors, Worldviews, Ethics, and Law, edité par J. Boldt, Wiesbaden : Springer Fachmedien Wiesbaden, 87–100, doi : 10.1007/978-3-658-10988-2_7.

Bensaude-Vincent, Bernadette & Loeve, Sacha [2018], Toward a philosophy of technosciences, dans : French Philosophy of Technology : Classical Readings and Contemporary Approaches, edité par S. Loeve, X. Guchet & B. Bensaude-Vincent, Cham : Springer International Publishing, 169–186, doi : 10.1007/978-3-319-89518-5_11.

Bissell, Richard A., Córdova, Emilio et al. [1994], A chemically and electrochemically switchable molecular shuttle, Nature, 369, 133–137, doi : 10.1038/369133a0.

Bouju, Xavier, Girard, Christian et al. [1997], van der Waals atomic trap in a scanning-tunneling-microscope junction : Tip shape, dynamical effects, and tunnel current signatures, Physical Review B, 55, 16 498–16 498, doi : 10.1103/PhysRevB.55.16498.

Carnino, Guillaume & Hilaire-Pérez, Liliane [2017], Qu'est-ce que la technologie ? Jalons pour l'histoire longue d'un concept oublié, dans : La Technologie générale, Johann Beckmann, Entwurf der allgemeinen Technologie / Projet de technologie générale (1806), edité par L. Hilaire-Pérez, G. Carnino & J. Hoock, Rennes : Presses universitaires de Rennes, 13–36.

Chiang, Pinn-Tsong, Mielke, Johannes et al. [2012], Toward a light-driven motorized nanocar : Synthesis and initial imaging of single molecules, ACS Nano, 6(1), 592–597, doi : 10.1021/nn203969b.

Chiaravalloti, Franco, Gross, Leo et al. [2006], A rack-and-pinion device at the molecular scale, Nature Materials, 6, 30–33, doi : 10.1038/nmat1802.

Coutellec, Léo [2015], La Science au pluriel. Essai d’épistémologie pour des sciences impliquées, Versailles : Editions Quae.

Deterding, Sebastian, Khaled, Rilla et al. [2011], Gamification : Toward a definition, dans : CHI 2011 Gamification Workshop Proceedings, Vancouver, BC, Canada.

Eisenhut, Frank, Durand, Corentin et al. [2016], Training for the 1st international nano-car race : the Dresden molecule-vehicle, The European Physical Journal. Applied Physics, 76(1), 10 001, doi : 10.1051/epjap/2016160259.

Feynman, R. P. [1984], Quantum-mechanical computers (A), Journal of the Optical Society of America B Optical Physics, 1, 464.

Feynman, Richard P. [1959], There's plenty of room at the bottom, dans : Miniaturization, edité par H. D. Gilbert, New York : Reinhold Publishing Co, 282–296, http://www.zyvex.com/nanotech/feynman.html.

—— [1960], How to build an automobile smaller than this dot, Popular Science Monthly, nov., 114–116, 230–232.

Gimzewski, Jim K., Joachim, Christian et al. [1998], Rotation of a single molecule within a supramolecular bearing, Science, 281(5376), 531–533, doi : 10.1126/science.281.5376.531.

Godoy, Jazmin, Vives, Guillaume et al. [2011], Toward chemical propulsion : Synthesis of ROMP-propelled nanocars, ACS Nano, 5(1), 85–90, doi : 10.1021/nn102775q.

Grill, Leonhard, Rieder, Karl-Heinz et al. [2005], Imaging of a molecular wheelbarrow by scanning tunneling microscopy, Surface Science, 584(2), L153–L158, doi : 10.1016/j.susc.2005.03.062.

Grill, Leonhard, Rieder, Karl-Heinz et al. [2007], Rolling a single molecular wheel at the atomic scale, Nature Nanotechnology, 2, 95–98, doi : 10.1038/nnano.2006.210.

Heath, James R., Kuekes, Philip J. et al. [2000], Molecular wire crossbar memory, Patent no US 09280189.

Jimenez-Bueno, Gorka & Rapenne, Gwénaël [2003], Technomimetic molecules : synthesis of a molecular wheelbarrow, Tetrahedron Letters, 44(33), 6261–6263, doi : 10.1016/S0040-4039(03)01519-3.

Joachim, Christian [2006], The driving power of the quantum superposition principle for molecule-machines, Journal of Physics : Condensed Matter, 18(33), 1935–1942, doi : 10.1088/0953-8984/18/33/S11.

—— [2008], Sous le microscope du philosophe, dans : Bionano-éthique. Perspectives critiques sur les bionanotechnologies, edité par B. Bensaude-Vincent, R. Larrère & V. Nurock, Paris : Vuibert, 73–78.

Joachim, Christian, Launay, Jean-Pierre et al. [2016], La nanocar Race, première course internationale de molécule-voitures, L’Actualité chimique, 411, I–XX.

Joachim, Christian & Rapenne, Gwénaël [2013], Molecule concept nanocars : Chassis, wheels, and motors ?, ACS Nano, 7(1), 11–14, doi : 10.1021/nn3058246.

Joachim, Christian, Tang, Hao et al. [2002], The design of a nanoscale molecular barrow, Nanotechnology, 13(3), 330–335, http://stacks.iop.org/0957-4484/13/i=3/a=318.

Jung, Thomas A., Schlittler, Reto R. et al. [1996], Controlled room- temperature positioning of individual molecules : Molecular flexure and motion, Science, 271(5246), 181–184, doi : 10.1126/science.271.5246.181.

Kawai, Shigeki, Benassi, Andrea et al. [2016], Superlubricity of graphene nanoribbons on gold surfaces, Science, 351(6276), 957–961, doi : 10.1126/science.aad3569.

Khatua, Saumyakanti, Godoy, Jazmin et al. [2010], Influence of the substrate on the mobility of individual nanocars, The Journal of Physical Chemistry Letters, 1(22), 3288–3291, doi : 10.1021/jz101375q.

Koumura, Nagatoshi, Zijlstra, Robert W. J. et al. [1999], Light-driven monodirectional molecular rotor, Nature, 401, 152–155, doi : 10.1038/43646.

Langlais, Véronique J., Schlittler, Reto R. et al. [1999], Spatially resolved tunneling along a molecular wire, Physical Review Letters, 83, 2809–2812, doi : 10.1103/PhysRevLett.83.2809.

Livoreil, Aude, Dietrich-Buchecker, Christiane O. et al. [1994], Electrochemically triggered swinging of a [2]-catenate, Journal of the American Chemical Society, 116(20), 9399–9400, doi : 10.1021/ja00099a095.

Loeve, Sacha [2008], Autour d'une définition des nanos, dans : Bionano-éthique. Perspectives critiques sur les bionanotechnologies, edité par B. Bensaude-Vincent, R. Larrère & V. Nurock, Paris : Vuibert, Machinations, 17–33.

—— [2009], Le Concept de technologie à l’échelle des molécules-machines. Philosophie des techniques à l’usage des citoyens du nanomonde, Thèse de doctorat, Université Paris Nanterre.

—— [2011], « Ceci n’est pas une brouette ». Grands et petits récits des nanotechnologies, dans : Humains, non-humains. Comment repeupler les sciences sociales, edité par S. Houdart & O. Thierry, Paris : La Découverte, 208–220.

—— [2016], Pour les technosciences, Implications Philosophiques, décembre, en ligne, www.implications-philosophiques.org/actualite/une/pour-les-technosciences/, dossier thématique « Technique et technoscience après Simondon » dirigé par J.-H. Barthélémy.

—— [2017], Point and line to plane : The ontography of carbon nanomaterials, dans : From Bench to Brand and Back : The Co-Shaping of Materials and Chemists in the Twentieth Century, edité par P. Teissier, C. C. M. Mody & B. van Tiggelen, Cahiers François Viète, t. III, 183–216.

Marcovich, Anne & Shinn, Terry [2011], Instrument research, tools, and the knowledge enterprise 1999-2009 : Birth and development of dip-pen nanolithography, Science, Technology, & Human Values, 36(6), 864–896, doi : 10.1177/0162243910385406.

Michalewicz, Marek T. [1997a], Nano-cars : Feynman's dream fulfilled or the ultimate challenge to automotive industry ? Communication, dans : Fifth Foresight Institute Conference on Molecular Nanotechnology, Palo Alto, 5–8.

—— [1997b], Nano-cars : The enabling technology for building buckyball pyramids, publication abstract, dans : Fifth Foresight Institute Conference on Molecular Nanotechnology, Palo Alto, 1–4.

—— [1998], Nano-cars and buckyball pyramids, Annals of Improbable Research, 4, 18–19.

Milburn, Colin [2011], Just for fun : The playful image of nanotechnology, NanoEthics, 5(2), 223, doi : 10.1007/s11569-011-0120-4.

—— [2017], Long live play : The playstation network and technogenic life, dans : Research Objects in their Technological Setting, edité par B. Bensaude-Vincent, S. Loeve, A. Nordmann & A. Schwarz, Abingdon : Routledge, History and Philosophy of Technoscience, 105–122.

Moresco, Francesca, Meyer, Gerhard et al. [2001], Recording intramolecular mechanics during the manipulation of a large molecule, Physical Review Letters, 87, 088 302, doi : 10.1103/PhysRevLett.87.088302.

Morin, Jean-François, Shirai, Yasuhiro et al. [2006], En route to a motorized nanocar, Organic Letters, 8(8), 1713–1716, doi : 10.1021/ol060445d.

Nickel, Anja, Ohmann, Robin et al. [2013], Moving nanostructures : Pulse- induced positioning of supramolecular assemblies, ACS Nano, 7(1), 191–197, doi : 10.1021/nn303708h.

Nordmann, Alfred [2017], Vanishing friction events and the inverted platonism of technoscience, dans : Research Objects in their Technological Setting, edité par B. Bensaude-Vincent, S. Loeve, A. Nordmann & A. Schwarz, Abingdon : Routledge, History and Philosophy of Technoscience, 56–69.

Pawlak, Rémy & Meier, Tobias [2017], Fast and curious, Nature Nanotechnology, 12, 712, doi : 10.1038/nnano.2017.141.

Pawlak, Rémy, Meier, Tobias et al. [2017], Design and characterization of an electrically powered single molecule on gold, ACS Nano, 11(10), 9930–9940, doi : 10.1021/acsnano.7b03955.

Perera, Uduwanage Gayani E., Ample, Francisco et al. [2012], Controlled clockwise and anticlockwise rotational switching of a molecular motor, Nature Nanotechnology, 8, 46–51, doi : 10.1038/nnano.2012.218.

Rapenne, Gwénaël, Grill, Leonhard et al. [2006], Launching and landing single molecular wheelbarrows on a Cu (100) surface, Chemical Physics Letters, 431(1), 219–222, doi : 10.1016/j.cplett.2006.09.080.

Reinerth, William A., Jones, II, LeRoy et al. [1998], Molecular scale electronics : syntheses and testing, Nanotechnology, 9(3), 246, doi : 10.1088/0957-4484/9/3/016.

Sasaki, Takashi, Morin, Jean-François et al. [2007], Synthesis of a single-molecule nanotruck, Tetrahedron Letters, 48(33), 5817–5820, doi : 10.1016/j.tetlet.2007.06.081.

Schummer, Joachim [2006], Gestalt switch in molecular image perception : The aesthetic origin of molecular nanotechnology in supramolecular chemistry, Foundations of Chemistry, 8(1), 53–72, doi : 10.1007/s10698-005-3358-5.

Shirai, Yasuhiro, Morin, Jean-François et al. [2006], Recent progress on nanovehicles, Chemical Society Reviews, 35, 1043–1055, doi : 10.1039/B514700J.

Shirai, Yasuhiro, Osgood, Andrew J. et al. [2005], Directional control in thermally driven single-molecule nanocars, Nano Letters, 5(11), 2330–2334, doi : 10.1021/nl051915k.

Simondon, Gilbert [1958], Du mode d'existence des objets techniques, Paris : Aubier, 2012.

—— [1960], Psychosociologie de la technicité, Paris : Presses universitaires de France, republié dans Sur la Technique (1953-1983), 2014, 27–129.

Simpson, Grant J., García-López, Víctor et al. [2017], How to build and race a fast nanocar, Nature Nanotechnology, 12, 604–606, doi : 10.1038/nnano.2017.137.

Soe, We-Hyo, Shirai, Yasuhiro et al. [2017], Conformation manipulation and motion of a double paddle molecule on an Au (111) surface, ACS Nano, 11(10), 10 357–10 365, doi : 10.1021/acsnano.7b05314.

Thébaud-Sorger, Marie [2009], L'Aérostation au temps des Lumières, Rennes : Presses universitaires de Rennes.

Tour, James M. [2016], Animadversions of a synthetic chemist, Inference, 2(2), online, http://inference-review.com/article/animadversions-of-a-synthetic-chemist.

—— [2017], An open letter to my colleagues, Inference, 2(3), online, http://inference-review.com/article/an-open-letter-to-my-colleagues.

Triclot, Mathieu [2012], Jouer au laboratoire. Le jeu vidéo à l'université (1962-1979), Réseaux, 173–174(3), 177–205, doi : 10.3917/res.173.0177.

Villagómez, Carlos J., Sasaki, Takashi et al. [2010], Bottom-up assembly of molecular wagons on a surface, Journal of the American Chemical Society, 132(47), 16 848–16 854, doi : 10.1021/ja105542j.

Vives, Guillaume, Guerrero, Jason M. et al. [2010], Synthesis of fluorescent dye-tagged nanomachines for single-molecule fluorescence spectroscopy, The Journal of Organic Chemistry, 75(19), 6631–6643, doi : 10.1021/jo101468u.

Vives, Guillaume, Kang, JungHo et al. [2009], Molecular machinery : Synthesis of a “Nanodragster”, Organic Letters, 11(24), 5602–5605, doi : 10.1021/ol902312m.

NanoTechnoScience for Philosophers of Science

Alfred Nordmann Technische Universität Darmstadt, Institut für Philosophie ()

Résumé : Au cours des dernières décennies, la philosophie des sciences est devenue la philosophie des sciences particulières. En conséquence, certaines des questions majeures de la philosophie des sciences sont adressées à la recherche en nanosciences et technologie. C'est le cas notamment des questions concernant la nature de la connaissance, le rôle de la théorie, les pratiques expérimentales et observationnelles. Ces questions habituelles suscitent des réponses inattendues, suggérant que la nanotechnoscience est un exemple non pas d'une science mais d'une technoscience.

Abstract: In recent decades, the philosophy of science has become philosophy of the special sciences. Accordingly, some of the main questions of the philosophy of science are brought to nanoscale research. These are questions regarding the nature of knowledge, the role of theory, accounts of experimental and observational practice, and others. As it turns out, however, the familiar questions elicit unfamiliar answers, suggesting that nanotechnoscience is an instance not of science but of technoscience.

“Nanotechnoscience” is more than shorthand for “nanoscience and nanotechnologies” but signifies a mode of research that is neither science nor engineering1. Peter Galison, for example, speaks of an “engineering way of being within the sciences” when he characterizes nanotechnological research:

A focus on novel effects, materials, and objects, but constructed through an engineering way of being that values the making and linking of structures with little regard for the older fascination with existence for its own sake. [Galison 2017, 25]

Philosophia Scientiæ, 23(1), 2019, 99–119.

1. This text integrates, revisits, updates, and revises two earlier publications: “Philosophy of Nanotechnoscience” [Nordmann 2008], and “Philosophy of Technoscience in the Regime of Vigilance” [Nordmann 2010].

Along such lines, science and technoscience can be distinguished as follows: Science uses experiments, measurements, research technologies in order to produce representations and forge an agreement between theory and reality, mind and world. Technoscience uses theories, models, and scientific knowledge in order to achieve predictive and technical control and make things work [Bensaude-Vincent, Loeve et al. 2017, 3–5]. As for the philosophy of science and the philosophy of technoscience, they ask many of the same questions but answer them differently. This will now be shown in a cursory way for the philosophy of nanotechnoscience and its seven unfamiliar responses to seven familiar questions:

1. What is the role of theory and theory-development in nanoscale research, and what kinds of theories are needed for nanotechnological development?
2. What are the preferred modes of reasoning, methods and associated tools in nanoscientific research?
3. What is the domain of objects that are of interest to nanotechnoscience?
4. What kind of knowledge do nanotechnoscientific researchers typically produce and communicate?
5. What kind of basic science could provide a foundation for nanotechnoscience?
6. What sorts of experiments are performed in nanotechnological research, what is its mode of experimentation?
7. What is observation, how do nanotechnology researchers observe their experiments and see with and through their instruments?

All seven questions are of primarily epistemological and methodological significance—they tell us what NanoTechnoScience is in terms of presuppositions and aspirations, epistemic values and practices. And yet, the answers to these questions have societal dimensions and implicate ethical or political values. Here, these will only be hinted at by way of a closing remark at the end of each section.2

1 Closed theories

In the late 1940s, physicist Werner Heisenberg introduced the notion of “closed theories” [Heisenberg 1974]. In particular, he referred to four closed theories:

Newtonian mechanics, Maxwell's theory with the special theory of relativity, thermodynamics and statistical mechanics, non-relativistic quantum-mechanics with atomic physics and chemistry.

2. While there may be such a thing as pure or value-free science, there is no pure technoscience. For a more extensive consideration of the intersection between epistemology and ethics, broadly conceived, see the publications cited in the previous note.

These theories he considered to be closed in four complementary respects:

1. Their historical development has come to an end, they are finished or reached their final form.
2. They constitute a hermetically closed domain in that the theory defines conditions of applicability such that the theory will be true wherever its concepts can be applied.
3. They are immune to criticism; problems that arise in contexts of application are deflected to auxiliary theories and hypotheses or to the specifics of the set-up, the instrumentation, etc.
4. They are forever valid: wherever and whenever experience can be described with the concepts of such a theory, the laws postulated by this theory will be proven correct [Heisenberg 1974].

All this holds for nanotechnoscience: It draws on an available repertoire of theories that are closed in regard to the nanoscale, and it is concerned neither with the critique or further elaboration of these theories, nor with the construction of theories of its own.3

This is not to say, however, that closed theories are simply “applied” in nanotechnoscience. When Heisenberg refers to the hermetically closed domain of closed theories, he states merely that the theory will be true where its concepts can be applied and leaves quite open the domain of its actual applicability. Indeed, he suggests that this domain is so small that a “closed theory does not contain any absolutely certain statement about the world of experience” [Heisenberg 1974]. Even for a closed theory, then, it remains to be determined how and to what extent its concepts can be applied to the world of experience. Thus, there is no preexisting domain of phenomena to which a closed theory is applied. Instead, it is a question of success, that is, of calibration, tuning, or mutual adjustment to what extent phenomena of experience can be assimilated into the domain of the theory such that its concepts can be applied to them. This notion of “application” has been the topic of recent discussions—but it does not capture the case of nanotechnoscience. Here, researchers are not trying to bring nanoscale phenomena into the domain of quantum chemistry or fluid dynamics or the like.

3. There is more to be said about the lack of theory development in and for nanotechnoscience. One counterexample might be the discovery and subsequent theoretical work on the giant magnetoresistance effect [Wilholt 2006]. Also, there have been calls for the development of theory specifically suited to the complexities of the nanocosm [Roukes 2001], [Eberhart 2002]. These voices remain isolated, however. Also, if there were theories of and for nanotechnology, would they describe an underlying structure and reality, or would they anticipate a world that can arise only under conditions of perfect technical control at the vanishing point of engineering [Nordmann 2017]?

They are not using models to extend the domain of application of a closed theory or general law. They are not engaged in fitting the theory to reality and vice versa. Instead, they take nanoscale phenomena as parts of a highly complex mesocosm between classical and quantum regimes. They have no theories that are especially suited to accounting for this complexity, no theories, for example, of structure-property relations at the nanoscale.4

Nanoscale researchers understand, in particular, that the various closed theories have been formulated for far better-behaved phenomena in far more easily controlled laboratory settings. Rather than claim that the complex phenomena of the nanoscale can be described in such a way that the concepts of the closed theory now apply to them, they draw on closed theories eclectically, stretching them beyond their intended scope of application to do some partial explanatory work at the nanoscale.5 A particular measure of the flow of current through an organic-inorganic molecular complex might be reconstructed quantum-chemically or in the classical terms of electrical engineering—and yet, the two accounts do not compete against each other for offering a better or best explanation [Nordmann 2004]. Armed with theories that are closed relative to the nanoscale, researchers are well equipped to handle phenomena in need of explanation, but they are also aware that they bring crude instruments to bear that are not made specifically for the task. Accordingly, the business of explanation becomes transformed into one of re-construction. Rather than offering a theoretical description that satisfies the human intellect, there is a demonstration that bits and pieces of theory can be joined together or modularized so as to yield the observed event in a computer model. If the language of “fitting” derives from a representational idiom of making a picture that fits the situation and vice versa, this practice of “stretching” emphasizes that theories provide modular components for the re-construction of a process in silico. Indeed, nanoscale research is characterized by a tacit consensus according to which the following three propositions hold true simultaneously:

1. There is a fundamental difference between quantum and classical regimes such that classical theories cannot describe quantum phenomena and such that quantum theories are inappropriate for describing classical phenomena.
2. The nanoscale holds intellectual and technical interest because it is an “exotic territory” [Roukes 2001], where classical properties like color and conductivity emerge when one moves up from quantum levels, and where phenomena like quantized conductance emerge as one moves down to the quantum regime.
3. Nanoscale researchers can eclectically draw upon a large toolkit of theories from the quantum and classical regimes to re-construct—by stretching rather than fitting—novel properties, behaviors, or processes.

4. The term “complexity” is used here in a deliberately non-technical manner. The complexity at the nanoscale is one of great messiness, of too many relevant variables and properties, and multiple complicated interactions.
5. There is another dimension to this: In the standard case of fitting theory to reality and vice versa, the problem concerns ways to compensate for the idealizations or abstractions that are involved in formulating a theory and constructing a model. However, classical theories do not abstract from nanoscale properties and processes, nor do they refer to idealizations of nanoscale phenomena. Instead, in nanotechnoscience the challenge is that of crossing from the intended domain of a classical theory into quite another domain.

Taken together, these three statements express a characteristic tension concerning nanotechnology, namely that on the one hand it is thought to be strange, novel, and surprising while on the other hand it is familiar and manageable. More significantly for present purposes, however, they express an analogous tension regarding available theories: They are thought to be inadequate on the one hand, but quite sufficient on the other. The profound difference between classical and quantum regimes highlights what makes the nanocosm special and interesting—but this difference boils down to a matter of expediency and taste when it comes to choosing modular bits and pieces of theory from the perspectives of classical or quantum physics. Put another way: What makes nanoscale phenomena scientifically interesting is that they cannot be adequately described from either perspective, but what makes nanotechnologies possible is that the classical and quantum approaches prove resourceful enough when it comes to reconstructing these phenomena.

As for the societal dimensions of all this—theories that are closed relative to the nanoscale and stretched to provide explanations do not suggest limits of what is technically possible. Especially in the early days of nanotechnology, this gave rise to an image of nanotechnology as an unlimited storehouse of goods.

2 Qualitative reasoning

As was shown above, Heisenberg considered the extent to which phenomena of experience can be fitted to closed theories as “a question of success” [Heisenberg 1974], and one can consider the extent to which elements of closed theories can be stretched to account for phenomena of experience in the same terms. This suggests the follow-up question as to what “success” amounts to in nanoscale research: What does it take to satisfy oneself that one has reached a sufficiently good understanding or control of the phenomena under investigation?

For Heisenberg himself and those philosophers of science who follow the paradigm of theoretical physics, the question boils down to the predictive success of quantitative science. Here, “quantitative” means more than the employment of numbers or precision measurements. The characteristics of quantitative approaches include the following: First, predicted numerical values are compared to values obtained by measurement. The reasonably close agreement between two numbers then serves to establish the agreement of theory and reality. Second, this quantitative agreement emphatically makes do without any appeal to a likeness or similarity between theoretical models and the real-world systems they are said to represent. Quantitative science is satisfied if it reliably leads from initial conditions to accurate predictions; it does not require that all the details of its conceptual apparatus or every term in its equations has a counterpart in reality.

Both characteristics of quantitative science are familiar especially from 20th century theoretical physics—but do they hold for nanotechnoscience? Although there may not be a general answer to this question, it is fair to say that much nanotechnoscientific research is qualitative. To be sure, phenomena are measured and described with the utmost quantitative precision. And yet, epistemic success consists in constructions of likeness or similarity. In other words, in nanotechnoscience the quantitative agreement of predicted and measured quantities is replaced by a qualitative agreement of calculated and experimental images. One hallmark of the achievement of likeness or similarity is the absence, even deliberate suppression of visual clues by which to hold calculated and experimental images apart. Instead, there has been a pronounced tendency to blend and even fuse them: The mere fact of a seamless transition between them serves as an argument of sorts according to which the calculated image or model captures the relevant structure or dynamics of the experimentally observed entity or behavior.

Since an extensive analysis of this practice cannot be provided here, a description must do. It starts with (nano)technoscientific researchers comparing two seemingly distinct images on their displays or computer screens. One of the displays offers a visual interpretation of the data that were obtained through a series of measurements (e.g., by an electron or scanning probe microscope), the other presents a dynamic simulation of the observed process—and for this simulation to be readable as such, the simulation software has produced a visual output that is designed to look like the output of an electron or scanning probe microscope. Agreement between two similarly processed images gives the researchers license to draw inferences about probable causal processes and to suggest that they understand these processes, perhaps creating and participating in them.
The mere likeness of the images appears to warrant inferences from the mechanism modeled in the simulation to the mechanism responsible for the experimentally obtained data.

Accordingly—and this has not been done as yet—one could investigate how nanoscale researchers construct mutually reinforcing likenesses, how they calibrate not only simulations to observations and visual representations to physical systems, but also their own devices to extant devices, their own work to that of others, and current findings to long-term visions. This kind of study would show that the community of researchers is not unified by an overarching paradigm or theoretical framework but by the availability of a large tool-kit of theories, concepts, strategies, and techniques. Instead of theories, it is instruments (STM, AFM, etc.), their associated software, techniques and exemplary artefacts (buckyballs, carbon nanotubes, gold nanoshells, molecular wires, graphene), that provide relevant common referents [Mody 2004], [Hennig 2006], [Johnson 2006], [Marcovich & Shinn 2014].

Peter Galison has referred to this qualitative orientation as “ontological indifference”: Nanoscale research does not seek to decide what is and what is not, what are fundamental features of the world and what is inferred or derived [Galison 2017]. Since such questions are central in physics with its fundamental particles and hierarchical composition of matter, how is it that nanotechnological research can afford to do without? For example, molecular electronics researchers may invoke more or less simplistic pictures of electron transport but they do not reflect much on the nature and existence of electrons. Indeed, electrons are so familiar to them that they might think of them as ordinary macroscopic things that pass through a molecule as if it were another material thing with a tunnel going through it [Nordmann 2004]. Many physicists and philosophers of physics would object to such blatant disregard for the strangely immaterial and probabilistic character of the quantum world that is the home of electrons, orbitals, and standing electron waves [Scerri 2000], [Vermaas 2004]. Indeed, to achieve a practical understanding of electron transport, it may be necessary to entertain more subtle accounts. However, it is the privilege of ontologically indifferent technoscience that it can always develop more complicated accounts as the need arises. For the time being, it can see how far it gets with rather more simplistic pictures.6

Ontological indifference amounts to a disinterest in questions of representation and an interest, instead, in substitution and immersion. Rather than use sparse modeling tools to economically represent salient causal features of real systems, nanoresearchers produce in the laboratory and in their models a rich, indeed over-saturated substitute reality. Accordingly, only after generating a highly complex system from simple parts (e.g., a simulation model or experimental system) do they begin to reduce complexity by observing the behavior and parameter-dependencies not in “nature out there”, but in some domesticated chunk of reality in the computer or in the laboratory. These data reduction and modeling techniques, in turn, are informed by algorithms which are concentrated forms of previously studied real systems; they are tried and tested components of substitute realities that manage to emulate real physical systems [Harré 2003].
In other words, there is so much reality in their simulations or constructed experimental systems that nanotechnology researchers can take them for reality itself [Winsberg 2009]. They study these substitute systems and, of course, these systems provide prototypes for technical devices or applications. While the public is still awaiting significant nanotechnological products to come out of the labs, the researchers in the labs are already using nanotechnological tools to detach and manipulate more or less self-sufficient nanotechnological systems which “only” require further development before they can exist as useful devices outside the laboratory, devices that not only substitute for but improve upon something in nature. Instead of producing a symbolic picture of reality, researchers learn to inhabit an artefactual or manufactured reality that displaces the observed reality.

At the height of contemporary scientific and technological development, we therefore encounter a mode of thought which is characteristic of prescientific magical thinking. The pseudo-science of physiognomy, for example, is based on the idea that there is a meaningful likeness between the facial features and the character of a person—both participating in the same reality of which they are only two different aspects. With the reappearance of such forms of reasoning, societies will need to reassess the classical ideal of critical science in the service of Enlightenment.

6. A particularly interesting and challenging example of this is Don Eigler’s famous image of a quantum corral that confines a standing electron wave. The picture’s seemingly photographic realism suggests that the quantum corral is just as thing-like as a macroscopic pond.

3 The objects of interest

Any field of research is directed at a certain domain of objects and what unifies this domain is a particular way of conceiving these objects. Mechanics, for example, looks at all phenomena of motion. Everything that can be assigned coordinates in time and space and that has mass is an object of mechanics and is of interest in regard to those properties that make it an object of motion. The philosophy of science articulates this world-view of mechanics and asks, for example, to what extent certain conceptions of time and space prejudice the investigation of objects in motion. If one now asks about nanotechnological research, the philosophy of technoscience may offer something like the following characterization of its domain of objects: Nanotechnological research considers properties, traits, or features in regard primarily to their potential technical function, and in regard only incidentally to structure.

On a first approach, one might say that nanoscience and nanotechnologies are concerned with everything molecular or, a bit more precisely, with the investigation and manipulation of molecular architecture—as well as the properties or functionalities that depend on molecular architecture. Everything that consists of atoms is thus an object of study and a possible design target of nanoscale research. This posits a homogeneous and unbounded space of possibility, giving rise, for example, to the notion of an all-powerful nanotechnology as a combinatorial exercise that produces the “little BANG” (ETC Group 2003)—since bits, atoms, neurons, genes all consist of atoms and all of them are molecular, they all look alike to nanoscientists and engineers who can recombine them at will. And thus, categorial distinctions of living and inanimate, organic and inorganic, biological and technical things, of nature and culture appear to become meaningless with the notion of an unlimited space of combinatorial possibilities. Though hardly any scientist believes literally in the infinite plasticity of everything molecular, the molecular point of view proved transgressive in many nanotechnological research programs. It is particularly apparent where biological cells are described as factories operating with molecular nanomachinery. Aside from challenging cultural sensibilities and systematic attempts to capture the special character of living beings and processes, nanotechnoscience here appears naively reductionist. In particular, it does not usually acknowledge any influence of the context or environment of molecular structures, thereby excluding top-down causation from wholes to parts that would determine the properties and behaviors of component molecules.7

Conceived as a unified enterprise with an unbounded domain of “everything molecular”, nanotechnology fits the bill of an all-encompassing modern technology along the lines of Heidegger’s Gestell that treats everyone and everything as a material resource for the achievement of technical control. It does so because it employs what one might call a “thin” conception of nature: Nature is circumscribed by the physical laws of nature and all that accords with these laws is natural. Thus, nanotechnology can quickly and easily claim for itself that it always emulates nature, that it manufactures things nanotechnologically just as nature does when it creates living organisms.
This conception, however, is too “thin” or superficial to be credible, and it suffers from the failing that the conditions of (human) life on Earth have no particular place in it: From the point of view of physics and the eternal immutable laws of nature, life on Earth is contingent and not at all necessary. These laws predate and will outlive the human species. In contrast, a substantial, more richly detailed or “thick” conception of nature takes as a norm the special evolved conditions that sustain life on Earth. Here, any biomimetic research that emulates nature derives from an attitude of care and respect as it seeks to maintain these special conditions. This would involve an appreciation of how these conditions have evolved historically. On this conception, a molecule that occurs in a technical system will not be the same as one in a biological system, even if they share the same chemical composition. Instead of considering the nature or essence of objects in terms of structure-property relations, nanotechnology is interested in their power or potency—what they can or what they will be.8 However, it once again remains an open question and challenge to nanoscience and nanotechnologies whether they can really integrate such a thick or substantial conception of nature.

7. Even Richard Jones’s Soft Machines [Jones 2004] with its vivid appreciation of the complexities of “biological nanotechnology” does not reflect the epigenetic findings of developmental biologists regarding environmental stimuli to gene expression. These findings complicate immensely the promise of robust nanotechnology.
8. This ontological conception of (not only) nanotechnology’s attractive objects was the subject of the GOTO project on the Genesis and Ontology of Technoscientific Objects [see Bensaude-Vincent, Loeve et al. 2017].

If the thin conception of “everything molecular” yields a fantastically broad domain of objects, and if a more contextual, thick conception appears to be too challenging and perhaps too limiting, a third attempt to limit the domain of objects would foreground instrumental and experimental practice: The domain of objects and processes does not include all nanoscale objects “out there” but consists of just those phenomena and effects that are revealed by scanning tunneling microscopy and other specifically nanotechnological procedures. It was Peter Janich, in particular, who ascribed to nanotechnoscience a methodological unity based on common practice [Janich 2006], [see also Grunwald 2006]. He suggested a philosophical program of systematizing the operations by which nanoscale objects become amenable to measurement and observation. Such a systematic reconstruction of the domain of objects of nanotechnological research might begin by looking at length measurement or scanning probe microscopy. Unfortunately, however, research practice is not actually unified in this manner. Even scanning probe microscopy—to many a hallmark or point of origin for nanotechnologies—plays a minor role in the work of many nanoscale researchers [Baird & Shew 2004]. Also, the struggles to attain standard measures, or to characterize nanomaterials, testify to the unruliness of the objects of research.

In light of the shortcomings of all three approaches, what remains is the idea that the specific rather than envisioned objects of nanoscale research are constituted through their particular histories—histories that concern their origin (in a tissue sample, in the earth, in a chemically produced batch), that include nanotechnological interventions as well as their location finally in a technical system. This would promote, of course, the fragmentation of “nanotechnology” into as many “nanotechnologies” as there are nanotechnological devices or applications. If this is so, it becomes impossible to uphold the idea of carbon nanotubes as all-purpose technical components. If they contribute to the performance of some product, then they are individuated or characterized as being carbon-nanotubes-in-that-product. By the same token, they are no longer conceived as molecular objects that are combinable in principle with just about any other. The open space of unlimited potential differentiates into a manifold of specific technological trajectories. For technology assessment and especially the assessment of risk and safety, this would imply a shift of perspective from component materials to integrated systems or devices—a carbon nanotube would be as safe or unsafe as the product in which it appears.

4 From epistemic certainty to systemic robustness

The previous sections considered research practices of the nanotechnosciences—how theories are stretched to deal with the complexities at the nanoscale, how a qualitative methodology serves the construction of likeness and inferences from that likeness, and how the research objects are individuated and encountered. All these practices contribute to the production of knowledge, but the sense in which this is “objective knowledge” needs to be explored. As with more classical science, the findings of nanotechnoscientific research are published in scientific journals, so the question is, more concretely, what kind of knowledge is expressed or communicated in a nanoscientific journal article? Instead of answering this question in a properly detailed and differentiated manner, a somewhat schematic summary must suffice here [Nordmann 2012].

A typical research article in classical science states a hypothesis, offers an account of the methods, looks at the evidence produced, and assesses the hypothesis in light of the evidence. In contrast to “hypothesis testing”, a technoscientific research article provides testimony for an acquired capability. It offers a sign or proof of what has been accomplished in the laboratory and tells a story of what has been done. The telling of the story does not actually teach the capability, but it offers a challenge to the reader that they might develop this capability themselves. As opposed to “epistemic” knowledge (concerned with the truth or falsity of propositions), nanoscale research produces working knowledge—that is, knowledge of how things can work together within a working order.9 Working knowledge can be objective and public, scientific and communicable. It grasps causal relations, and establishes habits of action. It is validated or assessed not by the application of criteria or norms but by being properly entrenched in a culture of practice. Only propositions can be true or false, not attunement to a working order: by demonstrably acquiring a capability, one can more or less consistently effect something in an “apparatus-world complex” [Harré 2003]. As opposed to the truth or falsity, certainty or uncertainty of hypotheses, the hallmarks of technoscientific knowledge are robustness, reliability, and resilience of technical systems or systematic action.

This account of working knowledge pushes the question of where the “science” is in “technoscience” to the fore. The answer to this question can be found in the very first section above: It is in the (closed) theories that are brought as tools to the achievement of partial control and partial understanding. Nanotechnoscience seeks not to improve theory or to change our understanding of the world but to manage complexity and produce novelty. As such, nanotechnoscience is technical tinkering, product development, search for design solutions to societal problems, an attempt to shape and reshape the world.

However, the conceptual and physical tools it tinkers with do not come from ordinary experience, from common sense and a craft tradition but concentrate within them the labors of science. So, the “science” of “nanotechnoscience” is in the theoretical tools that enter into the research process. What comes out is working knowledge which cannot be reduced to scientific understanding or a theoretical description of how things are. As long as one can produce an effect in a reasonably robust manner, it does not really matter whether scientific understanding catches up. Indeed, the complexities may be such that it cannot fully catch up.

The standard example of technology being ahead of science is the steam engine which was developed without a proper understanding of the relation between heat and work [Baird 2004]. This understanding came much later and, indeed, was prompted only by the efficient performance of the steam engine. The steam engine itself was therefore not applied science but the result of technical tinkering. It was made of valves, pumps, gears, etc. of which there was good craft-knowledge—and it worked just fine before the advent of thermodynamics. In a sense, it didn’t need to be understood. As opposed to the steam engine, nanotechnological products, genetically modified organisms, or drug delivery systems are all offspring of the knowledge society. They are not made of valves and pumps but assembled from highly “scientized” components such as algorithms, capabilities acquired by scientifically trained researchers, measuring and monitoring devices with plenty of knowledge built in [Winsberg 2009]. The science that goes into the components is well-understood, but not so the interactions of all the components and their sensitivities in the context of the overall technical system. Still, like the steam-engine it may work just fine without being fully understood. And though one cannot attain positive knowledge from which to derive or predict its performance, we may learn to assess its robustness.

The fact that nanoscale researchers demonstrate acquired capabilities, and that they thus produce “mere” working knowledge, creates a demand for working knowledge also in a social arena where nanotechnological innovations are challenged, justified, and appropriated. The shift from propositional hypotheses to actions within techno-cultural systems, from epistemic questions of certainty to systemic probes of robustness has implications also for the production of socially robust technologies which requires working knowledge of how things work in a society.

9. This particular notion of “working knowledge” originated in [Baird & Nordmann 1994]. It should be distinguished from thing knowledge on the one hand, from personal skill or tacit knowledge on the other hand. Even when it is non-propositional, it is based in public performances of construction, maintenance, or repair. It consists in a demonstrable grasp of how things can be brought to exhibit their capacities in the context of specific works of human art—as such subject to scrutiny, assessment, prodding.

5 Scientific foundations for an open science

The preceding sections provided a survey of nanotechnoscience in terms of disciplinary questions (a complex field partially disclosed by stretching closed theories), of methodology (constructions and qualitative judgments of likeness), of ontology (a thin conception of nature as unlimited potential), and of epistemology (acquisition and demonstration of capabilities). This does not exhaust a philosophical characterization of the field, which would have to include, for example, a sustained investigation of nanotechnology as a conquest of space or a kind of territorial expansion.10 Also, nothing has been said so far about nanotechnology as an enabling technology that might enable, in particular, a convergence with bio- and information-technologies. Finally, it might be important to consider nanotechnoscience as an exemplar of a larger cultural transition from scientific to technoscientific research.

This survey is limited in other ways. It glossed over the heterogeneity of research questions and research traditions. If there is to be a common denominator for nanoscale research, this invites the speculative question: “What kind of basic science is required by and for nanotechnology?” From quantum mechanics or hydrodynamics derive the (closed) theories that serve as the toolkit for nanoscale research. While these are basic sciences, of course, they are not therefore the basis of nanoscience. What, then, is the basic scientific research that might properly ground nanotechnologies or establish nanoscience as a field in its own right? Interestingly, perhaps characteristically, there are no attempts so far to address this somewhat speculative question in a systematic way.11 And obviously, one should not expect any consensus regarding the following proposals.

In terms of empirical grounding or a theoretical paradigm, some call for general theories of (supra-)molecular structure-property relations, while others imagine that there will be a future science of molecular and nanotechnical self-organization, for example [Eberhart 2002] or [Roukes 2001]. Following the suggestion of Peter Janich (see above), one might identify and systematize how nanoscale phenomena are constituted through techniques of observation and measurement—this might render theories of instrumentation basic to nanoscience, for example [Clifford & Seah 2006], [Marcovich & Shinn 2014]. An entirely different kind of basic research would come from so-called Bildwissenschaft (image or picture-science), which could provide a foundation for image-production and visualization in nanotechnoscience. Its investigations might contribute clues for distinguishing visualizations of microscopically obtained data from animations and simulations. This science might also investigate image-text relations or develop conventions for reducing the photographic intimations of realism while enhancing informational content.

10. One implication of this is that nanotechnology should not be judged as the promise of a future but, instead, as a collective experiment in and with the present [Nordmann 2007].
11. To be sure, there are piecemeal approaches. One might say, for example, that a theory of electron transport is emerging as a necessary prerequisite for molecular electronics [but see Tao 2006]. Also, the giant magnetoresistance effect might be considered a novel nanotechnological phenomenon that prompted “basic” theory development [Wilholt 2006]. Compare also [Nordmann 2017].

A third possibility is to ask whether nanotechnoscience can and should be construed as a “social science of nature” [Böhme & Schäfer 1983].12 As an enabling, general-purpose, or key technology, it leaves undetermined what kinds of applications it will enable. This sets it apart from cancer research, the Manhattan project, the arms race, space exploration, artificial intelligence research, etc. As long as nanotechnoscience has no societal mandate other than to promote innovation, broadly conceived, it remains essentially incomplete, requiring social imagination and public policy to create an intelligent demand for its capabilities. As such a paradigmatically “open science”, it relies on social scientists, anthropologists, and philosophers to realize its ambitions of shaping a world atom by atom.

A science of mesocosmic order, a science of instrumentation and imaging, or a social science of nature can thus be conceived as basic to nanotechnological research. In fact, nanotechnology remains fairly indifferent to any such attempts at grounding it. What remains, then, is the question not of basic science but of basic technoscience, which establishes capabilities of measurement, intervention, and visualization, and thus a mastery of complexity that remains irreducible to underlying theory. Instead of a new systematic understanding of complexity at the nanoscale, nanotechnoscience achieves pragmatic and problematic integrations of pre-existing scientific knowledge with novel discoveries and capabilities. If one expects science to be critical of received theories in order to produce a better understanding of the world, and if one expects technology to enhance transparency and control by disenchanting and rationalizing nature, these pragmatic integrations appear regressive rather than progressive or even revolutionary. However, if instead of epistemic certainty one seeks systemic robustness, these integrations hold the promise of producing socially robust technologies.

6 Collective experimentation

Quite independently of the uses to which experiments are put, they consist first and foremost in the stabilization of a phenomenon in a laboratory—experiments make something observable, measurable, and replicable that does not exist as such outside controlled laboratory conditions. This is surely what nanotechnological experiments do, too. But they do not end here.

12. The term Soziale Naturwissenschaft was coined in the context of the finalization thesis and refers to a “social science of nature” as well as a “science of social nature”. Here, this proposal is taken up in two ways. First, as opposed to matter, materials are social entities and, as such, objects of this social science of nature—as are molecules, which are defined by their history and situatedness. Second, nanotechnoscience is a program for shaping and designing, that is, for reforming the world. To the extent that this is also a social reform, it is systematically incomplete without societal agenda-setting: what are the projects, the problems to be solved, the targets and design norms of nanotechnoscience?

All the novel phenomena and surprising properties that are discovered in nanotechnological laboratories mark the beginning of a process of “delocalization” [Galison 1997]. Phenomena leave their place of origin and become delocalized by being stabilized in the laboratory, then rendered robust enough to be reproducible under varying conditions in other laboratories, then scaled up and moved out of the laboratory altogether into the world at large of technical devices. This process of delocalization aims for a seamless transition from laboratory to marketplace as technical processes or phenomena become more robust or viable. It is therefore misleading to imagine that these processes and phenomena are brought to completion in the scientific laboratory and then handed over or “transferred” to engineers and commercial development. Instead, the world at large is just a larger laboratory in which these processes and phenomena can prove themselves.

In order to take literally the notion of collective experimentation in society as a laboratory, one needs to attend to the various features of experimentation writ large [Krohn & Weyer 1994], compare [Groß, Hoffmann-Riem et al. 2005], [Schwarz 2014]. As with all laboratories, this one is standardized in a variety of ways, and as with all experiments, these require systematic observation to support a learning process. In particular, the immersive aspect of experimentation in the laboratories of technoscience and society deserves to be explored. Where the experimenters and observers are also the guinea pigs and vice versa, and where experiments do not serve the advancement of truth but the experience and management of surprising features and effects, the mode and manner of experiencing and observing the experiments is, I believe, crucially important.

For social learning from collective experiments to take place, proper institutions are required for their more or less systematic observation. In the case of nanotechnologies, the question of regulation is tied in with the search for such institutions. There is, on the one hand, the sceptical question whether current methods of data-collection and monitoring will prove to be adequate; and there is, on the other hand, the search for new instruments such as codes of conduct, observatories, public engagement exercises, citizens’ and consumer conferences, as well as “ELSA” research or Responsible Research and Innovation. Though vaguely defined and lacking proper agency, these new institutions serve a general form of permanent vigilance.

7 Observatories and other agencies

Even observation with the naked eye, it has been shown, is neither passive nor neutral but, in the words of Norwood Russell Hanson [Hanson 2007], “theory-laden”. When we see the sun rising and setting, our observation corresponds to an implicit theory, when we “should” be seeing the Earth turning against the sun. And when a lay-person looks at a prepared tissue sample, she tends to see nothing at all, whereas the trained eyes of the pathologist comprehend the situation immediately and seemingly without an explicit act of interpretation. The difficult, perhaps intractable question of immediate, yet theory-laden observation becomes even more difficult in the case of technoscience and in the case of observing the collective experiment with nanotechnologies in society.

Ian Hacking famously asked “Do we see through a microscope?” [Hacking 1981]. Discussing light microscopy as well as electron microscopy, he argues that we do not see “through” an electron microscope as through a tube but that we see “with” advanced microscopes. We see with them because seeing is not merely passive or reactive but is based on strategic interventions: we literally throw light at what we are hoping to see and utilise laws of refraction to receive an image that we can interpret. Therefore, even in the 19th century, light-microscopic observation was wrapped up with an experimental intervention of sorts. But even though electron microscopy might appear more mediated and inferential than light microscopy, this does not make it less reliable. One of the ways in which electron microscopy is highly inferential is that it is calibrated to light microscopy—it is set up in such a way that it agrees with light microscopy. So even where, in the end, one does not look through a lens but at a display screen, the display gives us a way of seeing the world much as a television set does. For the expert, then, scientific observation involves a technically contrived effortlessness or mediated immediacy—it is conceptually complicated and perceptually simple.

With regard to nanotechnologies, Hacking’s question should now be extended: “Do we see through a scanning tunnelling microscope?” One of the distinctive features of the STM is that it is used to intervene not only by making visible but also by way of manipulating the objects under observation. A second distinctive feature of STM microscopy consists in its twofold calibration. Its data set is calibrated to electron microscopy on the one hand, and on the other hand its visual output is calibrated to topographic software that is used in geography, simulation modelling and video gaming—this software is best suited for the representation of what goes on at the surface of a body. Aside from providing the pleasure of experiencing a very familiar-looking space that stands ready to be colonised by nanotechnology, it stacks the deck in favor of inferences from the likeness of STM images and theoretical models in a computer simulation. Tellingly, these distinctive features make the STM conceptually even more complicated but perceptually even simpler than electron microscopy. Philosophers tell different stories when they consider whether the case of the STM is just another small step in the history of microscopy or whether it poses entirely new questions, see, for example, [Pitt 2004], [Rehmann-Sutter 2014].
But they all agree that it involves an interplay between active intervention and passive submission or, to put it philosophically, between spontaneity and receptivity. Indeed, this attitude of the observer informs the general orientation towards objects of nanoscale research: interesting properties that might provide technical functionality are actively sought out by researchers who are hoping to be surprised by the phenomena they produce. Also, the appearance of specific phenomena and processes requires hard work and careful control, but the familiar visual frame of the “surfacescape”, for example, opens up an unbounded space for the emergence of novelty and surprise.

So far, the history of “seeing with microscopes” revolves around realism or truth: straightforward seeing is associated with seeing how things are, whereas a highly theory-laden and inferential mode of perception suggests that what we see is a construct of sorts. The reliability of a way of seeing—with the electron microscope, for example—was judged in comparison to apparently straightforward cases of immediate perception. But the two features of the STM identified above drive home that the reliability of the observation depends not on representational features but on the technical robustness and performance of the system. Though STM microscopy is even more inferential than electron microscopy, the fact that it is also an instrument of intervention and the fact of its twofold calibration indicate that it cannot be likened to a human observer who confronts an outside reality and wonders whether a mental image provides a truthful representation. Instead, the STM is coordinated with a multitude of other instruments and procedures and is judged by the way it agrees with and improves upon a whole system of observational and experimental techniques. Firmly entrenched in a variety of contexts and practices, the STM is not so much a method of seeing atoms on surfaces as an “apparatus-world complex” that affords perceptual and manipulative access to atoms on surfaces.13

Similarly, collective experimentation with emerging nanotechnologies also requires a robust system of observation that is tied to various institutions and interests and that is simultaneously a way of seeing and of acting in the world. Rather than simply registering potential hazards and public concerns, a systematic observation of our collective experiments should afford a kind of institutional robustness. When, for example, a commercial “nano” product sends users to the hospital with respiratory distress, an observation of this event should do more than merely represent what happened—what did the media report, what did the toxicologists conclude, how did the stock market react? Instead, it needs to view this incident as an experimental situation that served to probe the robustness of the regime of vigilance that is to ensure a social learning process—how effectively did existing regulatory institutions, governmental agencies, public media and the scientific community respond to this incident, what was learned and what deficits can be identified? To make these assessments, to raise and answer these questions, an appropriate institution is needed [Lösch, Gammel et al. 2009].

13. See Rom Harré for an account of the difference between instruments that function like probes (the thermometer, the light microscope) and a complex of apparatus and world that makes a phenomenon available for research and development, for observation and intervention. Of the latter complexes he says that they afford a phenomenon much as yeast, water and an oven afford us a loaf of bread [Harré 2003].
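To make the point about topographic rendering concrete, here is a minimal, purely illustrative sketch in Python (assuming numpy and matplotlib; the data are synthetic and invented for the example, and nothing here describes an actual STM software pipeline). A grid of apparent heights, standing in for the output of a raster scan, is displayed as shaded relief so that it takes on the familiar look of a landscape.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

# Synthetic "scan": a flat terrace with a few Gaussian bumps standing in for
# adatoms; real STM output would be a grid of tip heights from a raster scan.
x = np.linspace(0.0, 10.0, 200)   # hypothetical lateral scale in nm
y = np.linspace(0.0, 10.0, 200)
X, Y = np.meshgrid(x, y)
Z = np.zeros_like(X)
for cx, cy in [(3.0, 3.0), (5.0, 6.5), (7.0, 4.0)]:
    Z += 0.2 * np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / 0.3)

# Topographic rendering: height becomes relief and colour, so the data appear
# as a landscape-like "surfacescape" rather than as a table of measurements.
fig = plt.figure(figsize=(6, 4))
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(X, Y, Z, cmap=cm.copper, linewidth=0, antialiased=True)
ax.set_xlabel("x (nm)")
ax.set_ylabel("y (nm)")
ax.set_zlabel("apparent height (nm)")
ax.set_title("Synthetic height map rendered as a surfacescape")
plt.show()

The rendering choices (relief, shading, a mineral-like palette) do the perceptual work of making the data look like a familiar surface, which is precisely the conceptually complicated but perceptually simple effect discussed above.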

8 Parting shot

While there is more than this to the philosophy of nanotechnoscience, this initial survey might be sufficient to suggest that we should not become fixated on the imagined consequences of an emerging technology. If there is something special about nanotechnology and the challenges it poses, it is right in front of our eyes in the form of a very unusual research enterprise. The main challenge is to analyze, appreciate, and evaluate it.

Bibliography

Baird, Davis [2004], Thing Knowledge: A Philosophy of Scientific Instruments, Berkeley: University of California Press.

Baird, Davis & Nordmann, Alfred [1994], Facts-well-put, The British Journal for the Philosophy of Science, 45(1), 37–77, doi: 10.1093/bjps/45.1.37.

Baird, Davis & Shew, Ashley [2004], Probing the history of scanning tunneling microscopy, in: Discovering the Nanoscale, edited by D. Baird, A. Nordmann, & J. Schummer, Amsterdam: IOS Press, 145–156.

Bensaude-Vincent, Bernadette, Loeve, Sacha, et al. [2017], Introduction: The genesis and ontology of technoscientific objects, in: Research Objects in their Technological Setting, edited by B. Bensaude-Vincent, S. Loeve, A. Nordmann, & A. Schwarz, Abingdon: Routledge, 1–12.

Böhme, Gernot & Schäfer, Wolf [1983], Towards a social science of nature, in: Finalization in Science: The Social Orientation of Scientific Progress, edited by W. Schäfer, Dordrecht: Springer Netherlands, 251–269, doi: 10.1007/978-94-009-7080-9_12.

Clifford, Charles A. & Seah, M. P. [2006], Modelling of nanomechanical nanoindentation measurements using an AFM or nanoindenter for compliant layers on stiffer substrates, Nanotechnology, 17(21), 5283–5292, doi: 10.1088/0957-4484/17/21/001.

Eberhart, Mark [2002], Quantum mechanics and molecular design in the twenty first century, Foundations of Chemistry, 4(3), 201–211, doi: 10.1023/A:1020655631654.

Galison, Peter [1997], Image and Logic: A Material Culture of Microphysics, Chicago: Chicago University Press.

—— [2017], The and the ring: A physics indifferent to ontology, in: Research Objects in their Technological Setting, edited by B. Bensaude-Vincent, S. Loeve, A. Nordmann, & A. Schwarz, Abingdon: Routledge, 15–26.

Groß, Matthias, Hoffmann-Riem, Holger, et al. [2005], Realexperimente: Ökologische Gestaltungsprozesse in der Wissensgesellschaft, Bielefeld: Transcript.

Grunwald, Armin [2006], Nanotechnologie als Chiffre der Zukunft, in: Nanotechnologien im Kontext: Philosophische, ethische und gesellschaftliche Perspektiven, edited by A. Nordmann, J. Schummer, & A. Schwarz, Berlin: Akademische Verlagsgesellschaft, 49–80.

Hacking, Ian [1981], Do we see through a microscope?, Pacific Philosophical Quarterly, 62(4), 305–322, doi: 10.1111/j.1468-0114.1981.tb00070.x.

Hanson, Norwood Russell [2007], Patterns of Discovery, Cambridge: Cambridge University Press, 2nd edn.

Harré, Rom [2003], The materiality of instruments in a metaphysics for experiments, in: The Philosophy of Scientific Experimentation, edited by H. Radder, Pittsburgh: University of Pittsburgh Press, 19–38.

Heisenberg, Werner [1974], The notion of a “closed theory” in modern science, in: Across the Frontiers, edited by W. Heisenberg, New York: Harper & Row, 39–46.

Hennig, Jochen [2006], Lokale Bilder in globalen Kontroversen: Die heterogenen Bildwelten der Rastertunnelmikroskopie, in: The Picture’s Image. Wissenschaftliche Visualisierungen als Komposit, edited by I. Hinterwaldner & M. Buschhaus, Munich: Fink, 243–260.

Janich, Peter [2006], Wissenschaftstheorie der Nanotechnologie, in: Nanotechnologien im Kontext: Philosophische, ethische und gesellschaftliche Perspektiven, edited by A. Nordmann, J. Schummer, & A. Schwarz, Berlin: Akademische Verlagsgesellschaft, 1–32.

Johnson, Ann [2006], The shape of molecules to come, in: Simulation: Pragmatic Construction of Reality, edited by J. Lenhard, G. Küppers, & T. Shinn, Dordrecht: Springer, 25–39, doi: 10.1007/1-4020-5375-4_2.

Jones, Richard [2004], Soft Machines, Oxford; New York: Oxford University Press.

Krohn, Wolfgang & Weyer, Johannes [1994], Society as a laboratory: The social risks of experimental research, Science and Public Policy, 21(3), 173–183, doi: 10.1093/spp/21.3.173.

Lösch, Andreas, Gammel, Stefan, et al. [2009], Jenseits von Regulierung: Zum politischen Umgang mit der Nanotechnologie, Heidelberg: Akademische Verlagsgesellschaft.

Marcovich, Anne & Shinn, Terry [2014], Toward a New Dimension: Exploring the Nanoscale, New York: Oxford University Press.

Mody, Cyrus [2004], How probe microscopists became nanotechnologists, in: Discovering the Nanoscale, edited by D. Baird, A. Nordmann, & J. Schummer, Amsterdam: IOS Press, 119–133.

Nordmann, Alfred [2004], Molecular disjunctions: Staking claims at the nanoscale, in: Discovering the Nanoscale, edited by D. Baird, A. Nordmann, & J. Schummer, Amsterdam: IOS Press, 51–62.

—— [2007], If and then: A critique of speculative nanoethics, NanoEthics, 1(1), 31–46, doi: 10.1007/s11569-007-0007-6.

—— [2008], Philosophy of nanotechnoscience, in: Nanotechnology: Principles and Fundamentals, edited by G. Schmid, Weinheim: Wiley, 217–244.

—— [2010], Philosophy of technoscience in the regime of vigilance, in: International Handbook on Regulating Nanotechnologies, edited by G. Hodge, D. Bowman, & A. Maynard, Weinheim: Edward Elgar, 25–45.

—— [2012], Object lessons: Towards an epistemology of technoscience, Scientiæ Studia: Revista Latino-Americana de Filosofia e História da Ciência, 10, 11–31.

—— [2017], Vanishing friction events and the inverted platonism of technoscience, in: Research Objects in their Technological Setting, edited by B. Bensaude-Vincent, S. Loeve, A. Nordmann, & A. Schwarz, Abingdon: Routledge, History and Philosophy of Technoscience, 56–69.

Pitt, Joseph C. [2004], The epistemology of the very small, in: Discovering the Nanoscale, edited by D. Baird, A. Nordmann, & J. Schummer, Amsterdam: IOS Press, 157–163.

Rehmann-Sutter, Christoph [2014], Viewing the nanoscape: Implications of AFM for an ethics of visualization, in: Nanobiotechnology, Nanomedicine and Human Enhancement, edited by J. S. Ach & B. Lüttenberg, Münster: LIT Verlag, 27–44.

Roukes, Michael [2001], Plenty of room indeed, Scientific American, September, 48–57.

Scerri, Eric R. [2000], Have orbitals really been observed?, Journal of Chemical Education, 77(11), 1492–1494, doi: 10.1021/ed077p1492.

Schwarz, Astrid [2014], Experiments in Practice, Abingdon: Routledge.

Tao, Nongjian J. [2006], Electron transport in molecular junctions, Nature Nanotechnology, 1, 173–181, doi: 10.1038/nnano.2006.130.

Vermaas, Pieter [2004], Nanoscale technology: A two-sided challenge for interpretations of quantum mechanics, in: Discovering the Nanoscale, edited by D. Baird, A. Nordmann, & J. Schummer, Amsterdam: IOS Press, 77–91.

Wilholt, Torsten [2006], Design rules: Industrial research and epistemic merit, Philosophy of Science, 73(1), 66–89, doi: 10.1086/510175.

Winsberg, Eric [2009], A tale of two methods, Synthese, 169(3), 575–592, doi: 10.1007/s11229-008-9437-0.

Nanotechnologies et ingénierie du foie bioartificiel. Une autre idée de la « convergence technologique »

Xavier Guchet Université de technologie de Compiègne, COSTECH (EA 2223)

Cécile Legallais Université de technologie de Compiègne, BMBI (UMR CNRS 7338)

Résumé : La conception américaine de la convergence technologique, promue par le programme NBIC (Nano-Bio-Info-Cogno), est orientée vers la recherche d’une intégration de tous les domaines du réel (la nature, la vie, l’esprit, la société) dans la représentation unitaire d’un monde constitué de systèmes hiérarchiques complexes et couplés entre eux. Il s’agit par conséquent d’une vision totalisante, témoignant d’un idéal de contrôle et de maîtrise que rien ne paraît devoir limiter. Les Grecs avaient un mot pour désigner cette ambition démesurée : l’hybris. À rebours de cette vision conquérante, l’ingénierie du foie bioartificiel est sous-tendue par une conception plus modeste et plus précautionneuse de la convergence technologique. L’article examine cette dualité de perspectives et défend le choix d’une ingénierie animée par les valeurs de la prudence, de l’attention et du soin.

Abstract: The American NBIC program assumes that all existing reality—be it material, living, spiritual, social—fits into a unified representation of the world that consists in intertwined tiered complex systems, and is amenable to technological reengineering. Unlimited control and mastery are the key words of this all-encompassing vision, supported by what the Ancient Greeks called hybris. Contrary to this swaggering ambition, however, bioartificial liver engineering paves the way for a more humble and cautious approach to technological convergence. The paper examines these two rival views and argues in favor of a culture of engineering motivated by prudence, attention and care.

Philosophia Scientiæ, 23(1), 2019, 121–135.

1 Introduction

Au milieu des années 1980, l’ingénieur K. E. Drexler pronostiquait un changement radical dans l’activité fabricatrice : tandis que durant des milliers d’années, fabriquer quelque chose avait toujours consisté à enlever de la matière au moyen d’un outil (selon une approche descendante, top-down), la « manufacture moléculaire » devait bientôt rendre accessible la fabrication ascendante, atome par atome [bottom-up], de tout type d’objets utiles [Drexler 1986]. La promesse résidait dans la mise au point de petites machines programmables et munies d’un bras articulé, capables de déplacer les atomes individuellement et de construire n’importe quelle structure, y compris les plus complexes. Drexler soutenait par conséquent une thèse audacieuse : les nanotechnologies n’allaient pas seulement permettre de fabriquer des choses avec une précision accrue et un contrôle amélioré ; elles devaient introduire un bouleversement dans le concept même de fabrication.

L’idée d’une ingénierie à l’échelle nanométrique changeant radicalement les concepts avant même de changer les outils a par la suite continué d’alimenter les discours de promotion des nanotechnologies. Le programme dit de « convergence technologique » (Nanotechnologies, biotechnologies, informatique et sciences cognitives, NBIC), bien qu’il ne reprenne pas l’idée de manufacture moléculaire telle que Drexler l’avait envisagée, repose malgré tout sur la même idée : le renouvellement des concepts de l’ingénierie à l’échelle nanométrique est une condition nécessaire pour la mise au point de nouvelles solutions techniques [Roco & Bainbridge 2003]. L’idée sous-jacente du programme NBIC est l’intégration de tous les domaines du réel (la nature, la vie, l’esprit, la société) dans la représentation unitaire d’un monde constitué de systèmes hiérarchiques complexes et couplés entre eux. Plus précisément, le programme stipule 1°) que toute réalité résulte en dernière analyse d’un agencement de la matière à l’échelle nanométrique ; 2°) que l’unité de la matière à cette échelle justifie le rapprochement des diverses sciences, en vue de déployer cette vision unitaire du réel par l’intégration progressive de tous les systèmes dont il est constitué, y compris les systèmes humains ; 3°) que la mise au point de technologies d’intervention à l’échelle nanométrique, capables de manipuler les briques élémentaires [building blocks] dont ces différents systèmes complexes sont constitués, doit permettre de manipuler et de transformer ces systèmes dans un but d’optimisation ; et 4°) que l’amélioration des performances humaines, à la fois biologiques, psychiques, cognitives et sociales, est la finalité ultime de ce grand programme de convergence technologique à visée d’optimisation. La convergence technologique soutenue par la NSF ne promet pas seulement une « révolution » dans les méthodes et dans les procédés : elle annonce rien de moins qu’une révolution dans les concepts fondamentaux de la connaissance et de la technique, mais aussi et peut-être surtout dans la compréhension de la réalité humaine, et dans la possibilité d’agir sur elle. En filigrane de cette vision particulièrement ambitieuse, nous trouvons l’idée qu’aucune limite ne pourra faire obstacle à la reconfiguration du réel.

Cette vision totalisante de la convergence technologique s’est-elle effectivement réalisée ? Assurément non.
Dans bien des domaines, les nanotechnologies n’ont apporté aucun renouvellement véritable des concepts et des approches. Elles ont fourni des outils commodes, des possibilités de manipulation intéressantes, mais c’est tout. Qui plus est, dans diverses approches d’ingénierie intégrant les nanotechnologies, celles-ci n’apportent aucunement la promesse d’une domination sans limite de la nature. Elles sont plutôt mises au service d’une sensibilité à la complexité du réel, d’une forme de soin aux choses et même de prudence. C’est le cas de l’ingénierie des organes (bio)artificiels1. L’exemple de la suppléance hépatique servira de fil conducteur à l’analyse. Il s’agira d’examiner les nombreuses stratégies pour la construction d’un foie bioartificiel, d’identifier l’utilisation qui est faite des nanotechnologies dans ces différentes approches, et de montrer dans ce contexte que les nanotechnologies (qui n’ont en rien fait émerger des concepts inédits et des stratégies nouvelles) sont aujourd’hui mises au service d’un style d’ingénierie orienté par l’exigence de la modestie, du soin, et de la convenance eu égard aux exigences propres de ce sur quoi porte l’activité d’ingénierie. Le contraire donc de la vision conquérante de Drexler ou des NBIC.

2 L’hybris des nanotechnologies

La vision démiurgique des NBIC s’appuie sur une conception métaphysique (toute réalité résulte de l’assemblage des mêmes briques élémentaires, à l’échelle nanométrique) et sur une promesse (la convergence des technologies doit donner accès à la manipulation sans limite de ces building blocks, et par conséquent au contrôle du réel dans son intégralité et dans tous les ordres de grandeur). Ce triomphalisme s’est traduit par la diffusion, dans un nombre grandissant de disciplines scientifiques et d’ingénierie, du vocabulaire de la machine [Bensaude-Vincent & Guchet 2007], [Bensaude-Vincent 2011]. Cependant, dès l’« Executive summary », le rapport NBIC tempère l’hybris qui sous-tend la vision conquérante d’un monde intégralement reconfigurable et « machinisable » par les technologies convergentes. Il souligne en effet que l’extraordinaire potentiel économique et social que celles-ci contiennent ne pourra se réaliser qu’à la condition de prendre très sérieusement en considération les enjeux éthiques et les besoins sociétaux. Assisterions-nous par conséquent, avec le programme NBIC, à un retour à la sagesse et à la prudence des Grecs de l’Antiquité, qui pensaient eux aussi que l’activité productive artisanale [techné poietiké] devait être surplombée par la politique ?

Ce rapprochement avec les Grecs est toutefois trompeur. La méfiance dans laquelle Platon tenait l’activité productive (même si cette méfiance ne résume pas toute la conception platonicienne de la techné poietiké, loin de là2) s’explique par le fait que l’artisan est celui qui transforme la matière et qu’il a de ce fait le privilège des formes : il peut donc les faire varier selon son bon vouloir, introduisant ainsi un risque d’instabilité permanente dans la cité [Rancière 2000]. Les Grecs n’ont pas tenté de circonscrire ce pouvoir de faire varier les formes par un cadrage éthico-politique seulement, mais par une métaphysique. Qu’il y ait une Idée éternelle de toute chose [eidos], un modèle idéal qui définit ce que sont les propriétés essentielles des choses, empêche l’artisan de faire n’importe quoi. Pour Platon, le bon artisan n’est pas celui qui possède une habileté particulière, une manière astucieuse de procéder, un savoir-faire acquis par l’expérience : il est celui qui règle sa fabrication sur l’eidos de ce qu’il fabrique [Schiefsky 2007]. La limitation de l’hybris technicienne vient aussi, pour Aristote, de la matière : celle-ci n’est pas le pur principe d’indétermination que l’on croit souvent, à tort ; elle possède toujours des « formes implicites », selon le mot de Simondon [Simondon 2005], ce qui empêche que n’importe quelle forme puisse lui être imposée [Schiefsky 2007]. De surcroît, la hiérarchie des activités place la poiésis, la production, dans une situation de moyen par rapport aux finalités ultimes de l’éthique et de la politique : celles-ci exercent un magistère sur celle-là, elles se la subordonnent. Enfin, un dernier principe de limitation est apporté par la conception organologique de la technique [Vernant 1965] : l’agir technique est contraint par les possibles de la corporéité vivante, ainsi que par la nature des besoins qui sont les siens. En résumé, la techné est soumise à un quadruple principe de limitation :

elle est rendue dépendante comme quadruple effet de l’idée [eidos] de la chose à réaliser (la maison pour le maçon), des caractéristiques de la matière à travailler (le marbre pour le sculpteur), de la commande du prescripteur (la demande du client) et de l’énergie motrice du corps de l’artisan. [Schwartz 2008]

Or, le programme NBIC ne souscrit plus à ces principes limitants. Les limitations d’ordre formel et matériel semblent avoir complètement disparu : à l’échelle nanométrique, tout est supposé possible. Façonner le monde atome par atome : ce slogan popularisé par K. E. Drexler a défini l’idéal d’une conception ascendante [bottom-up] des nanotechnologies, en laissant croire que bientôt les ingénieurs seront capables de construire tout et n’importe quoi en manipulant des atomes individuels, sans jamais rencontrer de limitation imposée par des formes essentielles. Corrélativement, la matière a pu apparaître, dans cette conception ascendante, comme un réceptacle absolument passif, n’opposant aucune contrainte aux buts des ingénieurs. Par ailleurs, la cause efficiente de l’agir technique à la nanoéchelle n’est plus le corps venant limiter de fait les possibilités de l’action par sa biomécanique et sa force musculaire : le corps est si peu directement engagé dans les manipulations nanotechnologiques que des systèmes haptiques à retour d’effort ont parfois été introduits dans les microscopes en champ proche, précisément pour restituer au manipulateur un ressenti sensori-moteur [voir l’exemple du NanoLearner, Marchi, Castet et al. 2010]. Des quatre causes aristotéliciennes, seule la cause finale (répondre aux grands défis de société ; satisfaire des exigences éthiques) semble par conséquent en mesure d’introduire une limitation dans le développement des technologies convergentes. Désormais libéré de toute soumission à des principes limitants intrinsèques à l’activité productive elle-même, l’agir technique ne semble pouvoir être limité que par des considérations extérieures à celui-ci, imposées d’en haut. Selon la conception qui sous-tend les NBIC, les scientifiques et les ingénieurs ont, semble-t-il, vocation à « machiner » l’intégralité du réel en le livrant sans reste au calcul – metron. Les ingénieurs « nano » paraissent ainsi totalement dépourvus du sens profond que les Grecs avaient du metrion, c’est-à-dire de la juste mesure, de la justesse et de la convenance de l’activité productive à l’égard de la chose produite.

1. Les organes bioartificiels intègrent des éléments biologiques, notamment des cellules, pour assurer leurs fonctions, contrairement aux organes purement artificiels (par exemple ceux à base de membranes en polymère, comme l’hémodialyse).
2. Platon ne semblait pas tenir les artisans en grande estime. Dans le Phèdre, Socrate établit une hiérarchie des âmes, de la plus céleste à la plus vile : « celui qui pratique un métier ou cultive la terre », l’artisan ou l’agriculteur donc, se trouve au septième rang, juste devant le sophiste et le tyran [Phèdre, 248 d-e]. Toutefois, dans le Banquet, Socrate voit en eux, tous métiers confondus, des créateurs. Ils sont en effet « causes du passage de la non-existence à l’existence » de ce qu’ils fabriquent, « en sorte que toutes les opérations qui sont du domaine des arts sont des créations, et que sont créateurs tous les ouvriers de ces opérations » [Banquet, 205, b–c].

Le programme de convergence technologique a, il est vrai, connu des inflexions significatives durant la décennie 2000-2010, se traduisant par une certaine prise de distance à l’égard de l’ambition démiurgique radicale du rapport Roco & Bainbridge [Roco & Bainbridge 2003]. Dans un rapport rendu public en 2004, la Commission européenne a ainsi défendu une conception alternative, orientée non pas vers l’amélioration tous azimuts des performances humaines mais plutôt vers « la société de la connaissance » [HLEG 2004]. La convergence y est moins définie comme un grand programme mobilisateur que comme une série de programmes de recherche sectoriels, motivés par des besoins sociaux. Il ne s’agit plus de développer une ingénierie globale du corps et de l’esprit, comme dans le rapport NBIC, mais une ingénierie pour le corps et pour l’esprit, c’est-à-dire à leur service.

La vision américaine de la convergence a également évolué, sous la plume de Roco lui-même. Dans son bilan des dix années de la National Nanotechnology Initiative [Roco 2011], Roco ne mentionne plus du tout l’amélioration humaine comme vision directrice de la convergence technologique. Il explique qu’en dix ans, les nanotechnologies ont atteint une certaine maturité et que les positions extrêmes, voire extrémistes, triomphalistes aussi bien que catastrophistes, ont été marginalisées. Quoi qu’il en soit, bien que le ton du bilan soit moins triomphaliste, la convergence technologique reste le but à atteindre.

Ce diagnostic d’hybris technicienne dans les nanotechnologies pose cependant une grave difficulté : bien des scientifiques et des ingénieurs ne s’y reconnaîtront sans doute absolument pas ! Indubitablement, l’ingénierie nanotechnologique ne correspond pas partout, loin s’en faut, à la conception défendue aussi bien par Drexler que par le rapport NBIC. À rebours de la vision conquérante du shaping the world atom by atom, ou bien de la « machination » du réel dans tous les ordres de grandeur, les ingénieurs qui font usage des nanotechnologies, à des titres divers, savent très bien qu’une ingénierie astucieuse doit plutôt adopter une attitude modeste, attentive aux « formes implicites » des matériaux à cette échelle, cherchant à tirer parti des propriétés de la matière à l’échelle nanométrique (forces électrostatiques, mouvement brownien, etc.) – au lieu de vouloir à tout prix plaquer sur elle des formes et des manières de procéder qui ne lui conviennent pas. Loin de relever d’une visée de maîtrise sans limite, les nanotechnologies réclament au contraire de subtiles négociations avec des matériaux qui se révèlent bien souvent récalcitrants. C’est aussi au demeurant ce qui ressort de l’analyse d’un domaine en plein essor depuis les années 1980 : l’ingénierie tissulaire.

3 Une conception alternative de la convergence : le foie bioartificiel

L’ingénierie tissulaire et la conception d’organes (bio)artificiels pour suppléer des organes défaillants sont souvent présentées comme des domaines susceptibles d’être « potentialisés » [enabled] par les nanotechnologies [Solano-Umaña, Vega-Baudrit et al. 2015]. Un tissu, et plus encore un organe complet, est constitué par un agencement tridimensionnel d’une ou plusieurs populations cellulaires, au sein d’une matrice extracellulaire en général vascularisée et innervée. Les interactions cellules-cellules et cellules-matrice sont cruciales : la fonctionnalité de l’organe dépend de ces interactions, qui sont de nature biochimique mais aussi mécanique. Dans le cas du foie, on a affaire à des structures fonctionnelles hautement organisées en lobules. Les hépatocytes sont agencés en travées, ils présentent une polarisation particulière et bordent un espace appelé sinusoïde qui laisse passer le sang de la zone portale à la zone centro-lobulaire. Les contraintes mécaniques locales, liées à l’écoulement sanguin, ainsi que l’oxygénation varient en fonction de la localisation de l’hépatocyte dans le sinusoïde et lui confèrent des propriétés/fonctions spécifiques. Or, ces interactions opèrent à l’échelle nanométrique. L’idée de concevoir des matrices extracellulaires artificielles ou biosourcées qui reproduisent les patterns nanométriques de l’organe in vivo, ou bien de nanostructurer des réseaux vasculaires pour assurer les transports moléculaires (nutriments, etc.), s’est par conséquent imposée comme une perspective prometteuse. La nanolithographie ou l’électrofilage [electrospinning] permettent de construire à façon des supports ou des matrices nanostructurées, formant un échafaudage en 3D [scaffold]. Les nanotechnologies peuvent aussi être utilisées dans le domaine des organoïdes. Il s’agit d’un domaine en plein essor, qui consiste à construire une version simplifiée d’un organe en vue de procéder à des tests – on parle aussi d’organs-on-chips, d’organes sur puce, destinés notamment à l’industrie pharmaceutique afin qu’elle puisse étudier le métabolisme des médicaments et leurs effets en phase préclinique, comme complément ou alternative à l’expérimentation animale.

Le projet multidisciplinaire en cours Innovations for Liver Tissue Engineering (iLite), porté par le Département hospitalo-universitaire HEPATINOV (Assistance Publique – Hôpitaux de Paris), est un très bon exemple de convergence technologique reposant sur une vision différente de celle des NBIC. L’objectif du projet est triple : construire un foie bioartificiel extracorporel, construire un foie transplantable (tous deux pour le traitement de pathologies hépatiques sévères), et construire un foie sur puce (pour l’étude du métabolisme hépatique des médicaments et de leurs effets sur l’organe cible). L’approche repose sur l’assemblage de briques élémentaires (organoïdes hépatiques, réseaux vasculaires et biliaires) construites séparément et intégrées dans un scaffold. Les différentes étapes de la construction et de l’intégration sont guidées par la modélisation in silico. L’une des ambitions majeures du projet est d’arriver à la construction d’un arbre biliaire : en effet, aucun dispositif bioartificiel existant à ce jour ne comporte de système assurant le drainage et l’évacuation de la bile sécrétée par les hépatocytes, or cette fonction est essentielle.
L’ambition de construire des organes bioartificiels entiers, extracorporels ou transplantables, semble en première analyse souscrire à la plus complète hybris technicienne : n’est-il pas question en effet de substituer à des organes défaillants des sortes de pièces de rechange, selon une vision très mécaniste du corps humain ? L’ingénierie des organes (bio)artificiels rendrait ainsi opératoire l’analogie du corps et de la machine, et rendrait idéalement concevable une substitution complète de tous les organes par leurs équivalents reconstruits. Cette visée artificialisante, illimitée dans son principe directeur, ne pourrait trouver sa limitation que dans un cadrage éthique externe, exigeant par exemple la prise en compte du vécu des patients, de leur ressenti en présence d’un artefact étranger dans leur corps, etc. Nous aurions par conséquent affaire à un magnifique exemple d’hybris technicienne, et à une conception de la convergence technologique orientée par la conviction que le réel dans son ensemble, dans tous les ordres de grandeur, est manipulable et reconfigurable à volonté.

Or, l’histoire des approches du foie (bio)artificiel ne confirme pas cette analyse. La littérature distingue souvent trois grands types d’approches [Takahashi, Malchesky et al. 1991] : les approches non biologiques, les approches biologiques et les approches hybrides. Les approches non biologiques reposent sur l’idée que l’insuffisance hépatique aiguë se traduit par des dérèglements métaboliques mesurables, notamment des concentrations anormalement élevées de toxines dans le sang. Le dispositif de suppléance hépatique doit corriger ces désordres en éliminant ces toxines. Le premier article consacré au foie artificiel [Kiley, Welch et al. 1956] décrit ainsi la transposition de la dialyse rénale au traitement des hépatites fulminantes. Par la suite, dans les années 1960, d’autres approches non biologiques sont développées : l’hémofiltration ou encore l’utilisation de charbon actif, un matériau à fort pouvoir d’adsorption capable de fixer les toxines. Les stratégies non biologiques ne sont cependant pas les seules envisagées à cette époque. Partant du constat que le foie accomplit des fonctions très complexes et globales – la détoxification donc, mais aussi la synthèse de protéines, ainsi que des fonctions de transformation des substances (métabolisme) et de stockage – certains considèrent dès les années 1950 que la seule manière de traiter les hépatites fulminantes est de « brancher » les patients sur des foies d’animaux, sous la forme soit de foies intacts et maintenus vivants ex vivo, soit de tranches de foie, soit d’homogénats de foie [Naruse, Tang et al. 2007]. À la fin des années 1960, la plasmaphérèse (le remplacement du plasma du patient par celui d’un donneur) est ajoutée à la panoplie des dispositifs de suppléance hépatique. Il faut attendre les années 1970 pour voir se développer d’autres approches, dites hybrides, consistant principalement à intégrer des hépatocytes dans des dispositifs artificiels. Ces approches par utilisation d’hépatocytes se sont considérablement diversifiées, surtout à partir des années 1980 (possibilité d’utiliser de nouveaux biomatériaux ; apparition de techniques de fixation des hépatocytes sur des microtransporteurs coatés au collagène ; développement du concept de bioréacteur à base de fibres creuses, jouant le rôle de milieu favorable au maintien de la fonctionnalité des hépatocytes [Carpentier, Gautier et al. 2009]).
À partir de la fin des années 2000, de nouvelles approches apparaissent, fondées sur l’utilisation non d’hépatocytes matures – difficiles à se procurer en quantité suffisante et très fragiles – mais de cellules souches pluripotentes humaines induites (hiPSC) ou de progéniteurs pouvant se différencier en plusieurs types cellulaires hépatiques – dans le cas d’iLite, les voies de différenciation des progéniteurs de la lignée cellulaire utilisée, la lignée HepaRG, sont au nombre de deux : les hépatocytes et les cholangiocytes (les cellules épithéliales qui tapissent notamment les vaisseaux de l’arbre biliaire). L’idée est de contrôler la différenciation de ces cellules pluripotentes vers les différents types cellulaires d’intérêt, en les combinant au sein d’un bioréacteur3 avec des biomatériaux et des facteurs de croissance ou de différenciation. Il est attendu que les types cellulaires s’auto-organisent pour former des structures 3D appelées sphéroïdes et se différencient mieux et plus rapidement, dans cet environnement complexe, pour atteindre des fonctions aussi proches que possible de celles des cellules matures. Il s’agit dans iLite d’amener les cellules utilisées le plus près possible du stade hépatocytaire ou cholangiocytaire. Les sphéroïdes sont intégrés dans des billes d’alginate (il s’agit d’un biopolymère choisi pour ses propriétés mécaniques intéressantes, et que l’on peut fonctionnaliser à l’échelle nanométrique par le greffage spécifique de molécules d’intérêt). Ces billes sont alors mises en suspension dans le bioréacteur.

3. Il s’agit d’une enceinte confinée dans laquelle on peut contrôler/piloter l’environnement chimique et physique.

La reconstruction cellule par cellule d’un organe aussi complexe que le foie étant encore à ce jour totalement hors de portée, les approches ascendantes calquées sur le principe de la manufacture moléculaire de Drexler (il faudrait plutôt dire ici « manufacture cellulaire ») sont d’emblée disqualifiées : dans le projet iLite par exemple, il s’agit plutôt de construire des building blocks séparément – sous-types de tissus hépatiques, réseau vasculaire, arbre biliaire – et de les assembler après coup. Or, si cette approche semble à première vue raviver une approche partes extra partes d’inspiration cartésienne, en réalité ce n’est pas le cas, ce que démontre la mise en perspective historique de la démarche d’ensemble d’iLite. En effet, les approches des années 1950 à 1970 visaient prioritairement à suppléer à une série de fonctions normalement accomplies par le foie quand il est sain. Les stratégies actuelles visent plutôt à suppléer un organe défaillant caractérisé par une architecture 3D hautement spécifique. La différence entre « suppléer » un organe et « suppléer à » des fonctions accomplies par cet organe peut paraître ténue ; elle est en réalité essentielle : dans le second cas, il s’agit de concevoir un dispositif capable d’accomplir certaines fonctions identifiées comme étant celles du foie. L’organe est une sorte de petite machine. Dans le premier cas, il s’agit de concevoir un dispositif qui doit reproduire au plus près l’organisation du foie. D’une approche à l’autre, la relation de priorité entre structure et fonction se trouve inversée : les premières tentatives priorisaient la fonction, il s’agissait de mettre au point un dispositif artificiel ou hybride capable d’y suppléer ; les approches actuelles priorisent plutôt l’organisation (la structure) et ses propriétés (biochimiques, mécaniques, etc.), la fonction devant découler de celles-ci. Il est certes possible de soutenir que, dans les deux cas, l’organe est vu comme une machine ; toutefois, ce n’est pas du tout le même concept de machine qui est mobilisé : dans le cas de la suppléance à des fonctions défaillantes préalablement identifiées, l’organe est conçu comme une machine définie par les fonctions qu’elle doit accomplir ; dans le cas de la construction d’un organe bioartificiel, l’organe est vu davantage comme un système complexe défini avant tout par des propriétés topologiques, biochimiques et mécaniques très spécifiques. Cette distinction est bien traduite par les concepts simondoniens d’abstraction et de concrétisation [Simondon 1958] : l’ingénieur qui procède abstraitement cherche à tout prix à réaliser une ou des fonctions prédéfinies ; l’ingénieur qui pense concrètement cherche plutôt à créer l’environnement qui permettra à la machine d’assurer ses propres conditions de fonctionnement.

Il convient en outre de bien distinguer, dans le cadre de la visée de suppléance de l’organe (et non de suppléance « à » des fonctions de l’organe), entre deux conceptions extrêmement différentes. La première conception consiste à récupérer dans le vivant les éléments fonctionnels du foie : les types cellulaires et en particulier les hépatocytes matures, mais aussi la matrice extracellulaire biologique préalablement décellularisée (c’est-à-dire nettoyée de toutes ses cellules) avant de pouvoir y ensemencer les cellules. L’alternative à la décellularisation de l’organe est de construire artificiellement une matrice extracellulaire biomimétique, souvent à partir de collagène. La seconde conception consiste, comme dans le projet iLite, à utiliser non des hépatocytes matures, mais des progéniteurs, et à créer un environnement favorisant leur maturation finale. Par opposition à une stratégie de pure fabrication partes extra partes d’inspiration cartésienne, les deux conceptions évoquées ici relèvent bien de ce que Raphaël Larrère avait identifié comme une démarche de « pilotage de processus naturels », par opposition à une définition de l’activité technique comme « fabrication » [Larrère 2002]. Dans la première conception (récupération d’éléments déjà constitués et fonctionnels du foie), le pilotage porte sur les conditions nécessaires au maintien de la viabilité et de la fonctionnalité des cellules matures dans un microenvironnement contrôlé ; dans la seconde conception, le pilotage porte sur le processus de différenciation des progéniteurs et sur leur auto-organisation 3D, dans un microenvironnement également contrôlé. La différence peut sembler minime, elle est cependant bien réelle. La première conception consiste à concevoir le foie bioartificiel à partir d’éléments déjà fonctionnels, le problème étant de les intégrer, de les faire interagir et de les faire coopérer. La seconde voie du pilotage repose sur l’idée qu’un organe est un système issu d’une genèse à partir d’un état, ou d’une situation, dans laquelle ses composants fonctionnels ne sont pas préconstitués. Le projet iLite est un mixte des deux.

Cette dualité de stratégies d’ingénierie peut au demeurant se traduire par les concepts respectivement d’hylémorphisme pour caractériser la première conception, et d’individuation pour caractériser la seconde, que Simondon avait mis en tension [Simondon 2005]. L’hylémorphisme est la doctrine métaphysique selon laquelle toute réalité est issue de la rencontre entre une forme et une matière. L’approche consistant, soit à décellulariser une matrice et à la réensemencer, soit à fabriquer une matrice à base de biopolymère pour y déposer les types cellulaires d’intérêt, relève de l’hylémorphisme : il s’agit d’orienter l’auto-organisation des cellules en leur imposant une forme prédéfinie. Il convient de souligner toutefois que ces approches par scaffold biomimétique ont considérablement évolué depuis les années 1980. Progressivement a émergé l’idée de fonctionnaliser des biopolymères de façon à ce qu’ils soient reconfigurés par les cellules à mesure que celles-ci prolifèrent et s’auto-organisent : dans ce cas, la distinction entre la forme et la matière se brouille, la matière (les cellules) se guidant sur une forme qu’en retour elle constitue elle-même, par réagencement du polymère. L’individuation désigne quant à elle le processus par lequel tout être individué, quel qu’il soit (comme un organe par exemple), est issu d’une genèse à partir d’un système de réalité où rien de ce qui constitue cet être individué n’est donné à l’avance – un système de réalité préindividuelle, comme dit Simondon.

Le processus de morphogenèse du foie est contemporain du processus par lequel se constituent les composants du foie : c’est le même processus. Le projet iLite relève d’une combinaison des deux approches hylémorphique et par individuation. Comme déjà dit, il est en effet question dans le projet d’amener les progéniteurs le plus près possible de l’état hépatocytaire dans le gel d’alginate : ce sont donc des éléments déjà partiellement fonctionnels qui sont utilisés, ce qui laisse penser qu’iLite reste tributaire d’une stratégie plutôt hylémorphique. Toutefois, l’objectif est bien d’arriver à intégrer dans l’alginate des cellules capables de s’auto-organiser en sphéroïdes tout en continuant de se différencier et de proliférer. La concomitance des deux processus de différenciation et d’auto-organisation 3D indiquerait alors qu’un processus d’individuation est à l’œuvre.

4 Vers une (nano)ingénierie de la convenance et du soin

À partir de ce bref aperçu historique sur la (bio)construction hépatique, il est possible de tirer trois enseignements concernant l’utilisation qui est faite des nanotechnologies dans ce domaine.

Premièrement, il n’y a pas beaucoup de sens à parler de façon générale de l’utilisation des nanotechnologies dans l’ingénierie du foie bioartificiel. Tout dépend de la stratégie adoptée. Les approches par encapsulation d’hépatocytes matures d’origine xénogénique (hépatocytes de porc, de singe, de chien, etc.) ou homologue (humaine) pourront tirer profit des nanotechnologies pour la structuration et la fonctionnalisation des biomatériaux. Les approches priorisant la fonction de détoxification pourront tirer parti des structures nanoporeuses. Les approches bioartificielles consistant à « faire pousser » (même incomplètement) les éléments fonctionnels de l’organe dans des bioréacteurs utiliseront les technologies de nanopatterning pour le design de matrices extracellulaires 3D nanostructurées. Les nanotechnologies n’orientent pas le domaine : elles sont sous-déterminées par rapport à lui.

Deuxièmement, l’histoire du foie artificiel se révèle être non pas l’histoire d’une « science appliquée » – selon une conception linéaire du progrès allant de la connaissance scientifique vers les applications pratiques – mais celle de croisements successifs de filières techniques largement indépendantes (les biomatériaux, la culture cellulaire, la cryoconservation, la modélisation, etc.). La convergence qui a lieu dans le domaine de la bioconstruction hépatique ne correspond pas du tout à celle défendue par le programme NBIC : il s’agit beaucoup plus de saisir des opportunités et d’intégrer progressivement des compétences techniques que de rechercher une vision unifiée du réel dans tous les ordres de grandeur. Les exemples abondent et iLite en fournit plusieurs.

Le comportement des hépatocytes, in vivo et ex vivo, n'est pas encore complètement compris. Le processus de différenciation des progéniteurs en hépatocytes et cholangiocytes non plus. Les cellules de la lignée HepaRG présentent des avantages (elles s'auto-organisent dans les billes, ce que ne font pas par exemple les cellules iPS), mais elles présentent aussi une limite importante : elles ne peuvent pas être, pour des raisons réglementaires, utilisées sur des patients humains, y compris lorsque le sang n'est pas directement en contact avec elles. En cas de succès du protocole, il faudra par conséquent tout recommencer en utilisant des cellules validées par les autorités de régulation.

Par ailleurs, les premières manipulations effectuées dans le work package du projet, consacré à la mise au point des bioréacteurs, ont montré des résultats inattendus : les billes d'alginate, pourtant bien caractérisées, se sont dégradées, pour des raisons qui restent à déterminer. La difficulté vient aussi des modèles animaux. Il est notoire que l'extrapolation aux humains des résultats validés sur des animaux est particulièrement délicate et sujette à caution. De surcroît, compte tenu de la difficulté à se procurer des cellules en grande quantité ainsi que des coûts engendrés, iLite prévoit des tests sur des petits animaux (des rats) et non sur des gros comme le porc. Il ne peut s'agir que d'une première phase, en cas de résultats concluants les tests devront être reproduits sur les gros animaux, dans le cadre d'un nouveau protocole et, sans doute, avec de nouvelles difficultés. Sans compter que pour faire les tests sur les petits animaux, il a fallu réduire en taille les composants déjà disponibles sur le marché et utilisés dans des dispositifs existants d'épuration extracorporelle, de type dialyse hépatique (pratiquée en routine clinique), et mobiliser pour cela des industriels acceptant de fournir ces composants miniaturisés.

En résumé, la mise au point des bioréacteurs – qui n'est qu'une partie du projet iLite – témoigne d'une démarche de convergence technologique qui ressemble beaucoup moins à la marche triomphale des NBIC, qu'à une activité de négociation délicate avec des biomatériaux qui perdent leurs propriétés, avec des cellules capricieuses, avec des réglementations contraignantes, avec des animaux qui n'ont jamais la stabilité attendue des modèles, avec des industriels, avec des médecins qui finalement ne savent peut-être pas si clairement que cela quelles sont exactement les performances que devrait avoir le dispositif bioartificiel.

Troisièmement, et corrélativement, les approches les plus récentes ne souscrivent pas du tout à la vision d'une ingénierie en possession de la loi générale de ce qu'il s'agit de reconstruire (le foie biologique). On ne connaît pas toutes les fonctions du foie, et on ne sait pas comment le vivant s'y prend pour générer un foie. Les partenaires du projet iLite contournent la difficulté en se donnant à eux-mêmes une règle de conception à partir d'un cas particulier (fourni par la modélisation), et s'assurent au maximum de la convenance de cette règle avec la règle générale, et inconnue à ce jour donc, qui régit la morphogenèse et la physiologie du foie biologique. Cette convenance est approchée, pas à pas, par comparaison du modèle et des données expérimentales.
Dans la terminologie philosophique (kantienne), un jugement auquel le cas particulier ainsi que la règle générale sont donnés est appelé jugement déterminant : il consiste à subsumer le cas particulier sous la règle connue. Un jugement auquel seul le cas particulier est donné, mais non la règle générale, est appelé jugement réfléchissant : la faculté de juger réfléchissante se donne à elle-même la règle de généralisation empirique. Les ingénieurs du foie bioartificiel pratiquent ainsi une forme de jugement réfléchissant, de prudence réflexive qui les amène à exercer un contrôle minutieux de leur démarche par référence aux caractéristiques topologiques (cause formelle) et physiologiques (cause matérielle) de ce qu'ils cherchent à reconstruire.

En outre, le critère ultime pour la validation de la démarche réside dans la capacité du futur foie bioartificiel extracorporel à empêcher la survenue du coma suite à une hépatite fulminante chez un(e) patient(e) : autrement dit, l'ingénieur n'est pas préoccupé uniquement par la validité analytique de son dispositif (le dispositif assure-t-il comme il faut ses fonctions de détoxification, de synthèse de protéines, de régulation, de transport et d'écoulement ?), mais aussi par sa validité clinique (combien de temps le dispositif réussit-il à stabiliser un patient ?). Il n'est pas question à ce jour, et sans doute pour longtemps encore, de viser la mise au point d'un foie bioartificiel qui puisse concurrencer la greffe hépatique : dans les cas les plus graves, où le foie ne se régénère pas ou pas assez vite, celle-ci reste le traitement de référence. La modestie semble de rigueur. Dans ce contexte, l'objectif n'est pas de construire le dispositif optimal, le meilleur possible, mais un dispositif capable de couvrir le délai d'attente entre la prise en charge et la greffe. Cette visée du soin n'est pas extérieure à la démarche d'ingénierie : elle l'oriente en faisant rechercher les spécifications techniques qui souscriront aux contraintes, organisationnelles mais aussi réglementaires, du système de soin.

Certes, cette préoccupation pour le soin, inscrite dans la démarche d'ingénierie elle-même, ne couvre pas tous les problèmes. Une ingénierie peut être orientée par la finalité du soin, cela ne garantit pas de facto que le dispositif sera utilisé de façon appropriée, c'est-à-dire mis au service d'une démarche passant par un jugement contextuel, réfléchissant, capable d'appréhender la situation de soin dans sa globalité. Comme le pensaient les Grecs, en la matière seul l'usager est juge, c'est-à-dire le médecin et, in fine, la/le patient(e).

5 Conclusion

En résumé, le plus grand tort du discours des NBIC est de livrer une vision totalement erronée de l'ingénierie et de la convergence technologique telles qu'elles se pratiquent effectivement. Loin d'être ce conquérant qui progresse avec la sécurité de celle/celui qui possède la règle générale du réel dans son intégralité, l'ingénieur fait plutôt l'épreuve d'une incomplétude, par le bas et par le haut. Par le bas, elle/il ne connaît justement pas la loi générale de morphogenèse et d'organisation de ce qu'elle/il cherche à reconstruire, et son matériau (les cellules) se révèle difficile à maîtriser. Par le haut, son activité s'insère dans un tout plus vaste dont elle/il n'a pas toujours la clé : le système de soin, et in fine les patient(e)s en chair et en os. L'ingénierie des organes bioartificiels semble ainsi beaucoup moins animée par l'esprit de conquête, que par une exigence de juste appréciation et une recherche de convenance (au sens de ce qui convient, compte tenu de leurs besoins et exigences propres, aussi bien aux cellules qu'aux médecins et aux patient(e)s). En somme, cette ingénierie n'est pas si étrangère que cela à la prudence des Grecs et, de ce fait, elle contribue peut-être à inscrire la convergence technologique dans un horizon qui n'est pas du tout celui du programme NBIC, et qui consiste à adopter une attitude de metrion plutôt que de metron.

Remerciements

Remerciements aux partenaires du projet de Recherche Hospitalo-Universitaire RHU iLite - Innovations in LIver Tissue Engineering, financé dans le cadre du PIA (ANR16-RHUS-0005).

Bibliographie

Bensaude-Vincent, Bernadette [2011], Materials as machines, dans : Science in the Context of Application, édité par M. Carrier & A. Nordmann, Dordrecht : Springer Netherlands, 101–111, doi : 10.1007/978-90-481-9051-5_7.

Bensaude-Vincent, Bernadette & Guchet, Xavier [2007], Nanomachine : One word for three different paradigms, Techné : Research in Philosophy and Technology, 11(1), 71–89, doi : 10.5840/techne20071117.

Carpentier, Benoît, Gautier, Aude et al. [2009], Artificial and bioartificial liver devices : present and future, Gut, 58(12), 1690–1702, doi : 10.1136/gut.2008.175380.

Drexler, K. Eric [1986], Engines of Creation. The Coming Era of Nanotechnology, New York : Anchor Books, trad. fr. par M. Macé, révisée par Th. Hoquet, Engins de création : l’avènement des nanotechnologies, Paris : Vuibert, 2005.

Kiley, John E., Welch, Harold F. et al. [1956], Removal of blood ammonia by hemodialysis, Proceedings of the Society for Experimental Biology and Medicine, 91(3), 489–490.

Larrère, Raphaël [2002], Agriculture : artificialisation ou manipulation de la nature ?, Cosmopolitiques, 1, 158–173.

Marchi, Florence, Castet, Julien et al. [2010], Le concept du NanoLearner : Les mains dans le Nanomonde de l’université vers le grand public, J3eA, 9, 0014, doi : 10.1051/j3ea/2010017.

Naruse, Katsutoshi, Tang, Wei et al. [2007], Artificial and bioartificial liver support : A review of perfusion treatment for hepatic failure patients, World Journal of Gastroenterology : WJG, 13(10), 1516–1521, doi : 10.3748/wjg.v13.i10.1516.

Rancière, Jacques [2000], Le Partage du sensible : esthétique et politique, Paris : La Fabrique.

Roco, Mihail C. [2011], The long view of nanotechnology development : the National Nanotechnology Initiative at 10 years, Journal of Nanoparticle Research, 13(2), 427–445, doi : 10.1007/s11051-010-0192-z.

Roco, Mihail C. & Bainbridge, William Sims (éds.) [2003], Converging Technologies for Improving Human Performance. Nanotechnology, Biotechnology, Information Technology and Cognitive Science, Dordrecht : Springer, doi : 10.1007/978-94-017-0359-8.

Schiefsky, Mark J. [2007], Art and nature in ancient mechanics, dans : The Artificial and the Natural : An Evolving Polarity, édité par B. Bensaude-Vincent & W. Newman, Cambridge, Mass. : MIT Press, 67–108.

Schwartz, Yves [2008], Le travail dans une perspective philosophique, Ergologia, 0, 121–154.

Simondon, Gilbert [1958], Du mode d’existence des objets techniques, Paris : Aubier, 2012.

—— [2005], L'Individuation à la lumière des notions de forme et d'information, Krisis, Grenoble : Jérôme Millon.

Solano-Umaña, Victor, Vega-Baudrit, José Roberto et al. [2015], The new field of nanomedicine, International Journal of Applied Science and Technology, 5(1), 79–88.

Takahashi, Tsuyoshi, Malchesky, Paul S. et al. [1991], Artificial liver. State of the Art, Digestive Diseases and Sciences, 36(9), 1327–1340, doi : 10.1007/BF01307531.

Vernant, Jean-Pierre [1965], Remarques sur les formes et les limites de la pensée technique chez les Grecs, dans : Mythe et pensée chez les Grecs. Études de psychologie historique, Paris : Éditions La Découverte, 302–322, 1985.

Perspectives

Lessons from the Land of Atoms and Molecules

Chris Toumey University of South Carolina (USA)

Résumé : Lorsque les sciences humaines et sociales analysent les nanotechnologies, que révèlent-elles de ces disciplines ? Ici, je propose six éléments de réponse. Tout d'abord, nous pouvons nous demander si notre travail est issu de travaux antérieurs sur d'autres sujets scientifiques, tels que les organismes génétiquement modifiés ou le programme ELSI du projet du génome humain. Deuxièmement, comment jugeons-nous les promesses extravagantes associées aux nanotechnologies ? Troisièmement, que concluons-nous sur les origines des nanotechnologies ? Après cela, le problème de la définition des nanotechnologies. Ensuite, la question de savoir s'il aurait pu ou dû exister un consensus sur l'éthique en nanotechnologie. Et enfin, la responsabilité de communiquer nos découvertes et nos points de vue aux communautés scientifiques et d'ingénieurs qui créent les nanotechnologies.

Abstract: When the humanities and social sciences examine nanotechnology, what does this reveal about those disciplines? Here, I cover six topics. First, we can ask whether our work is derived from earlier work on other scientific topics like genetically modified organisms or the ELSI program of the Human Genome Project. Secondly, how do we judge the extravagant promises that were attached to nanotech? Third, what do we conclude about the origins of nanotech? After that, the problem of defining nanotechnology. Then the question whether there could have been, or should have been, a consensus on ethics in nanotech. And finally, a responsibility to communicate our findings and insights to the scientific and engineering communities that make nanotech happen.

Philosophia Scientiæ, 23(1), 2019, 139–150.

1 Introduction

When I was a graduate student in anthropology many years ago, I tried to imagine how I might find my own place in the social sciences. One day I heard a story that taught me an important lesson about this part of academia. The story is doubtless apocryphal, and although I cannot guarantee its truth, it nevertheless reminds me that we need to have a sense of why we study nanotechnology. The story dates from the late nineteenth or early twentieth century, and concerns a college in the United States which felt that it needed to enhance its teaching of the social sciences. Funds were raised and a building was erected to house sociology, psychology, and perhaps some other disciplines. The building now needed an epigram to be carved in stone above the entrance to remind its occupants of its mission with respect to both teaching and learning. The social scientists suggested the second half of a couplet from Alexander Pope’s 1734 Essay on Man:

Know Then Thyself, Presume not God to Scan. The Proper Study of Mankind Is Man.

I like the way this justifies the social sciences, especially if "man" is taken to mean mankind, and not only one gender. The Protestant trustees of the school did not approve of this idea, as to them the business of governing the school was a Christian mission. The couplet from Pope seemed to them irreligious, perhaps bordering on atheism. That would not do. So they reached into their own values and knowledge, with the result that they ordered a different epigram, namely, the question from the beginning of Psalm 8:4-6:

What Is Man That Thou Art Mindful of Him?... For Thou Hast Made Him a Little Lower Than the Angels, and Hast Crowned Him with Glory and Honor...

The moral of this story is that you may have your own reasons for teaching sociology or any other subject, but your employers have their reasons, and theirs count more than yours. From this story I infer that it is good for us to be aware of why we use the perspectives of the humanities and social sciences to study nanotechnology and other topics in science or technology. We should also consider how studying science and technology affects us. So here I offer a series of insights about the ways that we have investigated nanotech over the past seventeen years or so. I understand that my views are very much anchored in my own experiences, but my hope is that my insights will be helpful and relevant to all who have worked in this area. Perhaps my comments will have some value for those who examine other topics of science and technology as well, such as artificial intelligence, synthetic biology, or environmental science, as they walk in the footsteps of those who have studied nanotech.

About two years after I arrived at the University of South Carolina in 2000, a group of faculty in the humanities received a generous grant from the U.S. National Science Foundation to study nanotechnology. When they first invited me to join the group, I declined because I was trying to launch a very different research project for myself. But the nanotech research slowly earned more and more of my curiosity. By the summer of 2003 I was wholeheartedly involved, and happily so. Almost all of my research and writing since then has addressed societal and cultural issues in nanotech. I write a humanistic commentary on nanotech four times a year in the journal Nature Nanotechnology, plus peer-reviewed papers for journals, encyclopedias, and edited volumes. At this point I have had approximately ninety professional publications on nanotech. So then, here is what I think:

2 Models for the ethics of nanotechnology (SEIN): ELSI or GMOs?

First, there is the question of how our work on nanotech derives from earlier work on other topics in science and technology. There are two principal accounts. Some say the relevant precedent was a series of controversies over genetically modified organisms, or GMOs, from the 1990s. Many in Western Europe rejected the idea of GMOs in the food supply out of hand, with the result that GMOs have been severely restricted in Europe. This suggested that the E.U. and some member states which felt optimistic about nanotech were concerned that nano might share the fate of GMOs. Supposedly they felt that investigations of nanotech by the humanities and social sciences would lubricate the friction between nanotech and its critics, that is, mitigate opposition to nanotech.

And there was a competing account on the other side of the Atlantic. I heard that the government-funded program of "ethical, legal and societal implications" (ELSI) from the Human Genome Project was the template for "societal and ethical implications of nanotechnology" (SEIN), as the National Science Foundation labeled that work. At a workshop on nanotech in Washington DC in April 2007, I asked a panel of three members of the U.S. , plus an aide to the Chair of the House Science Committee, whether the inspiration for SEIN was ELSI or GMOs. All of them claimed that it was ELSI. It was known that the ELSI work was not perfect, but if SEIN was to be the child of ELSI, then it was intended that the child would be wiser than the parent.

I am saying that our interest in nanotech does not arise from nothing. It might well be that scholars in the humanities and social sciences on one side of the Atlantic wanted SEIN to be ELSI.2, while those on the other shore expected it to be GMO.2. There were conditions that preceded and informed the study of nanotechnology and they were not the same in the United States as in Western Europe. I believe that conferences and other venues eventually smoothed out those differences, but it is not a bad thing to be self-aware about our early motives—and our different motives—for examining a topic in science and technology.

Secondly, we would be wise to see that sometimes our career choices are steered by herd mentalities. Before we studied nanotech, many studied either ELSI or GMOs; after funding for SEIN started to dwindle, many migrated to studies of synthetic biology, also known as SynBio, or to examine Artificial Intelligence. One can see these changes in the venues for disseminating our work. The journal NanoEthics focused closely on nanotech for several years, and then expanded to include synthetic biology and other topics. S.NET was originally the Society for the Study of Nanotechnology and Emerging Technologies; now its acronym means New and Emerging Technologies. The acronym is the same, but the topical interest is broader. Both the journal and the society continue to include the topic of nanotech, but no longer exclusively so. It is not wrong for us to study any of these scientific topics, but we should bear in mind that our work is also a social-science phenomenon. Partly this is because we need to discover where we will find the next source of funding for our work, and partly it is because we share some visions of what is the next scientific topic worth studying. There is no shame in seeing what our colleagues see, nor in caring about what our colleagues care about. But we need to see that when we move together from ELSI to SEIN to SynBio, we are doing this as a community and we are part of something bigger than our individual selves. This academic behaviour is not a result of totalitarian politics or of religious fanaticism, but it is still a group phenomenon.

3 The extravagant promises of nanotechnology

Next I have some ideas about the particulars of our work on nanotechnology. Early on we invested much of our interest in Eric Drexler's vision of nanotech. We felt that we needed to understand the science to the best of our abilities, and also that we ought to comprehend the popular visions of nanotech. Drexler's vision was the primary entry for nonscientists who became seriously interested in nanotech. We did not have to agree with Drexler, but we needed to take him seriously to the extent that his ideas influenced a large population of nonscientists.

Subsequently it became clear that almost none of the scientific work actually being done in nanotechnology was in line with Drexler's vision. There was a wide gap between, on the one hand, Drexler's sense of nanotech and, on the other, the experiments, discoveries, publications, patents, and applications that have made nanotech so exciting. It appears that Drexler's vision has had a negligible influence on the scientists and engineers who make nanotechnology happen [Toumey 2014]. Some scientists in nanotech disagree with Drexler; some ignore him; some have simply never heard of him. Eric Drexler continues to insist that he sees what one should see, and that those who see otherwise are misguided [Drexler 2013], [Drexler & Smalley 2003]. Even if his stance barely affects the science, if at all, those of us in the humanities and social sciences accept a continuing responsibility to consider how nonscientists know and judge nanotech. Whether we like it or not, Drexler's writing, especially his 1986 book, Engines of Creation [Drexler 1986], remains extremely influential among nonscientists who become curious about nanotech. One might say that we wasted our time on a wide-eyed visionary who is largely ignored by the scientists and engineers in nanotech. But if there is a difference between Drexler's vision and the science that happens in working labs, and if the values and concerns of nonscientists constitute a legitimate research topic, then we need to recognize that the views of Eric Drexler are still the principal portal through which most nonscientists first encounter nanotech and gain a sense of what it might be.

A related point is that when nanotechnology started to receive a great deal of attention, respect and serious funding circa 2000, it benefited greatly from extravagant promises. This would be the next industrial revolution, it was said. Carbon nanotubes would displace everything made of steel because they are six times lighter and a hundred times stronger than steel. Nanomedicine would cure cancer and greatly extend the human lifespan. The predictions concerning the benefits of nanotech were grand enough to make sensible people become giddy. And then the excessive expectations evaporated. Many good things have happened in nanotech, and many more are coming down the road, sooner or later. I believe that sometime in the future, nanomedicine is going to deliver some very effective therapeutics for cancer that will kill the tumors without harming non-cancerous cells: targeted drugs will displace chemotherapy with all its dreadful side effects. But this is still in the future.

There is a parallel story about extravagant expectations which is worth remembering. In 2003, the Director of the National Cancer Institute (a prominent part of the National Institutes of Health) predicted that suffering and death from cancer would be eliminated by the year 2015 [von Eschenbach 2003].
Nanomedicine was incorporated into that vision. It appears that the Director's promise was increasingly ignored or forgotten as 2015 approached without achieving what was predicted, but some people learned the lesson that it was unwise to present an unrealistic expectation. When nanotech failed to quickly deliver the amazing things it promised, policy makers and managers of government funding programs began to have second thoughts about investing so much so soon in this area. We learned another lesson here: extravagant promises might result in large amounts of dollars, euros, or pounds at first, but if you cannot produce the results you promise, then you discredit yourself, and it is likely that your sponsors will choose to invest their financial resources elsewhere. This is a tricky situation. If your vision of a certain science or technology is modest because you want to be realistic, then your grant proposals may be less competitive than those that shine with glittering optimism. But if you promise too much and deliver too little, then that mistake will catch up with you sooner or later. This affected the study of nanotech by the humanities and social sciences as much as by the natural sciences.

4 The mythology of the origins of nanotechnology

Another lesson I learned was that there are competing narratives about the origin of nanotechnology. The premier origin myth says that nanotech began with Richard Feynman's 1959 talk "There's Plenty of Room at the Bottom" [Feynman 1959]. Indeed, the U.S. National Nanotechnology Initiative states on its website that Feynman is the father of nanotechnology. I found that certain persons in the CalTech (California Institute of Technology) community are especially devoted to this account, largely because Feynman spent most of his teaching career there. He was a brilliant Nobel Laureate and charismatic in a larger-than-life way.

I have no reason to dislike Richard Feynman or his remarkable genius, but I prefer a different account of what made nanotech happen. The group that I joined at the University of South Carolina was centered on faculty in the Department of Philosophy. One of their principal contributions was to examine epistemological issues in the relationship between nanoscale objects—especially atoms and molecules—and images of those objects produced by scanning probe microscopy. This included the Scanning Tunneling Microscope (STM) and the Atomic Force Microscope (AFM). I like to summarize those issues by saying that an image of an atom or a molecule cannot possibly look like a real atom or molecule. Those epistemological questions point us to the invention of the Scanning Tunneling Microscope in 1981 and the invention of the Atomic Force Microscope in 1986 as the foundational moments in the field. Both of those inventions came from IBM scientists, namely, Gerd Binnig and Heinrich Rohrer at IBM Zurich for the STM, and Binnig for the AFM. So then, if nanotech is made possible because of the STM and the AFM, then the Feynman/CalTech story is less compelling, and the IBM account is primary.

And on that point, I should mention that my own most prominent contribution to the study of nanotech was, in my opinion, my 2008 paper on the history of Richard Feynman's 1959 talk [Toumey 2008]. His talk was published six times in the following three years, which is truly impressive, but after this it was rarely cited in the twenty years between 1959 and 1979. I thought that in my paper I had shown that Feynman's talk was not the origin of nanotech. This science has multiple facets and multiple origins, and is not limited to that which Richard Feynman said in December 1959. Thus, without intending to endorse one narrative or another, I must admit that my view is more favorable to the IBM story than to the CalTech version. But my account seems to have sunk without a trace. It happens again and again that the origin of nanotech—supposedly a singular origin—is said to be Feynman's talk. Chagrin and bemusement: these are the feelings that course through me when I hear that nanotech began with Feynman.

This is not to say that scientists in nanotechnology spend their time choosing between the CalTech story and the IBM narrative. But if we are attentive to nuances in competing narratives—do not the humanities lovingly nurture narratives and their nuances?—then the story of the origin of nanotech becomes more interesting because we have two incommensurate accounts to weigh against each other. I have my reasons for preferring the IBM version, partly because I learned from scholars who focused on issues of epistemology involving images produced by the STM and the AFM.
It has been a great pleasure for me to learn about those issues at the feet of certain philosophers and artists. But I do not deny that my own perspective derives from my interactions with the team at the University of South Carolina. I understand that other scholars in other situations might have reasons to embrace different accounts of the origin of nanotechnology. This is not something that can easily be condensed into a simplistic linear history of this multidisciplinary field of study.

5 The problematic definition of nanotechnology

Next question: If we have committed ourselves to studying nanotechnology, then do we know what it is that we are going to study? What exactly is nanotech—this domain that apparently looms so large on our intellectual horizons? More specifically, what is nanotechnology as understood by those of us in the humanities and social sciences?

For this question we need to recognize three features built into nanotech that shape our knowledge and intentions. First, nanotechnology is not defined by a single product like a better smartphone or a singular process like the polymerase chain reaction. On the contrary, it embraces every scientific and engineering discipline and subdiscipline, plus certain technologies, for controlling matter that is measured at the scale of the nanometer. It includes atomic physics and subatomic physics; but not to the exclusion of organic chemistry and inorganic chemistry; and molecular biology; plus microelectronics; also materials science; in addition, scanning probe microscopy and electron beam microscopy; and many more fields.

At the University of South Carolina, our tutors for the science underlying nanotech were mostly from the Department of Chemistry. I hear that at other universities, the scholars in the humanities and social sciences get their scientific knowledge mostly from professors of physics. And in other cases, the focus is on microelectronics. I do not mean to say that one science is a more legitimate purveyor of nanotech than another. But it could be that, in the early days of our work, some of us saw nanotech primarily as an exercise in chemistry, while others saw it as an extension of physics, and so on. In fact the person in organic chemistry could well have more in common with her counterpart in molecular biology than with the person down the hall doing inorganic chemistry in the same chemistry department. This also means that a professor might describe his or her work in organic chemistry as such when there is a call for proposals in that field, but she might describe it as nanotech when the call for proposals is labelled as nanotechnology. The same work can be both chemistry and nanotech; or physics and nanotech; or molecular biology and nanotech; and so on. Likewise, scientists can publish their work on nanotech either in classic disciplinary journals or in recently founded nanotech journals. Nanotech is a creature with multiple identities. When the funding agency or the journal says chemistry, then the scientist says I do chemistry; when they say nanotech, the scientist says I do nanotech. This is not intrinsically duplicitous. One's science ultimately speaks for itself, even if the label on the science changes. Nevertheless, this situation complicates the task of saying what nanotech is.

A second problem built into our investigations of nanotech concerns a sense of time. The earliest applications of nanotech were to relatively trivial products like clubs and tennis rackets reinforced with small quantities of carbon nanotubes. The truly impressive applications, like curing cancer or saving the environment, are to be found somewhere in the future. As a result, it is possible for anyone to project their hopes or fears onto the future state of nanotech. We can know what nanotech is like today, but it is not clear that nanotech will be the same in ten or twenty years.

A third condition is this: nanotechnology is an enabling technology, also known as a multipurpose technology platform. Consider the story of the assembly line. When it was invented to put automobiles together, it was not immediately obvious that it would have other purposes. But soon it served as a way to make sewing machines and almost any other product that needed many different parts to be put together in a particular order. This means, again, that it is possible for anyone to project their own expectations onto nanotechnology, not limited by the current state of science and technology. Perhaps this is just a bunch of silly daydreams to the scientists and engineers who make nanotech happen. I can understand this attitude. But if nanotech is going to change our material culture—medical diagnosis and therapeutics, microelectronics, materials science, and many other areas where technology touches us—then it is worth knowing what people hope for from nanotech, and what they fear about it. And the responsibility to discern these hopes and fears falls into the laps of those of us who toil in the humanities and social sciences.

6 A culture of collaboration and referencing in the ethics of nanotechnology

Next I'd like to comment on a way for us to make the best of each other's work. My example is the study of ethics in nanotechnology. I have in hand two hundred published papers on ethics in this area, and there could well be another hundred or so. My collection ranges chronologically from Weil [2001] to Hester, Mullins et al. [2015], and alphabetically from Allhoff [2007] to Wolfson [2003]. By one standard, this is very good. Many people have crafted thoughtful statements about ethics. And quantity of publications seems to be the principal metric by which the U.S. National Science Foundation judges the value of the research it funds.

It is regrettable, however, that few of the later papers build upon the earlier ones. For the most part, these publications on ethics cite each other very little. One would hope that all this work would result in a consensus, or at least a synthesis that highlights the main themes of these many publications. But if most authors ignore most others, then together these papers constitute a loose compendium of stand-alone statements. We can observe the same phenomenon when we consider other topics in the study of nanotech: many excellent scholars have produced much superb work, and have published their findings, but have not shaped that work into a comprehensive vision of what we have learned and what we ought to do next. It is not possible to go back and start over again with our examinations of nanotech, but perhaps this lesson will be useful for the next large-scale studies of science and technology. A consensus or synthesis of research related to a particular topic is worth having and worth doing.

7 The different publics for studies of nanoethics

I will conclude with one more observation, plus a recommendation to go with it. I mentioned earlier that each year I have four short humanistic commentaries in Nature Nanotechnology. This is my platform for showing the scientists and engineers who read this journal that the humanities and social sciences (not limited to my own work) can contribute to our understandings of nanotech.

Recently I completed a project, with valuable help from graduate research assistant Meagan Conway, aimed at discovering whether publications in SEIN actually reach the people who make nanotech happen. We conducted sixty interviews distributed among four populations: academic scientists and engineers in nanotech; persons involved in U.S. government policy for nanotech; persons in nanotech-related start-ups; and, intellectual property attorneys who steer nanotech products through patenting and regulatory processes. We found that only a small fraction of all the SEIN work had come to the attention of persons in those four populations [Toumey & Conway 2017].

This brings us to a two-part question: when this community of ours studies nanotech from the perspectives of the humanities and social sciences, is our work trivial or nontrivial? If it is trivial, then there is no reason to communicate our work to the people who make nanotech happen, especially scientists, engineers and persons in government science policy. Or to anyone else for that matter. We can be entirely happy in our little bubble, enjoying our own little festival of the humanities. But if we are really generating insights that help others to better understand nanotech, then it makes sense to communicate these insights to the people who make nanotech happen. Here I add that I am not the only one in the SEIN community who has had commentaries published in Nature Nanotechnology, which I appreciate. Furthermore, I am partly motivated by an ethos from my home discipline of cultural anthropology. If we benefit in our careers because certain communities permit us to study them, then we have a responsibility to share our results with those communities. For one anthropologist, the community might be a village on an island in the Pacific. For another, it might be a scientific community working in nanotech. The ethos of sharing the results with the community under study applies in both cases.

Now let us generalize to other STS topics. The Nature Publishing Group has approximately thirty monthly journals. Some address classic scientific disciplines, e.g., Nature Physics and Nature Chemistry. Others are devoted to newer, more specialized topics, including biotechnology, nanotechnology, climate change, and sustainability. In this family of journals, there is a venue for writing commentaries for almost any topic of science or technology that one wants to study in STS. In addition, there are trade publications read by scientists and engineers: Technology Review (from MIT); Engineering & Science (from CalTech); Chemical & Engineering News; IEEE Science and Society; and more. I suspect that there are two reasons why many of us in STS hesitate to submit papers to those venues. First, their commentaries are not typically peer reviewed; second, there is much pressure to publish in the journals of one's home discipline, and neither scientific journals nor trade publications have any such value. These two considerations are especially acute for tenure-track assistant professors.
Indeed, some fear that publishing a non-peer-reviewed article in a non-humanities journal can be the Kiss of Death when a committee considers whether one deserves tenure. What a rotten thing it is, if I am right, that a historian or a philosopher or an anthropologist has to choose either to communicate her work to the scientific community she studied, or to maximize her chances for tenure. But not both.

Acknowledgment

The work in this paper was made possible by a grant from the U.S. National Science Foundation (Award 1535060: “The Value of SEIN Research”). The views in this paper are not necessarily those of the National Science Foundation.

Bibliography

Allhoff, Fritz [2007], On the autonomy and justification of nanoethics, NanoEthics, 1(3), 185–210, doi: 10.1007/s11569-007-0018-3.

Drexler, K. Eric [1986], Engines of Creation. The Coming Era of Nanotechnology, New York: Anchor Books, trad. fr. par M. Macé, révisée par Th. Hoquet, Engins de création : l’avènement des nanotechnologies, Paris: Vuibert, 2005.

—— [2013], Radical Abundance, New York: Public Affairs.

Drexler, K. Eric & Smalley, Richard E. [2003], Nanotechnology, Chemical & Engineering News, 81(48), 37–42, http://pubs.acs.org/cen/coverstory/8148/8148counterpoint.html.

Feynman, Richard P. [1959], There's plenty of room at the bottom, in: Miniaturization, edited by H. D. Gilbert, New York: Reinhold Publishing Co, 282–296, http://www.zyvex.com/nanotech/feynman.html.

Hester, Karena, Mullins, Martin, et al. [2015], Anticipatory ethics and governance (AEG): Towards a future care orientation around nanotechnology, NanoEthics, 9(2), 123–136, doi: 10.1007/s11569-015-0229-y.

Toumey, Chris [2008], Reading Feynman into nanotechnology. A text for a new science, Techné: Research in Philosophy and Technology, 12(3), 133–168, doi: 10.5840/techne20081231.

—— [2014], Does scale matter at the nanoscale?, Nature Nanotechnology, 9, 6–7, doi: 10.1038/nnano.2013.289.

Toumey, Chris & Conway, Meagan [2017], Failure to communicate: The problem of sharing STS research with scientific communities, in: The Politics and Situatedness of Emerging Technologies, edited by D. Bowman, A. Dijkstra, C. Fautz, J. Guivant, K. Konrad, C. Shelley-Egan, & S. Woll, Berlin: IOS Press, 25–38.

von Eschenbach, Andrew C. [2003], NCI sets goal of eliminating suffering and death due to cancer by 2015, Journal of the National Medical Association, 95(7), 637–639.

Weil, Vivian [2001], Ethical issues in nanotechnology, in: Societal Implications of Nanoscience and Nanotechnology, edited by M. C. Roco & W. S. Bainbridge, Dordrecht; Boston; London: Springer, 244–251.

Wolfson, Joel Rothstein [2003], Social and ethical issues in nanotechnology: Lessons from biotechnology and other high technologies, Biotechnology Law Report, 22(4), 376–396, doi: 10.1089/073003103769015906.

Mettre cette molécule au travail ?

Christian Joachim NanoScience Group, CEMES-CNRS, Toulouse (France)

Résumé : On retrace rapidement comment la molécule a pris son indépendance grâce notamment à l'invention du microscope à effet tunnel. Depuis le milieu des années 1970, son identité a changé : d'un quelconque indiscernable parmi une multitude (par exemple dans un nanomatériau) à une seule devant embarquer une petite unité de calcul ou délivrer une puissance motrice. Avec des molécule-machines complexes fonctionnant chacune indépendamment, on espère également taquiner la physique quantique autrement qu'en utilisant quelques photons ou un seul atome dans une cavité froide.

Abstract: We briefly retrace the story of how the Scanning Tunneling Microscope has allowed the molecule to establish its autonomy. Since the mid-1970s, the identity of a molecule has changed from being any interchangeable unit in a multitude of identical entities (for example a single molecule within a nanomaterial) to being a specific individual molecule operating as a calculating unit or as an autonomous source of motive power. The complex autonomous molecule-machine promises to allow us to exploit properties predicted by quantum physics in a different way from the effects that depend on a few photons or a single atom in a cavity.

Philosophia Scientiæ, 23(1), 2019, 151–159.

Mettre une molécule au travail ? Non pas collectivement dans un cristal ou une couche moléculaire, mais par elle-même, individuellement. Suivons le fonctionnement du molécule-interrupteur numéro 123985. Fait de quelques atomes, un de ses petits groupements chimiques bascule pour faire entrer une donnée numérique dans la molécule-calculateur faite de quelques milliers d'atomes. « C'est de l'électronique moléculaire, tout est dans les molécules » s'exclame en 1968 Brivaux dans le roman La Nuit des Temps de R. Barjavel [Barjavel 1968]. Par fabrication, le robot constructeur de cette électronique moléculaire a positionné la molécule-interrupteur à quelque 100 picomètres du fil électrique numéro 5674378. Ce fil atomique est d'un seul atome de section. Il a été construit atome par atome sur toute sa longueur à la surface d'un support isolant. Il permet d'apporter l'ordre de commutation de l'échelle du donneur d'ordre à la molécule-interrupteur 123985. Pour que la molécule comprenne cet ordre, il y aura également entre le donneur d'ordre et cette molécule, une conversion entre le très classique « tout ou rien » et de l'information quantique.

Ancrée sur le canal transmembranaire numéro 342675 à la surface d'un globule blanc, voici la molécule-moteur numéro 65435. Elle permet d'obstruer ce canal à volonté en utilisant un programme intégré pour pallier l'attaque d'un virus. Son énergie de rotation provient de l'agitation permanente des longues molécules constituant la membrane cellulaire. Cette énergie thermique est redressée par la structure électronique intime de cette molécule-moteur pour que son rotor tourne pas à pas et dans un seul sens afin d'ouvrir ou de fermer le canal [García-López, Chen et al. 2017].

En nous et d'un poids moléculaire beaucoup plus conséquent que les petites molécule-machines, des macromolécules biologiques travaillent à assurer le bon déroulement de nos fonctions vitales. Dès la fin des années 1970, quelques chercheurs se sont alors intéressés à l'encore plus petit et suffisamment stable qu'est une molécule d'une centaine d'atomes pour pouvoir utiliser ses ressources physiques intrinsèques afin de construire une petite unité de calcul de taille nanométrique [Aviram & Ratner 1974]. Les premières molécule-roues ont été étudiées à l'unité sur une surface support à la fin des années 1990 [Gimzewski, Joachim et al. 1998].

Pour aller explorer ce monde matériel d'en bas où chaque molécule-machine doit fonctionner « comme une horloge », il faut un microscope adapté pour non seulement imager mais également toucher chaque molécule-machine. Pour ce faire, nous disposons maintenant de microscopes à champ proche. Parmi ces microscopes, le microscope à effet tunnel (STM) a été inventé au début des années 1980 [Binnig & Rohrer 1986]. Il s'agissait d'abord d'aider à la poursuite de la miniaturisation des puces électroniques fabriquées à la surface d'un cristal semi-conducteur. Il permet, en effet, de réaliser des images de ce type de surface et des dopants atomiques avec une précision meilleure que 100 pm ce que ne permet pas le microscope électronique (dit à champ lointain) qui fut inventé au début des années 1930 [Knoll & Ruska 1932].

Avec son petit frère, le microscope à force atomique (AFM), ces microscopes à champ proche sont très vite devenus de formidables instruments d'exploration du nano-monde. La science des surfaces en a grandement bénéficié en permettant d'abord et de manière inégalée de déterminer la structure atomique exacte d'un grand nombre de surfaces. L'imagerie des molécules posées ou déposées sur ces surfaces a atteint un tel degré de précision qu'une nouvelle chimie de synthèse est en train de naître avec la synthèse chimique dite « sur-surface » de nouvelles molécules instables ou insolubles en solution en partant de précurseurs de faible poids moléculaire [Gourdon 2008]. Pour le nano-monde, et à la fin des années 1980, le STM est surtout devenu un formidable outil pour écrire atome par atome [Eigler & Schweizer 1990], pour déplacer une seule molécule et par exemple la faire tourner en l'alimentant électroniquement en énergie.

Construites dans les années 2010, les versions modernes à 4 têtes de ces microscopes permettent de nos jours les premières mesures de la conductance d'un fil atomique et bientôt d'une molécule-circuit électronique à multiples entrées-sorties [Kolmer, Olszowski et al. 2017].

Mais pourquoi mettre les molécules au travail une par une tout en respectant l'individualité de leur travail ? On ne sait si les forgerons grecs du bronze de l'horloge d'Anticythère voulaient créer un marché économique des horloges astronomiques ou simplement exercer leur art. On sait par contre que la maîtrise du mouillage et l'invention des caractères mobiles également en bronze par Johannes Gutenberg visaient bien à créer un nouveau marché du document imprimé avant sa fameuse bible. Il en est de même pour la Pascaline de Blaise Pascal, même si ses premiers exemplaires en bois ne permettaient pas une grande diffusion commerciale. C'est également le cas pour l'invention de la triode à vide par Lee de Forest et pour le transistor en semi-conducteur des laboratoires Bell à la fin des années 1940.

La mise au travail d'une seule molécule avait également au départ un objectif technologique : la miniaturisation ultime des composants électroniques. Quand Ari Aviram & Mark Ratner proposent à IBM d'utiliser une seule petite molécule pour un jour remplacer un rectificateur de courant fabriqué alors en silicium, ils ont en vue les marchés économiques futurs, en particulier pour la portabilité des machines électroniques et leur efficacité énergétique [Aviram & Ratner 1974]. Avec l'invention de la microscopie à effet tunnel, sept ans plus tard, également chez IBM et avec force de capitaux, on pouvait donc s'attendre à ce que la technoscience donne toute sa puissance et que les molécules-calculateurs soient désormais entrées en production comme fruits de 40 ans de recherche intensive en électronique moléculaire. Or il n'en a rien été. Nous avons d'abord sous-estimé l'ampleur des défis technologiques nécessaires pour atteindre cet objectif. Nous avons surtout oublié qu'une poussée de fièvre technologique par envie d'exploration d'une limite, sous contraintes de santé publique, économique ou politique, s'accompagne souvent d'un chambardement scientifique.

Au cas où et par anticipation, le mot « nanoscience » est quand même apparu dans les années 1990. Peut-être ce mot a-t-il été forgé trop rapidement. Orpheline dans sa définition première « sciences à l'échelle du nanomètre », la nanoscience est alors assez dépourvue de résultats scientifiques. Pas de nouvelle loi, et trop peu de nouveaux objets d'étude. Elle est donc appelée en renfort pour rassembler des champs technoscientifiques loin de sa définition initiale et ceci sans avoir besoin de l'indépendance moléculaire [Joachim 2005]. Première déviation au milieu des années 1990 : la physique mésoscopique, intermédiaire entre la microphysique et la nano-physique. Deuxième déviation : les nanomatériaux vers la fin des années 1990. Et enfin troisième déviation vers les technologies quantiques aux alentours des années 2010. En effet et pour le temps présent, on s'en sort très bien technologiquement en assemblant un grand nombre de molécules (ou de nanoparticules) pour créer des nanomatériaux aux propriétés nouvelles et donc des produits nouveaux.

Ce foisonnement d'idées à base de nanomatériaux a totalement occulté la prise d'indépendance moléculaire. Le champ des nanosciences et nanotechnologies est alors progressivement devenu le lieu de la physique mésoscopique et des matériaux nano-structurés. On s'en sort également très bien en émulant les comportements quantiques avec des systèmes mésoscopiques. L'espoir de la micro-électronique a maintenant basculé de l'électronique moléculaire vers les calculateurs quantiques [Nielsen & Chuang 2002]. Dans un futur proche, ceux-ci pourraient battre en performances nos bons vieux processeurs en silicium. Ici encore, point n'est besoin d'une seule molécule pour faire le travail. Ces deux vagues technoscientifiques des nanomatériaux, puis des calculateurs quantiques, ont finalement libéré la créativité scientifique en nanosciences de son carcan technologique. Ce qui semblait être un double péché de naissance avec comme objectif final le futur du marché économique de la micro-électronique : électronique moléculaire et invention de la microscopie à effet tunnel est en passe d'être oublié. Il est redevenu possible d'essayer de faire travailler une seule et toujours la même molécule, objet d'étude par excellence de la nanoscience, sans avoir à produire pour demain la puce moléculaire qui va sauver la loi de Moore.

Prenons deux exemples de cette libération : le fil moléculaire et la molécule-moteur. Les fils moléculaires devaient être à la base de l'architecture des processeurs de l'électronique moléculaire [Joachim, Gimzewski et al. 2000]. Chaque fil moléculaire devait réaliser l'interconnexion entre deux composants moléculaires dans une sorte de copie minuscule des circuits électroniques des puces de la micro-électronique actuelle. Il a été très difficile de mesurer la conductance électronique d'un seul et unique fil moléculaire tout en connaissant parfaitement sa composition chimique et sa structure. Mais à force de prouesses expérimentales, les premières mesures ont été effectuées à la fin des années 2000, grâce notamment à la nouvelle chimie « sur-surface » et à l'amélioration de la stabilité du microscope à effet tunnel. Confirmation expérimentale : l'intensité du courant électronique passant au travers d'un fil moléculaire décroît exponentiellement avec la longueur de ce fil. Le taux de cette décroissance est même très grand, à peu près un ordre de grandeur par nanomètre de longueur de fil [Lafferentz, Ample et al. 2009] (voir la formule donnée ci-dessous). Il devient alors difficile de construire un circuit électronique complexe, c'est-à-dire très étendu dans l'espace. L'intérêt technologique pour ces fils moléculaires (et le petit frère : le fil de section atomique stabilisé à la surface d'un isolant) a donc fortement diminué entre la fin des années 1990 et le début des années 2010. On s'est également aperçu que les bonnes vieilles lois des mailles et des nœuds de Gustav Kirchhoff n'étaient plus valables dans un circuit moléculaire. Avec tout ceci l'électronique moléculaire semble bien mal engagée pour révolutionner la micro-électronique. De plus, le transistor en semi-conducteur, composant élémentaire essentiel de toute puce électronique, continue à être miniaturisé pour atteindre de nos jours et en laboratoire, une longueur de la zone active de l'ordre de 5 nm. La seule molécule-transistor ayant montré un gain expérimentalement demande une jonction métal-molécule-métal de 2 nm de long [Joachim & Gimzewski 1997]. La microélectronique rejoint donc l'électronique moléculaire.
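À titre d'illustration seulement (cette mise en équation ne figure pas dans le texte original ; les notations I_0, L et β sont introduites ici), la décroissance exponentielle du courant tunnel évoquée ci-dessus peut s'écrire :

\[
  I(L) = I_0\, e^{-\beta L},
  \qquad
  \beta \approx \ln(10)\ \mathrm{nm}^{-1} \approx 2{,}3\ \mathrm{nm}^{-1},
\]

où I_0 désigne le courant au contact et L la longueur du fil : « un ordre de grandeur par nanomètre » signifie que le courant est divisé par 10 chaque fois que L augmente d'un nanomètre.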
Premier marqueur historique majeur de l’entrée en nanosciences, l’électronique moléculaire version électronique du futur semble donc vouée à disparaître. Elle est avalée à la fois par le rouleau compresseur de la micro-électronique devenue au fil des années « nanoélectronique » et par la technologie nouvelle des calculateurs quantiques dont les bases théoriques ont été jetées par Richard Feynman et David Deutch en 1984 [Feynman 1984], soit dix années après l’article théorique de Aviram & Ratner sur la molécule-diode. C’est justement la confrontation scientifique et technologique entre l’at- terrissage réaliste de l’électronique moléculaire et l’envolée spectaculaire des études et sources de financement sur les calculateurs quantiques qui a entraîné une petite renaissance pour le fil moléculaire. Et si l’ingénierie quantique permettait de concevoir de nouveaux fils moléculaires « quantiques » sans décroissance exponentielle de leur conductance électronique ? La première publication ouvrant cette voie est parue en 2015 [Nacci, Ample et al. 2015]. De cette hybridation entre l’électronique moléculaire et l’ingénierie quantique est en train de naître un nouveau champ de recherche qui forge vraiment une partie de la nanoscience. La nouvelle chimie « sur-surface » est ici très utile car les premiers fils moléculaires de cette nouvelle voie ne peuvent pas ici être synthétisés en solution. Ils sont alors synthétisés en un très petit nombre à la surface d’un métal par diffusion bidimensionnelle de monomères. Ils sont ensuite choisis un par un par la pointe du microscope à effet tunnel après avoir obtenu leur image par la même technique. La réactivité chimique latérale de ces monomères est activée thermiquement. Autre nouveauté, la longueur de ces fils peut atteindre de 50 nm à 100 nm. Les constructeurs de STM développent donc maintenant des instruments à plusieurs têtes pour pouvoir mesurer avec au moins deux pointes, la conductance de ces nouveaux fils moléculaires quantiques. En nanoscience, on rafraîchit même ici ses connaissances en physique quantique en ressortant la théorie des bandes à valeurs complexes, une théorie formulée à la fin des années 1950 par Walter Kohn, prix Nobel de chimie en 1998 [Kohn 1959]. Cette théorie longtemps considérée comme un petit joyau mathématique sans trop d’utilité est maintenant utilisée dans le domaine des fils moléculaires quantiques, pour donner une courbe universelle du taux de décroissance tunnel en fonction des caractéristiques de la structure électronique des fils moléculaires [Nacci, Ample et al. 2015]. On espère également observer une signature vibrationnelle des électrons tunnel transférés le long du fil moléculaire dans une sorte de potentiométrie tunnel à 3 pointes afin de taquiner les ressorts des comportements quantiques. L’effet tunnel résistera-t-il quand on observe un courant tunnel de l’intérieur de la barrière ? On a même conçu une petite molécule ampèremètre à insérer le long du fil moléculaire pour compter en moyenne le nombre d’électrons passant par effet tunnel au travers de ce fil. La molécule-moteur a eu deux naissances expérimentales. La première vient d’une longue tradition de la chimie moléculaire et supramoléculaire à vouloir bien maîtriser le mouvement des molécules en solution, prolongement 156 Christian Joachim des intuitions du biochimiste Paul D. 
La molécule-moteur a eu deux naissances expérimentales. La première vient d’une longue tradition de la chimie moléculaire et supramoléculaire à vouloir bien maîtriser le mouvement des molécules en solution, prolongement des intuitions du biochimiste Paul D. Boyer, prix Nobel de chimie en 1997, qui voyait au milieu des années 1960 dans les changements de conformation de protéines une explication par mécanique moléculaire des mécanismes élémentaires du vivant. Le récent prix Nobel de chimie 2016, attribué à Jean-Pierre Sauvage, James Fraser Stoddart et Bernard Feringa, est une illustration parfaite de cette première naissance vers le milieu des années 1980. Sauvage et Stoddart s’intéressaient alors en particulier à des molécules entrelacées et au mouvement relatif des anneaux en fonction de différentes stimulations chimiques ou optiques [Livoreil, Dietrich-Buchecker et al. 1994]. La deuxième naissance est plus récente et découle de l’invention du STM. Suite à nos études sur les mécanismes physiques produisant le contraste intramoléculaire dans l’image d’une molécule obtenue par microscopie à effet tunnel, nous nous sommes aperçus en 1998 qu’il était possible de mettre à volonté en rotation une seule de ces molécules, d’un diamètre de 1,2 nm [Gimzewski, Joachim et al. 1998]. Bel exemple de nanoscience ! Il nous a fallu alors comprendre les lois de ce mouvement de rotation moléculaire. Il peut être déclenché soit par une petite impulsion électrique appliquée à un endroit précis sur la molécule choisie, soit en déplaçant justement l’axe de rotation de cette molécule de 0,255 nm par manipulation moléculaire avec la pointe du microscope. Nous nous sommes alors posé un grand nombre de questions : une molécule-rotor peut-elle tourner dans un seul sens [Perera, Ample et al. 2013] ? Comment contrôler son sens de rotation ? À quelle vitesse tourne-t-elle sur sa surface support ? Y a-t-il de la friction électronique et/ou mécanique entre cette molécule-rotor et la surface support ? Une molécule peut-elle avoir une puissance motrice [Ohmann, Meyer et al. 2015] ? Quels sont les canaux permettant à l’énergie d’entrer pour fournir la puissance nécessaire à ce travail ? Si plusieurs de ces roues moléculaires sont montées sur un châssis également moléculaire de quelques nanomètres de longueur, les roues peuvent-elles tourner ? Avons-nous assez de place pour monter une molécule-moteur sur le châssis d’un tel nano-véhicule ? Peut-il y avoir transmission d’un mouvement de rotation le long d’une longue chaîne de molécules-engrenages construite molécule par molécule ? De ces interrogations post-1998 est né le champ de la nano-mécanique moléculaire, qui vient également de plein droit remplir le champ de la nanoscience. Chimistes de synthèse et chimistes théoriciens essayent maintenant de concevoir des molécules-moteurs ayant un minimum de puissance motrice. Les physiciens des surfaces imaginent des protocoles expérimentaux afin de mesurer cette puissance en utilisant les nouveaux microscopes à plusieurs pointes, une pointe de microscope à effet tunnel pour alimenter en énergie la molécule-moteur et une pointe de microscope à force atomique pour mesurer la puissance fournie. Certains expérimentateurs ont même commencé à s’interroger sur les limites de la puissance motrice de la plus petite machine matérielle possible, avec un piston constitué d’un seul atome, en utilisant la technologie des atomes froids. Les chimistes théoriciens et leurs collègues physiciens commencent à établir les lois du mouvement du rotor de ces molécules-moteurs et à prédire leur rendement [Echeverria, Monturet et al. 2014]. Le STM a même permis de construire une première crémaillère moléculaire, et la construction d’un train d’engrenages est en cours.
Chaque molécule-engrenage a un diamètre compris entre 1 et 2 nm, 6 dents moléculaires, et son axe de rotation est constitué d’un seul atome métallique. Voici donc les débuts de la nanoscience. L’objet d’étude est une seule molécule, toujours la même. Nous allons la mettre en situation d’étude puis la faire travailler, calculer ou transmettre de l’information. Nos nouveaux protocoles expérimentaux demandent soit une surface, soit un piège optique pour maintenir la molécule-dispositif ou la molécule-machine choisie en situation. Ils requièrent l’invention de nouveaux modes de communication entre la molécule et l’expérimentateur, le donneur d’ordre ou fournisseur d’information. Une molécule apprécie l’échange d’information par voie quantique. Mais nous ne savons pas trop parler le quantique, d’où le constat qu’en molécule-mécanique, les études sur surface sont les plus développées, car les mouvements mécaniques sont alors presque classiques [Echeverria, Monturet et al. 2014]. Il s’agit maintenant d’optimiser la structure électronique intrinsèque de la molécule-machine pour que, par exemple, une molécule-moteur produise du travail avec un bon rendement. Il s’agit maintenant d’augmenter la puissance de calcul d’une molécule en minimisant son nombre d’atomes. Nous en sommes actuellement à seulement 30 atomes de carbone et deux atomes d’or pour une molécule porte logique booléenne [Soe, Manzano et al. 2011]. Il s’agit maintenant de jouer avec l’effet tunnel sur de grandes distances intramoléculaires pour qu’un fil moléculaire devienne un véritable canal quantique de transmission d’information. On s’approche enfin d’une question post-quantique : pourquoi le principe de superposition, agrémenté du postulat de Born, semble-t-il être le principe de fonctionnement d’une molécule-moteur et semble-t-il gouverner la puissance de calcul d’une molécule à calculer [Joachim 2006] ? Quelque part en nanoscience, peut-on craquer la mécanique quantique en poussant le développement des molécules-machines ?
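À titre de simple rappel du calcul booléen évoqué ci-dessus (esquisse de notre fait, sans rapport avec le protocole expérimental de [Soe, Manzano et al. 2011]), la table de vérité visée par une telle porte NOR moléculaire est la suivante :

# Porte NOR : la sortie vaut 1 uniquement lorsque les deux entrées valent 0.
def nor(a: int, b: int) -> int:
    return int(not (a or b))

for a in (0, 1):
    for b in (0, 1):
        print(f"NOR({a}, {b}) = {nor(a, b)}")
# Affiche : NOR(0, 0) = 1, NOR(0, 1) = 0, NOR(1, 0) = 0, NOR(1, 1) = 0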

Bibliographie

Aviram, Arieh & Ratner, Mark A. [1974], Molecular rectifiers, Chemical Physics Letters, 29(2), 277–283, doi : 10.1016/0009-2614(74)85031-1.

Barjavel, René [1968], La Nuit des temps, Paris : Presses de la Cité.

Binnig, Gerd & Rohrer, Heinrich [1986], Scanning tunneling microscopy, IBM Journal of Research and Development, 30(4), 355.

Echeverria, Jorge, Monturet, Serge et al. [2014], One-way rotation of a molecule-rotor driven by a shot noise, Nanoscale, 6, 2793–2799, doi : 10.1039/C3NR05814J.

Eigler, Donald M. & Schweizer, Erhard K. [1990], Positioning single atoms with a scanning tunnelling microscope, Nature, 344, 524–526, doi : 10.1038/344524a0.

Feynman, Richard P. [1984], Quantum-mechanical computers (A), Journal of the Optical Society of America B Optical Physics, 1, 464.

García-López, Víctor, Chen, Fang et al. [2017], Molecular machines open cell membranes, Nature, 548, 567–572, doi : 10.1038/nature23657.

Gimzewski, Jim K., Joachim, Christian et al. [1998], Rotation of a single molecule within a supramolecular bearing, Science, 281(5376), 531–533, doi : 10.1126/science.281.5376.531.

Gourdon, André [2008], On-surface covalent coupling in ultrahigh vacuum, Angewandte Chemie International Edition, 47(37), 6950–6953, doi : 10.1002/anie.200802229.

Joachim, Christian [2005], To be nano or not to be nano ?, Nature Materials, 4, 107–109, doi : 10.1038/nmat1319.

—— [2006], The driving power of the quantum superposition principle for molecule-machines, Journal of Physics : Condensed Matter, 18(33), 1935–1942, doi : 10.1088/0953-8984/18/33/S11.

Joachim, Christian & Gimzewski, Jim K. [1997], An electromechanical amplifier using a single molecule, Chemical Physics Letters, 265(3), 353–357, doi : 10.1016/S0009-2614(97)00014-6.

Joachim, Christian, Gimzewski, Jim K. et al. [2000], Electronics using hybrid-molecular and mono-molecular devices, Nature, 408, 541–548, doi : 10.1038/35046000.

Knoll, Max & Ruska, Ernst [1932], Das Elektronenmikroskop, Zeitschrift für Physik, 78(5), 318–339, doi : 10.1007/BF01342199.

Kohn, Walter [1959], Analytic properties of Bloch waves and Wannier functions, Physical Review, 115, 809–821, doi : 10.1103/PhysRev.115.809.

Kolmer, Marek, Olszowski, Piotr et al. [2017], Two-probe STM experiments at the atomic level, Journal of Physics : Condensed Matter, 29(44), 444004, doi : 10.1088/1361-648X/aa8a05.

Lafferentz, Leif, Ample, Francisco et al. [2009], Conductance of a single conjugated polymer as a continuous function of its length, Science, 323(5918), 1193–1197, doi : 10.1126/science.1168255.

Livoreil, Aude, Dietrich-Buchecker, Christiane O. et al. [1994], Electrochemically triggered swinging of a [2]-catenate, Journal of the American Chemical Society, 116(20), 9399–9400, doi : 10.1021/ja00099a095.

Nacci, Christophe, Ample, Francisco et al. [2015], Conductance of a single flexible molecular wire composed of alternating donor and acceptor units, Nature Communications, 6, 7397–7404, doi : 10.1038/ncomms8397.

Nielsen, Michael & Chuang, Isaac [2002], Quantum Computation and Quantum Information, Cambridge : Cambridge University Press.

Ohmann, Robin, Meyer, Jörg et al. [2015], Supramolecular rotor and translator at work : On-surface movement of single atoms, ACS Nano, 9(8), 8394–8400, doi : 10.1021/acsnano.5b03131.

Perera, Uduwanage Gayani E., Ample, Francisco et al. [2013], Controlled clockwise and anticlockwise rotational switching of a molecular motor, Nature Nanotechnology, 8, 46–51, doi : 10.1038/nnano.2012.218.

Soe, We-Hyo, Manzano, Carlos et al. [2011], Manipulating molecular quantum states with classical metal atom inputs : Demonstration of a single molecule NOR logic gate, ACS Nano, 5(2), 1436–1440, doi : 10.1021/nn1033899.

Interview de Christophe Vieu, directeur de recherche au LAAS, Toulouse, le 8 décembre 2017

Christophe Vieu LAAS-CNRS, Toulouse

Bernadette Bensaude-Vincent Université Paris 1 Panthéon-Sorbonne (France)

Résumé : Plongé au cœur des nanos, Christophe Vieu souligne la diversité des secteurs touchés par l’approche nano. À l’idée d’une convergence des secteurs scientifiques, il oppose l’image d’une espèce invasive. Il se sent de ce fait investi d’une responsabilité à l’égard de l’ensemble des technosciences.

Abstract: Actively engaged in research at the very heart of the nanosciences, Christophe Vieu emphasises the wide range of sectors touched by the nano approach. He deploys the image of an invasive species as a counterpoint to that of the convergence of scientific fields. The invasive nature of the domain means that researchers have a responsibility with respect to all the other contemporary technosciences.

Bernadette Bensaude-Vincent (BBV) : À quoi ressemble le champ des nanotechnologies dans le monde de la recherche aujourd’hui ? Christophe Vieu (CV) : Les nanotechnologies n’ont plus vraiment d’existence au niveau des appels d’offres ou des sources de financement. C’est devenu un outil de recherche, un outil très puissant qui opère partout. On peut dire que les nanotechnologies ont tenu leurs promesses en tant qu’enabling technologies. Elles ont un pouvoir capacitant considérable, elles ont une capacité à intervenir sur tous les fronts. On trouve du nano partout, dans tous les secteurs, au cœur de la transition énergétique, de la transition numérique, dans la médecine personnalisée, dans l’intelligence artificielle.


BBV : Est-ce à dire que la convergence NBIC s’est effectivement concrétisée ? CV : Il ne s’agit pas vraiment d’une convergence au sens où il n’y a pas de but commun qui orienterait les recherches dans une direction unique. C’est plutôt un mouvement de dispersion, de divergence. Les nanos rencontrent divers secteurs technologiques et démultiplient leurs capacités. La génomique, le séquençage de masse, c’est les nanos qui ont permis cela. L’industrie électronique, la miniaturisation (le fameux more than Moore1), c’est encore les nanos. Le design de médicaments vectorisés... tout se passe à l’échelle de quelques dizaines de nanomètres. C’est un outil de recherche qui se déploie sur des axes divers.

BBV : Comment cela se traduit-il au niveau de la recherche ? CV : On a de grosses structures qui se déploient sur plusieurs axes. Ainsi, le LAAS est une énorme unité de recherche qui rassemble environ 700 personnes. C’est organisé en huit départements scientifiques avec 26 équipes qui travaillent sur 4 axes de recherche (l’intelligence ambiante, le vivant, l’énergie et l’espace). Le LAAS est un vrai labo d’ingénierie avec des nano-roboticiens, de l’intelligence artificielle, du génie tissulaire, de la génomique... Au niveau individuel, le formidable potentiel des nanos a eu des répercussions sur mon propre itinéraire de recherche. Il m’a conduit à m’engager de plus en plus dans la réflexion éthique car les nanos démultiplient aussi nos responsabilités. Les nanos sont impliquées dans tant de secteurs qu’on ne sait pas où cela peut mener. Si ça tourne mal quelque part, si par exemple la génomique vire à l’eugénisme, il faudra s’en prendre aux nanos. Ce sont les nanos qui ont permis cela. Les nanos obligent donc à mener une réflexion sur l’ensemble des technosciences. C’est un engagement de chercheur.

BBV : Quelles formes prend cet engagement ? CV : C’est un engagement au sein du laboratoire et auprès du public, des jeunes en particulier. Quand je parle des nanos au public, dans les lycées par exemple, j’essaie de dégager cinq idées très simples :
– Les nanos c’est petit et plus c’est petit, plus on en met partout. À force d’augmenter la puissance des processeurs et du numérique, on « électronifie » notre environnement. Les objets communicants, l’internet des objets, les capteurs sont partout. De fait, les nanos existent et façonnent notre société. C’est comme une espèce invasive créée par nous.
– Quand c’est petit, ça se faufile partout y compris à travers les barrières biologiques, ce qu’on exploite dans les applications médicales et les médicaments. D’où les questions de toxicité, de réactions du corps.

1. Moore’s law ou la loi de Moore prédit que le nombre de transistors dans un circuit intégré double tous les deux ans, avec une augmentation de la puissance de calcul qui s’ensuit. Les nanotechnologies offrent, théoriquement, la possibilité d’augmenter la puissance de calcul davantage.

– Du fait de leur taille, les nanoparticules ou nano-objets acquièrent des propriétés nouvelles. Changer la taille, cela permet d’obtenir de nouvelles propriétés qu’on exploite dans les nanotubes de carbone notamment. Mais on ne maîtrise pas le cycle de vie de ces objets, leur devenir dans le corps et dans l’environnement. D’où une incertitude.
– Du fait du nombre supérieur d’atomes de surface par rapport aux atomes de cœur, les objets nanos sont cent fois plus réactifs. Ils catalysent des réactions qu’on utilise pour la dépollution, les électrodes de batteries... Dans le secteur de l’énergie, les promesses ont été tenues. On a même fait mieux que prévu, parce qu’on a un savoir-faire industriel important.
– Enfin dans le vivant, tout se passe à l’échelle nano. Les objets nanos nous permettent de dialoguer avec les cellules. On peut interfacer avec le vivant par les nanos. Toute ma recherche consiste à présenter des objets nanos à une cellule vivante qui interprète ces signaux et se met à interagir. Le cytosquelette étant très plastique, il peut être actionné par des objets nanos. La barrière vivant/artificiel est estompée. On peut avec du nano demander à une cellule souche de remonter le temps, faire de la médecine régénérative ou des implants.

BBV : Et comment se traduit l’engagement au niveau du laboratoire, dans l’équipe de recherche ? CV : On est impliqués dans tous les problèmes évoqués plus haut. On a une cellule de réflexion éthique collective. Sept sur 700 personnes sont engagées à fond, soit 1 %. C’est pourquoi il est bon d’avoir une masse critique de chercheurs pour pouvoir engager une réflexion collective. On réfléchit ensemble en évitant le syndrome de l’expert : il faut que le biologiste puisse penser l’intelligence artificielle, la robotique et réciproquement. Quand on soulève la question de la moralité des robots, il ne s’agit pas que le roboticien dise qu’on n’y connaît rien, qu’on n’a rien compris aux robots.

BBV : Quels sont les objets de cette réflexion collective ? CV : On réfléchit au sens des objets qu’on met dans le monde. Les machines dites intelligentes sont avant tout efficaces. C’est de l’efficacité très bête que l’on obtient à l’aide de capteurs et d’actuateurs. Il n’empêche que ces machines soulèvent des problèmes philosophiques au sens où elles transforment l’humain. Depuis les débuts de l’humanité, les machines sont en général un prolongement du corps humain. On a externalisé nos organes. Dans le futur, on va intégrer les machines dans l’organisme. C’est pourquoi la question de l’homme augmenté perdure. Les machines créent de nouveaux humains, une nouvelle espèce. De plus, contrairement aux machines traditionnelles qui répondaient à une fonction précise, nos machines actuelles ne servent à rien car elles servent un peu à tout. Elles sont polyvalentes, multifonctionnelles. Du coup, la relation entre structure, fonctions et usages est très difficile à appréhender.

Les chercheurs/ingénieurs ne peuvent prévoir l’usage qu’on fera de leurs machines. Ils en sont néanmoins responsables car ces objets n’auraient pas existé sans eux. Le graphène a le potentiel de transformer le monde. Ce sont les chercheurs qui l’ont fait venir à l’existence. Mon investissement dans la réflexion éthique sur nos recherches procède de ce sentiment de responsabilité. Même si on n’est pas responsables de l’usage final, on a un rôle d’alerte.

BBV : Auriez-vous fait ce cheminement vers l’éthique sans les nanos ? CV : Non, ce sont les nanos qui ont suscité ces questions sur les technosciences en général ainsi que des rencontres avec des équipes de philosophes. Ce qu’on appelle aujourd’hui « sciences de l’ingénierie » ne correspond pas du tout à la démarche de l’ingénierie traditionnelle. On ne peut suivre l’éthique de l’ingénieur quand on est embarqués dans une dynamique de recherche exploratoire, heuristique, où l’on regarde comment ça marche. On a mené une réflexion critique sur la notion d’acceptabilité des innovations en nanotechnologies. Globalement, on est prêts à tout accepter, on a un seuil très bas. Une innovation est jugée inacceptable seulement s’il y a des milliers de morts, comme dans une pandémie. On est très laxistes parce qu’on s’habitue. Les objets issus des nanos sont insidieux, ils nous séduisent. Mais il faut poser la question de l’inacceptable plutôt que de l’acceptable parce qu’on sait qu’on acceptera tout.

BBV : Ce sont donc les nanos qui ont amorcé ces réflexions éthiques chez les chercheurs ? CV : On part de questions naïves avec de jeunes docteurs, des ingénieurs, des chercheurs confirmés, mais on veut vraiment faire de la réflexion prospective au sens de l’ingénierie. Ça bouge, la réflexion se propage dans les milieux des sciences de l’ingénierie. Un mouvement d’étudiants ingénieurs engagés à Lyon semble se propager rapidement. Pour certains, l’engagement conduit à se mettre en marge du système pour suivre leurs propres valeurs. Ils ont du mal à comprendre qu’on puisse être engagés de l’intérieur du système. C’est délicat mais c’est aussi une voix à respecter et à favoriser. En tous cas, ce sont les nanotechnologies qui ont amorcé cette pratique de la recherche avec éthique intégrée. Et c’est la philosophie qui doit structurer la réflexion sur nos pratiques.

Varia

Axioms as Definitions: Revisiting Poincaré and Hilbert

Laura Fontanella Université Aix-Marseille, CNRS, Institut de Mathématiques de Marseille, Marseille (France)

Résumé : Un problème fondamental dans la réflexion sur les fondements des mathématiques consiste à déterminer ce qu’est un axiome. Cette question est spécialement importante en vue de l’étude de nouveaux axiomes pour la théorie des ensembles (tels que les axiomes de grands cardinaux) dont la légitimité est fortement controversée ; cet article s’insère dans le débat. En analysant les écrits de Poincaré et de Hilbert, nous observons que, malgré les différences profondes dans la pensée de ces deux logiciens, ils parvinrent à la même conception des axiomes de la géométrie qui ne seraient que des définitions déguisées. Nous généralisons cette conception des axiomes comme définitions de concepts à n’importe quel système axiomatique (en particulier aux axiomes de la théorie des ensembles).

Abstract: A fundamental problem in the discussion on the foundations of mathematics is to clarify what an axiom is. This is especially important in the light of the most recent advances in set theory, where new axioms have been proposed whose legitimacy is highly controversial (for example, large cardinal axioms); this paper is a contribution to this discussion. By analysing the views of Poincaré and Hilbert on axioms, we observe that, despite the deep differences in their philosophical thinking, the two logicians came to the same conception of the axioms of geometry as definitions in disguise. We revisit and generalise this view by arguing that any axiomatic system (set theory in particular) is the definition of some concepts.

1 Introduction

Before the development of non-Euclidean geometries in the 19th century, the axioms of Euclid’s Elements were regarded as absolute principles formalising

the basic laws of space. The introduction of hyperbolic and elliptic geometries challenged the dominant view on the apriority of geometry: if the principles of geometry were a priori, their violation would lead to a contradiction; instead, different geometries are possible. Despite the deep differences in their philosophical views, Poincaré and Hilbert came to the same conception of the axioms of geometry: they are definitions in disguise; rather than asserting undeniable truths, they fix the meaning of the basic terms of geometry (point, line, etc.) that would otherwise remain undefined. The main purpose of this paper is to extend Poincaré’s and Hilbert’s account of geometry to any axiomatic system. Nowadays, mathematicians no longer consider geometry as a body of indisputable principles. Pluralism in geometry is widely accepted; in fact, contemporary geometers study different geometries, and this has valuable applications to physics or even cryptography and coding theory. On the other hand, it is generally believed that the axioms of theories such as set theory or arithmetic have a different status, that they should have a non-conventional, possibly intrinsic, justification. In the words of Feferman:

When the working mathematician speaks of axioms, he or she usually means those for some particular part of mathematics such as groups, rings, vector spaces, topological spaces, Hilbert spaces, and so on. These kinds of axioms have nothing to do with self-evident propositions, nor are they arbitrary starting points. They are simply definitions of kinds of structures which have been recognized to recur in various mathematical situations. I take it that the value of these kinds of structural axioms for the organization of mathematical work is now indisputable. In contrast to the working mathematician’s structural axioms, when the logician speaks of axioms, he or she means, first of all, laws of valid reasoning that are supposed to apply to all parts of mathematics, and, secondly, axioms for such fundamental concepts as number, set and function that underlie all mathematical concepts; these are properly called foundational axioms. [Feferman, Friedman et al. 2000, 403]

We will question Feferman’s distinction between structural and foundational axioms by presenting a new approach to axioms, inspired by Poincaré’s and Hilbert’s account of geometry, that considers any axiomatic system as the definition of some concepts. We will focus in particular on set theory and argue that it can be regarded as an axiomatic definition of the concept of “set”. The paper is structured as follows. In section 2 we briefly explain the motivations for the present work. In sections 3 and 4, we discuss Poincaré’s and Hilbert’s conception of geometry, and we outline our broader conceptual approach to all axiomatic systems. In section 5 we propose a non-literal interpretation of the axioms of ZFC that justifies our view of set theory as a definition. Moreover, in this section we suggest that the very nature of the concept of “set” should not be sought in the idea of a collection of objects regarded as a totality in its own right, but rather in the possibility of performing specific operations on such collections. Finally, in section 6 we argue that our viewpoint on set theory ties in with set-theoretic pluralism.

2 Motivations for this work

Whilst axiomatic set theory was originally meant to provide a foundation for all mathematics, the development of forcing led to the discovery that many important mathematical problems cannot be solved within the classical theory of sets ZF. Several additional axioms have been introduced over the years that partially answer some of the questions that are independent of ZF: large cardinal axioms, forcing axioms, the axiom of determinacy, projective determinacy and others. This variety of strong principles, often incompatible with each other, led to a debate about what suitable criteria for new axioms for set theory should be. Several suggestions have been made which appeal to intrinsic or extrinsic criteria of various kinds [see for instance Maddy 1988a,b], but the discussion largely depends on some preliminary questions: what foundational role shall we expect from set theory? Is there one absolute set theory? This paper is meant as a contribution to this discussion. We will argue that pluralism in set theory is the natural consequence of our approach to axioms as definitions: each theory of sets is equally legitimate as a definition of the concept of “set” (see section 6). Whether or not an axiom system is a definition in disguise is not just a linguistic matter, but changes the perspective on the role of axioms in mathematics and the goal of set theory. By arguing that any axiomatic system is the definition of some concepts, we intend to oppose the very idea that the axioms of a theory express absolute mathematical truths. Thus, our view challenges the foundational role traditionally ascribed to set theory: we cannot expect set theory to legitimate mathematical knowledge, although it can still aim at a fundamental role, namely providing a conceptual basis for mathematics by determining a concept of “set” as general as possible, so as to embrace all suitable mathematical notions. More details about our view of the role of set theory for mathematics will be given in section 6. We shall first clarify something that may easily lead to a misunderstanding. Every definition contains a sign or expression which had no meaning before and whose meaning is given by the definition. In mathematics it is quite common to introduce axiomatic definitions of symbols that are not included in the language of the theory considered. More precisely, suppose we have a theory T written in a language L and we want to define a new symbol R (a singular term, a predicate or a function) that is not in the language L; we can give a definition of R through sentences in the expanded language L + {R}. These are called implicit definitions. Beth’s definability theorem establishes that, in first-order logic, every implicit definition is equivalent to an explicit definition, namely one that depends on a formula of the original language L of the theory. Nevertheless, axiomatic definitions of this kind (implicit or explicit) require a background theory. In what sense, then, can axiomatic systems that do not have a background theory, such as set theory, be regarded as definitions? We shall answer that, in that case, the axioms fix the very meaning of the non-logical symbols of the language of the theory (i.e., the signature of the theory), such as ∈ and = in the case of set theory. These symbols are defined simultaneously through the axioms, which establish their mutual relations with each other. Thus, as we will see, the axioms provide a system of relations between the terms so defined.
To simplify the terminology, we will just say that the axioms of set theory define the concept of “set”, or the axioms of arithmetic define the concept of “number”, where what we actually mean is that set theory defines the symbols ∈ and =, arithmetic defines 0, S, + and ×, and so on.
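As a toy illustration of this contrast (our own example, with a hypothetical predicate E, not taken from the paper), an “even number” predicate E can be defined over Peano Arithmetic either implicitly or explicitly; against a suitable background theory (PA, with induction extended to the enlarged language), the two routes coincide, as Beth’s theorem predicts:

% A hedged toy example: implicit vs. explicit definition of an "even" predicate E over PA.
\[
  \underbrace{\{\,E(0),\ \forall x\,(E(S(x)) \leftrightarrow \lnot E(x))\,\}}_{\text{implicit definition of } E}
  \qquad\text{vs.}\qquad
  \underbrace{\forall x\,\bigl(E(x) \leftrightarrow \exists y\,(x = y + y)\bigr)}_{\text{explicit definition of } E}
\]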

3 Poincaré’s account of geometry

Let us briefly discuss the evolution of geometry that led to the introduction of non-Euclidean geometries. Euclid’s Elements is the first axiomatic presentation of a branch of mathematics; in this work, geometry is developed through five “axioms” and five “postulates”. It is quite difficult to clarify the precise distinction between these two notions. For Aristotle an “axiom” is a self-evident proposition stating some general truth that is common to all sciences, while a “postulate” concerns only a specific science. It is not clear what meaning Euclid accorded to these two notions and whether or not he shared Aristotle’s distinction; in any case, we can grant that in the philosophical tradition of the ancient Greeks both “axioms” and “postulates” shared the character of being undeniable statements. Nevertheless, one of Euclid’s postulates, the fifth, was quite controversial. The fifth postulate, also called the “parallel postulate”, can be stated as follows: Given a straight line and a point outside the line, there is one and only one straight line passing through the point which is parallel to the given one. Unlike the other principles of Euclid’s axiomatisation, the legitimacy of the parallel axiom as a postulate was repeatedly questioned. While it was generally believed to be true, the common opinion seems to have been that it ought to be proved. There is evidence that Euclid himself tried to derive it from the other axioms and postulates before including it in his axiomatisation. Over the centuries, several attempts to find a demonstration of the parallel postulate (from the other axioms) were made, unsuccessfully. Then, in the 19th century, Lobachevskii, Bolyai, Gauss and Riemann considered various negations of the fifth postulate: the parallel postulate was replaced by statements asserting the existence of more than one straight line (hyperbolic geometry) or no straight line (elliptic geometry) passing through the point and parallel to the given one. The resulting geometries were later proven to be consistent by Beltrami in 1868 (assuming the consistency of Euclid’s geometry). For instance, the consistency of elliptic geometry can be proven by considering a sphere as a model, where the plane is identified with the surface of the sphere, the straight lines are the great circles, and points at each other’s antipodes are taken to be equal. Poincaré’s analysis of these results is quite unassailable [see Poincaré 1902]: the axioms of Euclid’s geometry do not establish experimental facts, because we do not have experience of ideal straight lines and points, nor do they express a priori knowledge, for otherwise it would not be possible to violate the fifth postulate without contradiction. What is, then, the nature of these axioms? Poincaré’s answer is “they are definitions in disguise”.

The axioms of geometry are therefore neither synthetic a priori intuitions nor experimental facts. They are conventions. Our choice among all possible conventions is guided by experimental facts; but it remains free and is only limited by the necessity of avoiding every contradiction [...]. In other words, the axioms of geometry (I do not speak of those of arithmetic) are only definitions in disguise. [Poincaré 1902, 75–76, translation mine]

Viewed as definitions, the axioms are neither true nor false.

What, then, are we to think of the question: Is Euclidean geometry true? It has no meaning. One geometry cannot be more true than another; it can only be more convenient [...] because it sufficiently agrees with the properties of natural solids, those bodies which we can compare and measure by means of our senses. [Poincaré 1902, 76, translation mine]

Pluralism in geometry is the natural consequence of this approach, which grants absolute freedom for the formulation of new axiomatisations, provided they do not lead to a contradiction. No geometrical system is absolute, although one can be more appropriate than others for modelling certain aspects of the physical universe. For instance, Euclidean geometry conforms to our daily experience of the distance, length and dimension of the objects around us, while elliptic geometry gives a better account of larger portions of the Earth: if we consider any three items in our apartment and measure the sum of the internal angles of the triangle formed by these items, the result of our measurement would be 180° (approximately), in accordance with the laws of Euclidean geometry; on the contrary, if we take one item in Paris, the second in Sydney and the third in Buenos Aires, the resulting value would be larger than 180°, as elliptic geometry predicts.
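A quick numerical check of this last claim (our own sketch, with rough city coordinates, treating the Earth as a perfect sphere):

import numpy as np

# Angle sum of the spherical triangle Paris-Sydney-Buenos Aires (approximate coordinates).
def unit_vector(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

def vertex_angle(a, b, c):
    # Interior angle at vertex a between the great-circle arcs a-b and a-c.
    n1, n2 = np.cross(a, b), np.cross(a, c)
    cosang = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

paris = unit_vector(48.9, 2.4)
sydney = unit_vector(-33.9, 151.2)
buenos_aires = unit_vector(-34.6, -58.4)
angles = [vertex_angle(paris, sydney, buenos_aires),
          vertex_angle(sydney, buenos_aires, paris),
          vertex_angle(buenos_aires, paris, sydney)]
print(sum(angles))   # well above 180 degrees for a triangle this large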

In this view, geometry is not concerned with the properties of the objects to which the geometrical system is applied, but rather with the set of relations that hold between the primitive terms. We shall then stress another important aspect of Poincaré’s conception of geometry: the meaning of the basic terms can only be fixed through their relations with each other, thus if we take a primitive term out of the axiomatic system it would lose all meaning.

If one wants to isolate a term and exclude its relations with other terms, nothing will remain. This term will not only become indefinable, it will become devoid of meaning. [Poincaré 1900, 78, translation mine]

As we will see in section 4, analogous considerations can be found in Hilbert’s writings: geometry defines a system of relations between the primitive terms, which are meaningless outside the axiomatic system. Poincaré’s analysis of the other branches of mathematics is surely different. In his view, arithmetic is actually synthetic a priori and its certainty is guaranteed by the intuition. Mathematical induction, he says, is a synthetic a priori principle which is “imposed upon us with such a force that we could not conceive of the contrary proposition” [Poincaré 1902, 48]. Nevertheless, we shall object that it is actually possible to consider an arithmetic where the induction principle fails. For instance, Robinson Arithmetic Q is a version of arithmetic without induction [see Robinson 1950], and one can easily exhibit a model of Q where induction fails.1 In principle, then, we can extend the same approach from geometry to arithmetic and regard its axioms not as absolute indisputable truths but as mere definitions of certain terms, namely “zero”, “successor”, “sum” and “product”. The very meaning of these terms changes with the axiomatisations; thus “successor” means something different in Peano Arithmetic and in Robinson Arithmetic. Concerning set theory, Poincaré was one of the most famous opponents of modern set theory for its use of impredicative concepts. We will discuss his criticisms in more detail in section 5 and we will appeal precisely to impredicative concepts to endorse our view that the axioms of set theory should be regarded as definitions.

1. It is enough to consider the natural numbers plus two additional elements a and b; we interpret s and + as the natural successor function and addition operation on the natural numbers, but we impose s(a) = a, s(b) = b, and, for every natural number n, a + n = a and b + n = b, and, for every element x in the domain, x + a = b and x + b = a. This model can easily be shown to satisfy the axioms of Q, and induction fails, since 0 + a = b whereas induction would imply 0 + x = x for every x.
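A small sanity check of footnote 1’s model (our own sketch; the two extra elements are represented by the strings "a" and "b", and multiplication is left out):

# Nonstandard model of Robinson Arithmetic Q: the natural numbers plus two extra elements a, b.
A, B = "a", "b"

def succ(x):
    # Successor: standard on the naturals, a fixed point on a and b.
    return x + 1 if isinstance(x, int) else x

def add(x, y):
    if isinstance(y, int):              # x + n, with n a natural number
        if x == A:
            return A
        if x == B:
            return B
        return x + y
    return B if y == A else A           # x + a = b and x + b = a, for every x

sample = [0, 1, 2, 3, A, B]
# Q's addition axioms hold on these samples: x + 0 = x and x + S(y) = S(x + y).
assert all(add(x, 0) == x for x in sample)
assert all(add(x, succ(y)) == succ(add(x, y)) for x in sample for y in sample)
# But the induction-provable law 0 + x = x fails at the nonstandard element a:
print(add(0, A))   # prints 'b', not 'a'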

4 The Frege-Hilbert controversy

Hilbert’s first reference to axioms as definitions appears in his Grundlagen der Geometrie [Hilbert 1899], where he says that his axiomatisation of geometry should be intended as the definition of the concepts of “point”, “line” and “plane”. Puzzled by this statement, Frege asks for an explanation [see Frege 1980]; the exchange that followed enlightens us about the two logicians’ general views on the role of axioms in mathematics. Frege objects that axioms should be assertions, while definitions do not assert anything, but lay down something. Hilbert replies:

In my opinion, a concept can be fixed logically only by its relations to other concepts. These relations formulated in certain statements, I call axioms, thus arriving at the view that axioms (perhaps together with propositions assigning names to concepts) are the definitions of the concepts. [Hilbert, letter to Frege 22.09.1900, translated in [Frege 1980, 51]]

Furthermore, the meaning of the terms so defined is tangled with the axioms chosen, and a different axiomatisation would change the meaning of the terms.

[...] to try to give a definition of a point in three lines is to my mind an impossibility, for only the whole structure of axioms yields a complete definition and hence every new axiom changes the concept. [Hilbert, letter to Frege 29.12.1899, translated in [Frege 1980, 40]]

We shall rephrase this thought. Imagine we were asked to provide a precise definition for every geometrical notion. Then, for instance, we would define the notion of “triangle” as a “polygon” with three “edges” and three “vertexes”. The notion of “polygon” is defined from the notion of “plane”, the notion of “edge” can be defined from the notion of “line”, and “vertex” is defined from “point”. At this point we are supposed to define “plane”, “line” and “point”, but if we find a concept χ (or more concepts) that defines these notions, we will have to find a definition for χ and so on, leading to an infinite regress. Thus, as Hilbert says, we can define a concept only if we put it in relation to other concepts. We can stop this process, if we define the notions of “point”, “line” and “plane” axiomatically, namely by describing their relations to one another through certain axioms.

This is apparently where the cardinal point of the misunderstanding lies. I do not want to assume anything as known in advance; I regard my explanation in sec. 1 as the definitions of the concepts point, line, plane—if one adds again all the axioms of groups I to V as characteristic marks. If one is looking for other definitions of a “point”, e.g., through paraphrase in terms of extensionless, etc., then I must indeed oppose such attempts in the most decisive way; one is looking for something one can never find, because there is nothing there and everything gets lost and becomes vague and

tangled and degenerates into a game of hide-and-seek. [Hilbert, letter to Frege 29.12.1899, translated in [Frege 1980, 39]]

Analogous considerations can be made for the axioms of set theory. We can define every mathematical notion, including the notions of “function” and “number”, from the notions of “set” and “membership”, which can then be “defined” through a series of axioms (the case of set theory will be discussed more extensively in section 5). In modern terms we say that the concept of “set” is a primitive notion, as it is not defined in terms of other concepts. Nevertheless, the terminology “primitive notion” refers to something which is undefined as it is in no need of a definition, because its meaning is immediately understood; on the contrary, the viewpoint that we want to defend here is that primitive notions are actually “defined” through axioms, in the sense that the axiomatic system fixes their meaning in a formal and rigorous way. Frege’s conception of axioms represents the dominant view at the time of this correspondence with Hilbert:

I call axioms propositions that are true but are not proven because our knowledge of them falls from a source very different from the logical source, a source which might be called spatial intuition. From the truth of the axioms it follows that they do not contradict one another. There is therefore no need for a further proof. [Frege, letter to Hilbert 27.12.1899, translated in [Frege 1980, 37]]

This evokes the delicate problem of mathematical truth. As we have seen, after claiming that geometry is a definition, Poincaré concludes that, as such, geometry is just a convention and its axioms are therefore neither true nor false. Hilbert, instead, does not intend to give up the idea that geometry is a body of truths. So in his reply to Frege, he does not deny the truth of axioms, but explains that their correctness has to be demonstrated by showing that they do not contradict each other.

I was very much interested in your sentence: “From the truth of the axioms it follows that they do not contradict one another”, because for as long as I have been thinking, writing, lecturing about these things, I have been saying the exact reverse: if the arbitrarily given axioms do not contradict one another with all their consequences, then they are true and the things defined by the axioms exist. This is for me the criterion of truth and existence. [Hilbert, letter to Frege 29.12.1899, translated in [Frege 1980, 42]]

We should point out (as C. Franks brilliantly observes in [Franks 2009]) that, despite this insistence on the necessity to prove the consistency of the axioms (not just for geometry but in general), Hilbert was not skeptical about the correctness of mathematics; he simply had higher standards for what counts as a proof of it. While he was convinced that mathematical experience speaks for the consistency of axioms, his goal was to show that mathematics could stand on its own and prove its own consistency without appealing to extra-mathematical justifications. Now, while the consistency of geometry can be reduced to the consistency of analysis, it became clear very quickly to Hilbert himself that a direct consistency proof for analysis (i.e., one not relative to another theory) would face significant difficulties. Hilbert hoped to ultimately reduce all mathematics to a unique axiomatic system and then prove its direct consistency. Today we know from Gödel’s incompleteness results that Hilbert’s program cannot be accomplished. For our part, by claiming that any axiomatic system is a definition, we oppose the very idea that axioms express absolute truths. While any axiom takes the form of a sentence, it does not assert anything; it is meaningful only insofar as it contributes with the other axioms of the system to the definition of a concept. Thus, no axiom system expresses profound absolute mathematical truths; it only defines certain concepts which may be more or less suitable for modelling different things. What the source of mathematical knowledge is, or whether mathematics is simply not a body of truths, is not the object of the present paper. Hilbert further illustrates his view of the nature of the definition provided by the axioms in yet another passage of this correspondence. Frege objects that the basic concepts that are claimed to be defined by his axioms (“point”, “line”, “plane”) are not unequivocally fixed. For instance, a “point” could be a pair of numbers, a triple, a tuple and so on. Hilbert replies:

You say that my concepts, e.g., “point”, “between” are not unequivocally fixed; e.g., “between” is understood differently on p. 20, and a point is there a pair of numbers. But it is surely obvious that every theory is only a scaffolding or schema of concepts together with their necessary relations to one another, and that the basic elements can be thought in any way one likes. If in speaking of my points I think of some system of things, e.g., the system: love, law, chimney-sweep... and then assume all my axioms as relations between these things, then my propositions, e.g., Pythagoras’ theorem, are also valid for these things. [...] At the same time, the further a theory has been developed and the more finely articulated its structure, the more obvious the kind of application it has in the world of appearances and it takes a very large amount of ill will to want to apply the more subtle propositions of plane geometry or of Maxwell’s theory of electricity to other appearances than the ones for which they were meant... [Hilbert, letter to Frege 29.12.1899, translated in [Frege 1980, 42]]

Thus the axioms do not point out concrete systems of things; altogether, they define a schema of concepts. To explain this view, we may consider, as an analogy, the distinction between an individual and its shape: distinct individuals can have the same shape, since the axioms define the “shape”, so to speak, not the individuals. Both Poincaré and Hilbert refer to the axioms of geometry as fixing the relations between the primitive terms. In this attention to the structural aspects of the axioms, we can perhaps see in the two logicians a hint of modern structuralism. In fact, Shapiro refers to Hilbert’s kind of axiomatic definitions as “structural definitions” [Shapiro 1999]. Nevertheless, one of the main concerns of Ante Rem structuralism is categoricity, which does not seem to have been one of Hilbert’s worries as far as geometry is concerned2: in fact, in the above passage, Hilbert seems to suggest that no system of things is the unique possible interpretation of the axioms. Thus, in this view, the axioms of geometry would most certainly not define an Ante Rem structure. In the case of set theory, Shapiro’s structuralist approach to axioms as definitions would be especially problematic, because neither ZFC nor second-order ZFC is categorical3 as Ante Rem Structuralism requires. More importantly, Shapiro replaces the requirement of consistency of a theory with that of “satisfiability”, namely the existence of a model4, but, as Shapiro himself points out, satisfiability requires a background set theory where such a model can be found; hence it cannot work as a reasonable criterion for set theory itself. In our view, the axioms of a theory do not entail any ontological commitment to the schema of concepts so defined; the axioms only fix the conditions for a certain system of things to match the schema. We shall discuss this more precisely in the next section.

2. On the other hand, categoricity was for Hilbert an important issue in connection with the reals.
3. Shapiro, however, remarks that second-order ZFC is quasi-categorical, namely if M and M′ are two standard models of second-order ZFC, then either M is isomorphic to M′ or else one of them is isomorphic to an initial segment of the other. Quasi-categoricity is enough to fix certain references, for example the empty set, ω, and others are unique up to isomorphism.
4. This is motivated by the observation that in second-order logic a theory can be consistent and yet not have a model. In fact, there is no completeness theorem, so if T is the conjunction of the second-order axioms of Peano arithmetic and G is a standard Gödel sentence that states the consistency of T, then by the incompleteness theorem T ∧ ¬G is consistent, but it has no models, because every model of T is isomorphic to the natural numbers, hence G is true in all such models. Therefore, despite its consistency, T ∧ ¬G fails at describing a possible structure.

5 On the meaning of existential quantifiers



We mentioned Poincaré’s skepticism about set theory due to the use of impredicative concepts. An example is given by the Powerset axiom, which is formally stated as ∀x∃y∀z(z ∈ y ⇐⇒ z ⊆ x). This sentence, if interpreted as “given a set x, the set of all subsets of x exists”, involves a circularity, because for this statement to make sense, one needs to assume the very possibility of a totality of all subsets of x, precisely the Powerset of x. For this reason, if we think of this axiom as legitimating the totality of all subsets of x, we run into a circle. On the other hand, if one thinks of that statement as a way of “singling out” something which is already available as a legitimate mathematical notion, then the axiom is not problematic. In other words, for the axiom to make sense, we need to assume that in the domain of discourse over which the variables may range we already have something that plays the role of the Powerset of x; the axiom does not literally establish its existence or legitimate it as a valid mathematical notion, but only labels it as a “set”. In this sense, the most appropriate interpretation of the Powerset axiom would be “given a set x, the Powerset of x is a set”. In other words, the existential quantifiers in set-theoretic sentences act as filters of sets, namely as a way of selecting sets from other collections that should not be regarded as sets. This interpretation is supported by Zermelo’s terminology in his original axiomatisation of set theory:

Set theory is concerned with a “domain” of individuals, which we shall call simply “objects” and among which are the “sets”. [Zermelo 1908, 262] as translated in [Van Heijenoort 1967, 203]

Since we do not want to make any ontological commitment to the mathematical notions that are involved in set theory, we will avoid the locution “object” and we will talk of a “domain of legitimate mathematical notions” among which are the “sets”. We shall not discuss the nature of the “legitimate mathematical notions” (whether they exist as part of a platonic reality, immutable and independent from human thinking, or they are just useful fictions), as this is irrelevant to the main thesis of the present work, that any axiom system is a definition in disguise. Nor will we propose suitable criteria for establishing the legitimacy of a given mathematical notion. We will simply assume that the legitimate mathematical notions are previously available in the domain of individuals where “sets” are selected. Analogous considerations can be made for arithmetic, where the existential quantifiers would act as “filters of numbers”. Thus, in general, the existential quantifiers in a mathematical sentence filter out the legitimate notions that fall under the concept defined through the relevant axiom system. Based on these considerations, we propose the following non-literal interpretation of the axioms of ZFC.

– (Extensionality) Two sets are equal if they contain the same sets as elements.
– (Pairing) Given two sets a and b, the pair {a, b} (the collection containing exactly a and b as elements) is a set;
– (Separation) Given a formula ϕ(x, ~p) with set parameters ~p, and given a set a, the collection of all x in a that satisfy ϕ(x, ~p) is a set;

– (Union) Given a set a, the union of a (the collection of all sets that belong to some element of a) is a set;
– (Power Set) Given a set a, the collection of all subsets of a is a set;
– (Replacement) Given a function f(x) (defined with set parameters) and a set a, the collection of all f(x) with x ∈ a is a set;
– (Foundation) A collection of sets that does not have an ∈-minimal element is not a set;
– (Infinity) Consider all the collections of sets that contain the empty set and are closed under the operation x ↦ x ∪ {x}; at least one of such collections is a set;
– (Choice) For every family of nonempty sets which is itself a set, the image of the choice function is a set.
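Schematically (our own notation, a hedged rendering rather than a formula from the paper), this reading replaces the bare existential quantifier by a set-hood predicate Set(·) over a background domain of available collections; for the Power Set axiom, for instance:

% A hedged schematic rendering of the non-literal reading, with a set-hood predicate Set(.).
\[
  \forall x\,\bigl(\mathrm{Set}(x) \rightarrow \mathrm{Set}(\{\,z : z \subseteq x\,\})\bigr).
\]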

Under this interpretation, the axioms of ZFC are not meant as instructions for constructing mathematical objects, but rather as an axiomatic definition of the word “set”. The basic set-theoretic operations, such as the union or the power set of a set, are not legitimated by the axioms; instead, the possibility of those operations is assumed in advance. The axioms state that the resulting collections can be considered to be “sets”. Formal existence is really a matter of what the axioms, taken as a whole, determine to be a “set”. In fact, when we apply the existential quantifier to a certain collection, we make the collection available for the other axioms. In this sense, the whole theory ZF (or ZFC) defines the word “set”. Now, we argue that the very nature of the concept of set should not be sought in the idea of a collection of objects regarded as a totality in its own right, but rather in the possibility of performing specific operations on such collections. To support this claim, let us go back to the very origin of the concept of set, which historians of mathematics date back to Cantor’s “derived point-set”.

It is a well determined relation between any point in the line and a given set P to be either a limit point of it or no such point, and therefore with the point-set P the set of its limit points is conceptually co-determined; this I will denote P′ and call the first derived point-set of P. [Cantor, as quoted in [Ferreirós 2007, 143]]

At this point, Cantor applied the operation of derivation to the derived point-set P′, obtaining the second derived point-set P″, and reiterated the process. What is crucial here is that, before Cantor, there was already talk of sets and collections of points. Therefore, Cantor’s original contribution should not be sought in the concept of collections intended as totalities in their own right, but rather in the very idea that such collections were available for certain mathematical operations such as the derivation. In this spirit, we consider the concept of “set” to be related to the possibility of applying specific operations to a given collection of objects. To further illustrate this view, we should consider for instance the Axiom of Choice, AC. The fact that AC was implicitly used in many mathematical proofs, even before Zermelo’s explicit formulation, suggests that it was generally accepted as a natural principle. Thus the choice function can be regarded as a perfectly legitimate operation, and the debate should not concern the legitimacy of this function, but rather the idea that the collection of objects derived from such a collection (namely the image of the choice function) can be taken to be a “set”, that is, whether it can be made available for the set-theoretic operations definable from the other axioms. We shall, then, reconsider our approach to the thorny problem of the choice of new axioms for set theory. When discussing the axiom of choice or large cardinal axioms, we wonder whether or not a choice function exists, inaccessible cardinals exist, and so on. This leaves us under the impression that the foundational goal of set theory is to detect which mathematical entities do or do not exist, are or are not legitimate mathematical notions. But again it would be naïve to think that set theory can dictate the terms for an ontology of mathematics. In our perspective, the discussion over large cardinal axioms should not be phrased in terms of existence or non-existence of large cardinals, but rather as the problem of whether a certain “large” collection of ordinals can be made available for the operations defined by the axioms of ZFC (or ZF) without contradiction. In fact, in second-order logic one can define being “(strongly) inaccessible” as a property of classes; then, for instance, the class of ordinals is inaccessible in this sense (provided this class is accepted as a legitimate mathematical notion). Thus the legitimacy of large cardinal axioms concerns the problem of whether or not any inaccessible collection can be taken to be a set, namely whether one can apply the other set-theoretic operations to such a collection without contradiction.
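For concreteness, a standard textbook instance of Cantor’s derivation operation mentioned above (our own illustration, not an example from the paper): for the point-set P = {1/n : n = 1, 2, 3, ...} in the real line,

% Worked example of iterated derivation.
\[
  P' = \{0\},
  \qquad
  P'' = (P')' = \emptyset,
\]

since 0 is the only limit point of P and a one-point set has no limit points.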

6 A multitude of concepts of set

We have outlined our view of the axioms of set theory as definitions in disguise. In this perspective, a single axiom is not an assertion inherently true or false, but the whole system of axioms defines the concept of “set”. It follows that by adding or removing one or more axioms, we change the concept defined. So, in particular, the concept of set defined by the theory ZF is different from the one defined by the theory ZFC, or ZF+V=L, or ZF + ∃κ(κ is measurable). Now, a definitional viewpoint on axioms does not necessarily lead us to embrace a wild formalist view of mathematics, considered to be a meaningless game where random theories are investigated together with their logical consequences. We cannot rule out, for instance, the possibility of an inter-subjective or innate concept of set that would make some set theories “truer” than others to the extent that they conform to the intended concept. Nevertheless, the panorama of theories defended nowadays leaves little hope for a universal agreement on a unique concept of set. Conflicting intuitions are behind the most promising enrichments of ZF: on the one hand, we have the idea that sets must be obtained by a cumulative process, namely at each stage we throw in “the basket of sets” only those collections that are obtained from the previous stages with operations that are definable in a somehow “canonical” way (this is the case, for instance, of V=L or V=Ultimate L); on the other hand, we have more “liberal” axioms, such as Forcing Axioms, where roughly anything that can be forced by some “nice” forcing notions is in V, in other words, it is a “set”. This pluralism of concepts of sets brings us to support Feferman’s thesis that the continuum hypothesis is an inherently vague question [see Feferman 2011–2012], although our arguments are somewhat different. The very meaning of the continuum changes over the theories; indeed, we may even agree on what a collection of natural numbers is,5 but which of those collections are “sets” depends on the concept of “set” considered. Thus, for instance, in the framework of forcing axioms, all the collections of natural numbers that can be forced by nice forcing notions are “sets”, thus the continuum is quite large and, not surprisingly, CH fails (the strongest forcing axioms imply that the continuum is ℵ2). On the contrary, the concept of “set” defined by the theory ZF+V=L is more restrictive, thus fewer collections of natural numbers are sets in this theory, and in fact CH holds. Now, if set theory is not a body of absolute truths, but a mere definition of some concept, it is natural to wonder what its role for mathematics rests on. The answer may be sought in the richness or abstractness of the concept defined, as all the basic mathematical notions, such as groups, vector spaces, even numbers, can be regarded as sets and can be made available for the set-theoretic operations definable from the axioms of the theory ZF. Thus the goal of set theory does not consist in justifying the existence of mathematical notions or the truth of mathematical propositions; the aim of a theory of sets should be to define a notion of “set” as rich as possible, so as to embrace every useful mathematical notion. On the basis of the “richness” of the underlying concepts of sets, we may come to prefer one theory over the others. This line of thought brings us to evoke a fact that is often considered to be a natural motivation for large cardinal axioms. As pointed out by Steel,

Now, if set theory is not a body of absolute truths, but a mere definition of some concept, it is natural to wonder what its role for mathematics rests on. The answer may be sought in the richness or abstractness of the concept defined, as all the basic mathematical notions such as groups, vector spaces, even numbers can be regarded as sets and can be made available for the set-theoretic operations definable from the axioms of the theory ZF. Thus the goal of set theory does not consist in justifying the existence of mathematical notions or the truth of mathematical propositions; the aim of a theory of sets should be to define a notion of “set” rich enough to embrace every useful mathematical notion. On the basis of the “richness” of the underlying concepts of set, we may come to prefer one theory over the others. This line of thought brings us to evoke a fact that is often considered to be a natural motivation for large cardinal axioms. As pointed out by Steel,

The language of set theory as used by the believer in V=L can certainly be translated into the language of set theory as used by the believer in measurable cardinals, via the translation ϕ ↦ ϕ^L. There is no translation in the other direction. [Feferman, Friedman et al. 2000, 423]

5. The continuum is the size of R, or equivalently the size of the powerset of N.


In other words, the concept of set carried by V=L can be expressed in the theory of measurable cardinals, while the converse seems to be prima facie impossible. Thus, the notion of “set” underlying the theory of large cardinals is more expressive than the one underlying V=L. Nevertheless, Hamkins presented a serious challenge to this argument. He showed that:

Even if we have very strong large cardinal axioms in our current set-theoretic universe V, there is a much larger universe V+ in which the former universe V is a countable transitive set and the axiom of constructibility holds. [Hamkins 2014, 29]

Thus, even the axiom of constructibility is rich enough to allow us to talk about the concept of set underlying large cardinal axioms within a model of V=L. It follows that the natural outcome of our definitional perspective is pluralism, which in contemporary set theory is represented by the “multiverse conception” (of which Hamkins is one of the main supporters [Hamkins 2012]). This can be described as the view that there are many distinct and equally legitimate concepts of set, as opposed to the “one universe view”, which asserts that there is only one absolute set concept with a corresponding absolute set-theoretic universe where every set-theoretic question has a definite answer. We should, however, add an important note: some set theorists endorse the multiverse view as an extreme form of Platonism according to which not just one, but many universes exist as an independent reality; our view, on the contrary, does not entail any ontological commitment to the concept defined through the axioms.

7 Conclusion

In conclusion, we have revisited Poincaré’s and Hilbert’s view of geometry as a definition in disguise and we have extended this approach to all axiomatic systems. We have then proposed an interpretation of the axioms of ZFC where the existential quantifiers are intended as filters of sets, namely as ways of singling out the sets from among collections that do not deserve the title of “set”. This naturally leads us to regard set theory as an axiomatic definition of the concept of set. Furthermore, we have argued that the very nature of this concept should not be sought in a collection of objects regarded as a totality in its own right, but rather in the idea that certain collections are available for specific operations definable from the other axioms. We have observed that the concept of set so defined changes if one adds or removes one or more axioms from the theory. This leads to a pluralism of concepts of set varying with the theories. Finally, we have claimed that despite pluralism, set theory can still play a fundamental role for mathematics, a role to be sought in the “richness” of the concept of set underlying the theory, which is meant to embrace all suitable mathematical notions.

Acknowledgements

I am indebted to the anonymous reviewers for their insightful comments and suggestions which greatly helped improve the quality of this paper.

Bibliography

Feferman, Solomon [2011–2012], Is the continuum hypothesis a definite mathematical problem?, http://logic.harvard.edu/EFI_Feferman_IsCHdefinite.pdf, for the Exploring the Frontiers of Incompleteness (EFI) Project, Harvard.

Feferman, Solomon, Friedman, Harvey M., et al. [2000], Does mathematics need new axioms?, Bulletin of Symbolic Logic, 6(4), 401–446, doi: 10.2307/420965.

Ferreirós, José [2007], Labyrinth of Thought: A History of Set Theory and its Role in Modern Mathematics, Basel: Birkhäuser, 2nd edn., edited by E. Hiebert and E. Knobloch.

Franks, Curtis [2009], The Autonomy of Mathematical Knowledge: Hilbert’s Program Revisited, Cambridge University Press.

Frege, Gottlob [1980], Philosophical and Mathematical Correspondence, Oxford: Blackwell, edited by G. Gabriel et al.

Hamkins, Joel David [2012], The set-theoretic multiverse, Review of Symbolic Logic, 5(3), 416–449, doi: 10.1017/S1755020311000359.

—— [2014], A multiverse perspective on the axiom of constructibility, in: Infinity and Truth, edited by C.-T. Chong, Q. Feng, T. A. Slaman, & W. H. Woodin, Hackensack: World Scientific, 25–45, doi: 10.1142/9789814571043_0002.

Hilbert, David [1899], Grundlagen der Geometrie, Leipzig: Teubner, English translation in Foundations of Geometry, L. Unger (ed.), La Salle: Open Court Press, 1971.

Maddy, Penelope [1988a], Believing the axioms. I, Journal of Symbolic Logic, 53(2), 481–511, doi: 10.1017/S0022481200028425.

—— [1988b], Believing the axioms. II, Journal of Symbolic Logic, 53(3), 736– 764, doi: 10.2307/2274569.

Poincaré, Henri [1900], Sur les principes de la géométrie: réponse à M. Russell, Revue de Métaphysique et de Morale, 8(1), 73–86.

—— [1902], La Science et l’Hypothèse, Paris: Flammarion, 1968, English translation in Science and Hypothesis: The Complete Text, M. Frappier, A. Smith, and D. J. Stump (trans.), London; New York: Bloomsbury, 2017.

Robinson, Raphael M. [1950], An essentially undecidable axiom system, in: Proceedings of the International Congress of Mathematics, 729–730.

Shapiro, Stewart [1999], Implicit definitions and abstraction, for Abstraction Day 99, University of St Andrews.

Van Heijenoort, Jean [1967], From Frege to Gödel: A source book in mathematical logic, Cambridge, Mass.: Harvard University Press.

Zermelo, Ernst [1908], Untersuchungen über die Grundlagen der Mengenlehre, I, Mathematische Annalen, 65, 261–281, English translation in [Van Heijenoort 1967, 201–215].

Nietzsche for Physicists

Juliano C. S. Neves Universidade Estadual de Campinas (UNICAMP), Instituto de Matemática, Estatística e Computação Científica, Campinas (Brazil)

Résumé : L’un des philosophes les plus importants de l’histoire, Friedrich Nietzsche, est presque ignoré par les physiciens. L’auteur qui a déclaré la mort de Dieu au xixe siècle était enthousiasmé par la science, principalement durant la deuxième partie de son œuvre. À l’aide de la notion physique de force, Nietzsche a créé son concept de volonté de puissance. En pensant à la conservation de l’énergie, le philosophe allemand a eu une certaine inspiration pour créer son concept de l’éternel retour. Dans cet article, on souligne certaines influences de la physique sur Nietzsche et on discute de l’actualité de sa position épistémologique—le perspectivisme. À partir du concept de volonté de puissance, je propose que le perspectivisme conduise à l’interprétation où la physique et la science en général sont considérées comme un jeu.

Abstract: One of the most important philosophers in history, the German Friedrich Nietzsche, is almost ignored by physicists. This author, who declared the death of God in the 19th century, was a science enthusiast, especially in the second period of his work. With the aid of the physical concept of force, Nietzsche created his concept of will to power. Thinking about energy conservation, the German philosopher found some inspiration for creating his concept of eternal recurrence. In this article, some influences of physics on Nietzsche are pointed out, and the topicality of his epistemological position—perspectivism—is discussed. Starting from the concept of will to power, I propose that perspectivism leads to an interpretation in which physics, and science in general, are viewed as a game.

Philosophia Scientiæ, 23(1), 2019, 185–201.

1 Introduction: an obscure philosopher?

The man who said “God is dead” [GS § 108]1 is a popular philosopher, well-regarded worldwide. Nietzsche is a strong reference in philosophy, psychology, sociology and the arts. But did Nietzsche have any influence on the natural sciences, especially physics? Among physicists and scientists in general, the thinker who created a philosophy that argues against Platonism is known as an obscure or irrationalist philosopher. However, contrary to common ideas, although Nietzsche was critical of absolute rationalism, he was not an irrationalist. He criticized the hubris of reason (Socratic rationalism),2 the belief that mankind could be guided by reason alone. That is, the German philosopher was a strong critic of the Enlightenment3 [Aufklärung in German]; to him, the idea of salvation and redemption by reason was a mistaken one. Nietzsche denounced the hubris of reason,4 but he did not deny the use of reason itself. As we shall see, this is clear from Nietzsche’s education [Bildung]. Among his references, there are several natural philosophers and/or scientists. Nietzsche read Charles Darwin (or at least Darwinian ideas), Hermann von Helmholtz, Roger Boscovich, and other authors. He tried to keep up to date with the scientific debates of the 19th century. Therefore, this philosopher, who is also considered a poet (Thus Spoke Zarathustra is poetry as well), never denied the importance of science. Of course, his view of science was different from common sense, and Nietzsche thought about science from another point of view, by using his perspectivism.5

In Nietzsche scholarship, the author’s works are conventionally divided into three periods. In the first, there is an affinity with Romanticism, Schopenhauer, and the German composer Richard Wagner.

In the second period, Nietzsche breaks off his friendship with Wagner and moves away from Romanticism and Schopenhauer’s influence. The third is the period in which Nietzschean philosophy acquires its “full identity” and originality. Science’s influence on Nietzsche is present in all these periods; however, from the second period onward, this influence is more evident. In a book from this period, Human, all too Human, Nietzsche states: “Optimism, for the purpose of restoration” [HH II, Preface, 5]. That is, Nietzsche identifies science with optimism (an idea originally proposed in his very first book, The Birth of Tragedy, where he criticized Socratism) and emphasizes the beginning of a process of cure. The philosopher recovered his health with the assistance of science; he blamed his illness on Schopenhauer’s pessimism and on Wagner. In On the Genealogy of Morality, Nietzsche defined the purpose of science in our time, modernity:

All sciences must, from now on, prepare the way for the future work of the philosopher. Thus, this work is understood to mean that the philosopher has to solve the problem of values and that he has to decide on the rank order of values. [GM I § 17]

Therefore, as we can observe, the importance of science in Nietzschean philosophy transcends the scientific realm. We have already seen that the multifaceted Roger Boscovich is among Nietzsche’s influences. In the next section, we shall discover how important the concept of force from physics (due to Boscovich) was in developing Nietzsche’s concept of will to power [Wille zur Macht]. From that concept, Nietzsche built his cosmological view, the eternal recurrence of the same, as we shall see in section 3. His epistemological position, perspectivism, is presented in section 4, with an application to two problems in modern physics: wave-particle duality and the gravitational phenomenon. In section 5, I use the concepts of will to power and perspectivism to interpret physics—and science in general—as a game.

1. Nietzsche’s works are indicated by initials, with the corresponding sections or aphorisms established by the critical edition of the complete works edited by Colli & Montinari [Nietzsche 1978]. The Birth of Tragedy [Nietzsche 2007a] is known as BT, Human, all too Human [Nietzsche 2005a] is HH, Gay Science [Nietzsche 2001] is GS, Beyond Good and Evil [Nietzsche 2002] is BGE, Ecce Homo [Nietzsche 2005b] is EH, Twilight of the Idols [Nietzsche 2005b] is TI, On the Genealogy of Morality [Nietzsche 2007b] is GM, On Truth and Lying in a Non-Moral Sense [Nietzsche 2007a] is TL, and the posthumous fragments (or notebooks) are PF, indicated by numbers and years.
2. According to Nietzsche, Socrates is “the archetype of the theoretical optimist”. He had “the imperturbable belief that thought, as it follows the thread of causality, reaches down into the deepest abysses of being, and is capable not simply of understanding existence, but even of correcting it” [BT § 15].
3. The philosopher suggests a new Enlightenment in several texts. See, for example, fragments 25 [296], 26 [298], 27 [79] and 27 [80] from 1884.
4. In On the Genealogy of Morality, one reads: “Hubris today characterizes our whole attitude towards nature, our rape of nature with the help of machines and the completely unscrupulous inventiveness of technicians and engineers” [GM III § 9].
5. I will discuss this cardinal concept of Nietzschean philosophy in Section 4.

2 Physics in Nietzsche’s main concepts

The Croatian thinker Roger Boscovich (physicist, mathematician, philosopher, etc.) was a decisive reference for Nietzsche’s philosophy. Nietzsche’s reading of Boscovich’s concept of force was an essential ingredient in the construction of his famous concept of will to power. In the 18th century, Boscovich studied body collisions and, from his research, concluded that matter is a manifestation of forces. According to the physicist and historian of physics Max Jammer [Jammer 1999, 178], for Boscovich “impenetrability and extension [...] are merely spatial expressions of forces, ‘force’ is consequently more fundamental than ‘matter’ [...]”. Nietzsche endorsed that idea and wrote in Beyond Good and Evil:

Boscovich taught us to renounce belief in the last bit of earth that did ‘stand still’, the belief in ‘matter’, in ‘material’ [...]. [BGE § 12]

Following Boscovich, Nietzsche emphasized the concept of force [Kraft] to the detriment of matter [Materie]. The material world is a manifestation of forces, which, in the Nietzschean case, are translated into wills to power, as we shall see. For Nietzsche, the physical concept of force was important even though, taken by itself, it was an empty word. In a posthumous fragment, with a touch of irony, this is clear:

The triumphant concept of “force”, with which our physicists excluded God from the world, needs supplementing. It must be ascribed to an inner world which I call “will to power” [...]. [PF 36 [31] of 1885]6

In another fragment, the idea is stressed: “a force we cannot imagine (like the allegedly purely mechanical forces of attraction and repulsion) is an empty phrase and must be refused the rights of citizenship in science” [PF 2 [88] of 1885]. The will to power, according to Nietzsche’s thought, completes the concept of force. A will to power is a quantum of power. It is “characterized by the effect it exerts and the effect it resists [...]. The quantum of power is essentially a will to violate and to defend oneself against being violated. Not self-preservation” [PF 14 [79] of 1888], said the philosopher. In this sense, becoming is considered a result of the intention to increase power, not a result of intentions of “self-preservation”.7 Above all, Nietzsche wrote, “everything that happens out of intentions can be reduced to the intention of increasing power” [PF 2 [88] of 1885]. Therefore, will to power means that everything, whether organic or inorganic, “wants” to increase its power. Such a quantum of power is neither a metaphysical concept nor a substance; it cannot be confused with a being. “The will to power, not a being, not a becoming, but a pathos, is the most elementary fact, and becoming, effecting, is only a result of this...”8 [PF 14 [79] of 1888]. Using the concept of force, in a famous fragment, Nietzsche wrote what the world is:

And do you know what “the world” is to me? [...]. This world: a monster of force, without beginning, without end, a fixed, iron quantity of force which grows neither larger nor smaller, [...] a play of forces and force-waves simultaneously one and “many” [...]—This world is the will to power—and nothing besides! [PF 38 [12] of 1885]

The world as will to power can be viewed as forces struggling for more power. A fragment similar to 2 [88] of 1885 makes the same point in terms of force: “all that happens, all movement, all becoming as a determining of relations of degree and force, as a struggle” [PF 9 [91] of 1887]. There is no goal for “all that happens”; Nietzsche thus denied any shadow of teleology, as we can see in fragment 36 [15] of 1885:

If the world had a goal, it could not fail to have been reached by now. If it had an unintended final state, this too could not fail to have been reached. If it were capable at all of standing still and remaining frozen, of “being”, if for just one second in all its becoming it had this capacity for “being”, then in turn all becoming would long since be over and done with, and so would all thinking, all “mind”. The fact of “mind” as a becoming proves that the world has no goal and no final state and is incapable of being.

This fragment shows Nietzsche’s refusal to accept an ultimate goal, and this is the reason for rejecting the idea of the heat death of the Universe (and with it the second law of thermodynamics), which was already being debated during his lifetime. The world as will to power may be read in both the singular and the plural.9 In the singular, the world is will to power. There is nothing beyond, “nothing besides!” There is no metaphysical world. Nietzsche denies a metaphysical world and, like Spinoza,10 considers nature and mankind as the same thing; Nietzsche, in a sense, naturalizes man. In the plural, will to power means a finite multiplicity of forces. The natural and human worlds are manifestations of forces or wills to power. Beyond the will to power itself, the concept of force is also essential to Nietzsche’s cosmological view, the so-called eternal recurrence of the same.

6. According to Nietzsche Source (http://www.nietzschesource.org), the passage translated in [Nietzsche 2003, 26], “our physicists have created God and the world”, is not correct.
7. This is the point where Nietzsche disagrees with Darwinian theory.
8. We must be careful about the use of “fact” in that fragment. As we shall see, Nietzsche denies any fact defended by positivism. The Greek word pathos may be translated as affect as well.

3 The eternal recurrence of the same

The eternal recurrence of the same [die ewige Wiederkunft des Gleichen] is one of the most intriguing concepts in Nietzschean philosophy.

9. See also [Müller-Lauter & Griffin 1992] for an abundant discussion on these two forms of facing the will to power.
10. In [Spinoza 2002, part III] the philosopher criticizes those that have considered “man in Nature as a kingdom within a kingdom”.

In the published works, the eternal recurrence appeared for the first time in Gay Science, a book of 1882, in the section, or aphorism, called “The heaviest weight”:

What if some day or night a demon were to steal into your loneliest loneliness and say to you: “This life as you now live it and have lived it, you will have to live once again and innumerable times again; and there will be nothing new in it, but every pain and every joy and every thought and sigh and everything unspeakably small or great in your life must return to you, all in the same succession and sequence [...]. The eternal hourglass of existence is turned over again and again, and you with it, speck of dust!” Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus? Or have you once experienced a tremendous moment when you would have answered him: “You are a god, and never have I heard anything more divine”. If this thought gained power over you, as you are, it would transform and possibly crush you; the question in each and every thing, “Do you want this again and innumerable times again?”, would lie on your actions as the heaviest weight! [...] [GS § 341].

In this view, the eternal recurrence appears as an ethical thought or a challenge. That is, Nietzsche points to a life experience in which each singular moment, “every thing”, must be approved. Approving and confirming each moment is necessary in order to accept the possibility that the whole of life will be repeated an infinite number of times, “all in the same succession and sequence”. For an affirmative person, each moment is accepted as it is. This is the supreme “yes” to existence, according to Nietzsche. In Ecce Homo the philosopher stresses that eternal recurrence is the “highest possible formula of affirmation” [EH, Thus Spoke Zarathustra, § 1]. On the other hand, the nihilist, who denies the sensible world or the single world,11 is not able to say “yes” and affirm existence. The eternal recurrence, in this view, is thus a necessary condition for overcoming nihilism.12

11. Plato, according to Nietzsche, is considered a nihilist because he created the ideal world, the world “where” the Ideas live. Rejecting the sensible world, Plato formulated the True World against the illusory world (the sensible world). In the same way, Nietzsche accuses Christianity, because “Christianity is Platonism for the ‘people’ ” [BGE, Preface]. In Twilight of the Idols it is written: “the true world is gone: which world is left? The illusory one, perhaps?... But no! We got rid of the illusory world along with the true one!” [TI, How the true world finally became a fable, § 6]. In a sense, Nietzsche assumes only one world, this world. His philosophy is thus immanent.
12. Nihilism, the “uncanniest of guests”, presents several consequences. In fragment 2 [127] of 1885, the philosopher shows its consequences for science, politics and the arts.

3.1 A cosmological interpretation

From another point of view, eternal recurrence is a cosmology or a cosmological interpretation.13 The Nietzschean ingredients for this cosmological point of view are: (1) finite and conserved forces, and (2) infinite time. Translated into the language of physics, the first is indicated by the finiteness of energy in the observable Universe. Moreover, Nietzsche considers force a conserved quantity.14 To him, this is confirmed by the first law of thermodynamics.15 The philosopher wrote about this law and its relation to eternal recurrence: “the principle of the conservation of energy demands eternal recurrence” [PF 5 [54] of 1886]. The second ingredient is the eternity of the world. For Nietzsche, the recurrence of “all in the same succession and sequence” is possible given eternity together with conserved and finite forces: all force configurations, within eternity, would repeat their states. In a sense, Nietzsche works in the same direction as Poincaré,16 who stated his “recurrence theorem” years after the German philosopher began his first thoughts on the physical side of his concept. Contrary to the ethical version, the eternal recurrence of the same as a scientific thought appears mainly in the posthumous fragments. One of the most important is fragment 14 [188] of 1888, called The new world-conception, where Nietzsche wrote:17

if the world may be thought of as a certain quantity of forces and as a certain number of centers of force—and every other representation remains indefinite and therefore unusable—thus it follows that in the great dice game of existence it must pass through a calculable number of combinations. In infinite time, every possible combination would be sometime reached once; even more, it would be reached an infinite number of times.

As we can see, the two ingredients are present: the first is indicated by “a certain quantity of forces”, and the second can be read directly. The attempts to “prove” the eternal recurrence by using scientific concepts can be viewed, following the arguments in [Neves 2013], as an expedient used by the philosopher to attract readers: a scientific form of the eternal recurrence is more acceptable to people immersed in a scientific culture.

13. See also [Krueger 1978], [Nehamas 1980], [Marton 1990] and [D’Iorio 2011] for discussions of the cosmological meaning of this Nietzschean concept. In [Neves 2013, 2015], this discussion is presented in the light of the current state of the art in cosmology; [Neves 2013] discusses the possibility of eternal recurrence in terms of today’s scientific knowledge. Nietzsche himself said that the eternal recurrence “is the most scientific of all possible hypotheses” [PF 5 [71] of 1886].
14. Indeed, a mechanical system described only by conservative forces has its mechanical energy conserved.
15. As we have already seen, the philosopher was a critic of the second law of thermodynamics, but the first law was welcomed by him.
16. A historical description of the Nietzschean eternal recurrence and its similarity to Poincaré’s theorem is found in [Brush 1976, vol. II, 628]. This similarity is stressed in [D’Iorio 2011] as well.
17. I translated this fragment directly from the critical edition of [Nietzsche 1978].
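For comparison, the theorem Poincaré proved in 1890 can be stated in its standard modern, measure-theoretic form (a textbook formulation added here only for orientation, not a claim made in Nietzsche’s text):

\[
\text{If } T : X \to X \text{ preserves a finite measure } \mu \text{ and } \mu(A) > 0,\ \text{then } \mu\text{-almost every } x \in A \text{ satisfies } T^{n}(x) \in A \text{ for infinitely many } n \geq 1.
\]

The analogy with Nietzsche’s two ingredients is visible: the finite measure plays the role of the finite, conserved quantity of force, and the iteration of T through unbounded n plays the role of infinite time.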

3.2 The possibility of an eternal universe

Today, cosmology is typically Einsteinian: cosmological models are constructed from solutions of Einstein or Einstein-type equations. One of the most important features of these cosmological solutions is the problem of the initial singularity. In the standard cosmological model (the Λ-CDM model),18 the initial singularity is called the Big Bang. It is interpreted as the initial state of the Universe, a singular state where physical quantities such as the matter energy density, and geometrical quantities such as the space-time curvature, diverge: they are unbounded at the initial singularity. It is commonly believed that the Big Bang signals a breakdown of Einstein’s equations and that a complete quantum theory of gravity would solve this problem. However, there are possible solutions to this problem that do not invoke a complete quantum theory of gravity. Bouncing cosmologies19 appear today as a possibility for avoiding the initial singularity within current physics. If we allow violations of the energy conditions, regular or nonsingular solutions arise within Einsteinian gravitation; such violations appear acceptable given the observation of accelerated cosmic expansion. The energy conditions link the pressure and energy densities of the cosmological fluid (the fluid description being a good “approximation” for the matter content of the Universe), and they are necessary hypotheses in the singularity theorems. Assuming the energy and geometrical conditions, the singularity theorems show that space-time possesses a singularity or singular state; in cosmology, for example, it is possible to show that matter satisfying the energy conditions leads to the Big Bang, the initial singularity. With energy condition violations, then, the singularity theorems do not apply, and it is possible to avoid the Big Bang.20 In this perspective, the singularity is replaced by a regular transition, a bounce, between a contraction phase and an expansion phase (in which we live today). In such contexts it is possible to construct cyclic cosmologies, in which the Universe passes through successive phases of contraction and expansion. The ekpyrotic cosmology [Lehners 2008], whose name is inspired by Stoicism, presents a cyclic cosmology.
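To make the role of the energy conditions concrete, one can recall the standard relation for a spatially flat Friedmann-Lemaître-Robertson-Walker universe (a textbook summary added here only as an illustration; it is not part of the article’s own argument; G is Newton’s constant and units with c = 1 are used):

\[
H^{2} = \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho,
\qquad
\dot H = -\,4\pi G\,(\rho + p),
\]

so that at a bounce, where the scale factor satisfies \(\dot a = 0\) and \(\ddot a > 0\) (hence \(\dot H > 0\)), one needs \(\rho + p < 0\), i.e., a violation of the null energy condition by the cosmological fluid.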

Moreover, the ekpyrotic cosmology provides solutions to the typical problems of the standard model (the flatness, isotropy, homogeneity and horizon problems), without the inflationary mechanism of the Λ-CDM model.21 That is, the inflationary mechanism, with the exponential expansion it posits for the young universe, is replaced by the ekpyrotic phase, a slow cosmic contraction that precedes our current expansion phase; this contraction phase defines the ekpyrotic cosmology. During the contraction phase, besides solving the problems of standard cosmology, quantum fluctuations are generated that are responsible for structure formation (structures such as galaxies); in the Λ-CDM model, this achievement is due to inflation. Thus the initial singularity problem, or Big Bang problem, as well as the typical standard model problems and structure formation, may be addressed by adopting an alternative cosmological model.22 Contrary to the critics and some Nietzsche scholars, a cyclic cosmological model is possible today even within Einsteinian theory (the ekpyrotic cosmology, although it originates in an extra-dimensional context, may be thought of as an effective four-dimensional theory described by general relativity). The door is open to a “new” point of view,23 in which the cosmos is viewed as uncreated, i.e., as immanent and eternal. The strange death of God, emphasized by Nietzsche, has several meanings; one of the most important is related to the question of the cosmos’ eternity. Modern rationality may refuse the Creator or Demiurge of the Universe, and reject the Big Bang as an instant of creation, because, above all, that instant may be viewed as a shadow of the dead God.24 The question of the possibility of recurrence of the same remains open, because structure formation (galaxies and galaxy clusters), the evaporation of structures in the contraction phase, and the thermodynamic problem (entropy would increase in each expansion phase) are not fully resolved in current science. As Nietzsche points out, his idea of eternal recurrence as a Gedankenexperiment, a thought experiment, assumes the eternal repetition of the same states in order to generate ethical consequences, “all in the same succession and sequence”.25

18. CDM means Cold Dark Matter, a type of non-relativistic matter able to interact only by means of the gravitational interaction. Lambda (Λ) is the cosmological constant introduced by Einstein; today the cosmological constant is the “source” of the cosmic acceleration, according to several models.
19. See, for example, [Neves 2017] and the major review [Novello & Perez Bergliaffa 2008].
20. A detailed study of singularity theorems (the so-called Hawking-Penrose theorems) is found in [Wald 1984, chap. 9].

21. The inflationary mechanism assumes a quantum field—the inflaton—able to expand the space-time fabric exponentially in the initial phase of the cosmos [see Linde 2007, for a review] and to solve the standard model problems.
22. In black hole physics it is possible to solve the problem of singularities within the Einsteinian context as well [see Neves & Saa 2014]. In particular, the singularity inside black holes is removed by energy condition violations.
23. A cyclic view of the cosmos is an old idea. Even Nietzsche writes that “The doctrine of the ‘eternal return’, which is to say the unconditional and infinitely repeated cycle of all things—is Zarathustra’s doctrine, but ultimately it is nothing Heraclitus couldn’t have said also. At least the Stoics have traces of it, and they inherited almost all of their fundamental ideas from Heraclitus” [EH, The Birth of the Tragedy, § 3].
24. See aphorism 108 of Gay Science, where the philosopher writes: “God is dead; but given the nature of people, there may still be for millennia caves wherein they show his shadow.—And we—we must still defeat his shadow as well!” The thesis that the Big Bang may be interpreted as God’s shadow is supported in [Neves 2013].

4 Perspectivism as an epistemological position

Nietzschean perspectivism,26 or his epistemological position, is indicated in a frequently cited posthumous fragment of 1886:

Against positivism, which halts at phenomena—“There are only facts”—I would say: no, facts are just what there aren’t, there are only interpretations. We cannot determine any fact “in itself”: perhaps it’s nonsensical to want to do such a thing. “Everything is subjective”, you say: but that in itself is an interpretation, for the “subject” is not something given but a fiction added on, tucked behind.—Is it even necessary to posit the interpreter behind the interpretation? Even that is fiction, hypothesis. Inasmuch as the word “knowledge” has any meaning, the world is knowable: but it is variously interpretable. It has no meaning behind it, but countless meanings. “Perspectivism” [PF 7 [60] of 1886].

Denying the thing-in-itself,27 the fact (the positivist’s belief), the final truth (because there is no “being” and becoming has no goal) and any truth behind or beyond the sensible world (there is no metaphysical world), Nietzsche claims perspectivism. Knowledge is perspectivistic; it is something human, all too human. In a sense, Nietzsche follows Kant and points out that the generation of knowledge depends on human conditions (bodily structure, in the Nietzschean case). According to Zarathustra’s author, even physics is a perspective or an interpretation, as we can read in Beyond Good and Evil:

Now it is beginning to dawn on maybe five or six brains that physics also is only an interpretation and arrangement of the world (according to ourselves, if I may say so) and not an explanation of the world. [BGE § 14]

25. A debate exists concerning recurrence: is it the recurrence of the same or of the different? I agree with [Krueger 1978], because only recurrence of the same would have an impact on ethical issues.
26. There is an intense debate on Nietzschean perspectivism. See, for example, [Anderson 1998] on truth and objectivity in Nietzsche’s perspectivism, and the volume edited by Babich, which contains several works on the topicality of this philosophical position [Babich 1999].
27. “The ‘thing-in-itself’ is absurd. If I think away all the relationships, all the ‘qualities’, all the ‘activities’ of a thing, then the thing does not remain behind” [PF 10 [202] of 1887].

The world with its “ambiguous character” [see GS § 373] has become infinite, according to the aphorism Our new “infinity”: “the world has once again become infinite to us: insofar as we cannot reject the possibility that it includes infinite interpretations” [GS § 374]. Above all, interpretations do not reveal any fact or anything behind or beyond the sensible world. In a provocative passage, Nietzsche, a philologist by trade, criticizes the physicists and their notion of a law of nature:

You must forgive an old philologist like me who cannot help maliciously putting his finger on bad tricks of interpretation: but this “conformity of nature to law”, which you physicists are so proud of, just as if—exists only because of your interpretation and bad “philology”. It is not a matter of fact, not a “text”, but instead only a naive humanitarian correction and a distortion of meaning that you use to comfortably accommodate the democratic instincts of the modern soul! “Everywhere, equality before the law,—in this respect, nature is no different and no better off than we are” [...]. But, as I have written, this is interpretation, not text [...]. [BGE § 22].

The old philologist shows the historical and temporal character of knowledge. Our “fixation” on the laws of nature, according to Nietzsche, is a feature of modernity: knowledge is created today by assuming concepts, like the concept of isonomia or equality before the law, which are values for us. Once again, the quotation above emphasizes that scientific knowledge does not reveal a fact or a “text”. Let us now use Nietzschean perspectivism to look at two questions of modern physics, wave-particle duality and the gravitational phenomenon, in order to enrich our discussion of this philosophical concept.

4.1 Wave or particle?

Returning to modern physics, Nietzschean perspectivism may help us. With the aid of Nietzsche, dichotomies are banned: consider, for example, the wave-particle duality in quantum mechanics. What is the true reality of matter in the quantum realm? Wave or particle? For Nietzschean philosophy, both or neither! Both, because wave and particle are working interpretations, scientific perspectives on the sensible world (and we shall see below what “working interpretations” means). Neither, because these interpretations do not reveal facts or the thing-in-itself. That is, for Nietzsche, there is no perfect correspondence between mind and “reality”, because “reality” as we know it, the “reality” given by concepts, is not a thing-in-itself; it is a product of human interpretations.28 Nietzsche rejected naive realism.

Moreover, the philosopher rejects Platonic idealism and the existence of mathematical entities. We find this rejection in a posthumous fragment:

Mathematics contains descriptions (definitions) and conclusions from definitions. Its objects do not exist. The truth from its conclusions depends on the correctness of logical thought. [PF 25 [307] of 1884]29

Mathematics is grounded in error, i.e., “the invention of the laws of numbers was made on the basis of the error, dominant even from the earliest times, that there are identical things” [HH I § 19]. Denying identity,30 like Heraclitus, and with it the basis of classical logic, Nietzsche indicated that even mathematics is a human creation. Hence, for example, Nietzschean philosophy “solves” the debate on the reality of the wave function in quantum mechanics: the wave function is only a tool for interpretation (indeed, it is itself an interpretation!).

28. It must be emphasized that Nietzsche was not a solipsism enthusiast.

4.2 Force or space-time curvature?

Another problem in modern physics is: what is the true nature of gravity? Is gravity expressed by force or by space-time curvature? Is the Einsteinian theory (or something else in the future) the true or final answer to the gravitational problem? According to Nietzsche, a final answer is only an illusion. Nietzsche denies final knowledge or a final truth (and even his point of view is an interpretation, a provisional perspective).31 An absolute point of view is absurd and contains a contradiction in terms [see BGE § 16], because every perspective is provisional, temporary. In this sense, both gravitational theories (Newtonian and Einsteinian) are true during some period of time. Within Nietzschean philosophy, truth in general has a polemical definition given in an early text of 1873, On Truth and Lying in a Non-Moral Sense. The philosopher asked “what is truth?” and answered:

[Truth is] a mobile army of metaphors, metonymies, anthropomorphisms, in short a sum of human relations which have been subjected to poetic and rhetorical intensification, translation, and decoration, and which, after they have been in use for a long time, strike a people as firmly established, canonical, and binding; truths are illusions of which we have forgotten that they are illusions, metaphors which have become worn by frequent use and have lost all sensuous vigour, coins which, having lost their stamp, are now regarded as metal and no longer as coins [TL § 1].
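To fix ideas, the two gravitational “perspectives” mentioned above can be summarized by their central equations (standard physics, recalled here only as an illustration of the point; they are not discussed in this form in the article):

\[
F = \frac{G\,M\,m}{r^{2}}
\quad\text{(Newton: gravity as a force)},
\qquad
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}
\quad\text{(Einstein: gravity as space-time curvature)},
\]

with the Newtonian description recovered from the Einsteinian one in the weak-field, slow-motion limit; each “perspective” remains empirically adequate within its own domain.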

29. I translated this fragment directly from the critical edition of [Nietzsche 1978].
30. “The predominant disposition, however, to treat the similar as being identical—is an illogical disposition, for there is nothing identical as such—is what first supplied all the foundations for logic” [GS § 111].
31. “Granted, this is only an interpretation as well—and will you be eager enough to make this objection?—well then, so much the better” [BGE § 22].

The classical philologist (a young philologist when he wrote that text) indicated in the above quotation that truth is a matter of “human relations”, a perspective, as it would be called many years later. Specifically, in Nietzschean philosophy, scientific truth only means that it works for the purposes of the subsistence of the type (the scientist being one of several types) and obeys specific rules, which, in the case of physics, are both mathematical and empirical. Both kinds of rules are interpretations. We have already seen the first; the second is stressed in modernity, because both the divine and the metaphysical criteria of truth are rejected. Above all, the empirical obligation (and scientificity itself) is motivated by the will to truth [Wille zur Wahrheit]. According to Nietzsche, the will to truth is grounded in morality, because the scientist, assuming the empirical obligation, says: “I will not deceive, not even myself” [GS § 344]. Even without God (because “God is dead” in modernity) and the metaphysical world (the True World is a fable), the will to truth remains a dominant impulse that seeks stability, identity. In our scientific time it appears directly related to the sensible world, i.e., the will to truth seeks to obtain what it wants in our single world: truth as something that does not change.32 This is an error, according to Nietzsche, because identity, or anything that, like the metaphysical world, does not suffer corruption, has been rejected. In a sense, from the Nietzschean point of view, scientific work should assume another position: it should look at truths with new eyes, considering them as interpretations, something temporary. Above all, as something human, all too human. Lastly, knowledge serving the subsistence of the type is, for Nietzsche, akin to “food”: a kind of food for the spirit [Geist, without any metaphysical sense], which is metaphorically comparable to a stomach: “[...] ‘spirit’ resembles a stomach more than anything” [BGE § 230]. After all, the scientific type uses science as food to increase his power.

The possibility of several interpretations or perspectives is welcomed in Nietzschean philosophy. In On the Genealogy of Morality, Nietzsche said: “[...] the more eyes, various eyes we are able to use for the same thing, the more complete will be our ‘concept’ of the thing, our ‘objectivity’ ” [GM III § 12]. In this sense, “objectivity” in Nietzschean philosophy means having several perspectives on the same thing. Each perspective is a manifestation of impulses or wills to power. Throughout history, mankind has lived with, and within, several perspectives or “truths”. The Einsteinian and Newtonian theories are both “true”. Of course, the Einsteinian theory contains further elements and is more sophisticated than the Newtonian one; it is therefore more “objective” (e.g., the dual aspect of matter in quantum mechanics likewise gives us a more “objective” look).

32. Plato, in The Republic (VI, 485b), presents the philosopher’s nature and his love of truth. Truth is indicated “as reality which always is, and which is not driven this way and that by becoming and ceasing to be”. This is a common position even today, and, accordingly, truth is revealed by science because true scientific theories work independently of time. However, the geocentric model functioned during past centuries but it is now ruled out.

However, like any theory, it is still an interpretation, and it represents an increase of power for those who created and supported it. In Nietzschean philosophy, each perspective reflects the plurality of the human body. That is to say, “our body is, after all, only a society constructed out of many souls” [BGE § 19]. In his immanent philosophy, a soul means impulses or wills to power. Nietzsche has a plural vision, a perspectivistic view of “reality”.

5 Physics or science as a game

The world as a game may be read in several parts of Nietzsche’s works. As early as 1872, the young philosopher wrote the text Homer’s Contest. The text indicates an aspect that remained unaltered in his mature works: the concept of agon. The Greek concept of agon indicates a contest, dispute, or struggle. For the mature Nietzsche, both becoming and the agon are subsumed under the concept of will to power. Furthermore, the world as will to power also means the world as a game or play, as we can read in the fragment 38 [12] of 1885 cited above: the world “as a play of forces and force-waves [...]”. These are the ingredients of his Dionysian world view. Nietzsche found company in Heraclitus, a thinker who defended a similar point of view.

The affirmation of passing away and destruction that is crucial for a Dionysian philosophy, saying yes to opposition and war, becoming along with a radical rejection of the very concept of “being”—all these are more closely related to me than anything else people have thought so far. [EH, The Birth of the Tragedy, § 3]

On becoming as a game, Nietzsche and Heraclitus are in agreement.33 In a sense, this view may be articulated even today, using our cosmology. In [Neves 2015], it is shown that Nietzsche’s idea of a Dionysian cosmology, with the concepts of becoming and struggle, may be related to the notion of cosmological eras, or eras of domination. In cosmology, the space-time fabric and its dynamics, that is, its expansion, contraction or staticity, are determined by Friedmann’s equations. These equations provide the dominant term, the one that drives the space-time dynamics for a specific period; each term corresponds to a cosmic fluid component in the equations. First, when the Universe was young, radiation dominated the expansion, then matter; in our current time, dark energy begins to dominate the cosmic expansion. In a way, this picture suggests a game or struggle among the matter-energy forms (radiation, matter and dark energy): in each era, one matter-energy form dominates.
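For readers of physics, the picture of alternating eras can be made concrete with the standard flat Friedmann equation and the usual scaling of each component with the scale factor a (a textbook summary, added here only to illustrate the “eras of domination”):

\[
H^{2} = \frac{8\pi G}{3}\bigl(\rho_{r} + \rho_{m} + \rho_{\Lambda}\bigr),
\qquad
\rho_{r} \propto a^{-4},\quad
\rho_{m} \propto a^{-3},\quad
\rho_{\Lambda} = \text{const},
\]

so that radiation dominates when a is small, matter at intermediate values, and the cosmological constant (dark energy) when a is large: the succession of “dominations” evoked in the text.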

In cyclic cosmology, the eras of domination alternate: the sequence radiation-matter-dark energy is repeated, and the agon is suggested. The world as a game is a good metaphor from this cosmological perspective. As part of the Dionysian world, science is also a result of contests. The agon, or contest between scientific perspectives, is decisive for scientific development. As we have seen, science, in particular physics, obeys rules. Then, as part of the Dionysian world, science may be viewed as a game. This conclusion comes from the concepts of will to power and perspectivism: scientific interpretations struggle for dominance. Science as a game means the most influential game today; from the beginning of modern times, science has been the dominant game. The scientist is then a type of player (someone who obtains from science the subsistence of the type), immersed in such a sophisticated game and, in general, not realizing that he or she is playing it.

33. In a fragment attributed to Heraclitus one reads: “Lifetime is a child at play, moving pieces in a game. Kingship belongs to the child” [Kahn 1979, 71].

6 Final comments

Contrary to common belief, Nietzsche was neither obscure nor an irrationalist thinker. Perhaps the reason for this opinion lies in his work: writing in aphorisms, Nietzsche composed his work differently from the scientific model. Denying the omnipotence of reason, the philosopher pointed out the limitations of the latter. However, in his published work and posthumous fragments the German philosopher exhibited admiration for science and its rationality. His concepts were created with the aid of the natural sciences [Naturwissenschaften]. In particular, relying on physics, Nietzsche developed the concept of will to power. With the Boscovichean concept of force being more fundamental than the concept of matter, the German philosopher thought about the entire world in terms of forces in struggle. His position on the nature of reality is more than relevant today. A philosophy without facts denies a world in itself, or a thing-in-itself. Nietzschean perspectivism is an epistemological option. “There are no facts, only interpretations” is the “fundamental truth” of Nietzschean philosophy. Nietzsche stressed the perspectivistic view of knowledge because “this world is will to power”, i.e., this world is plural, as are the interpretations in physics today. “Long live physics!”,34 wrote the philosopher in Gay Science.

There is no doubt about the influence of physics on Nietzsche’s main concepts: will to power and eternal recurrence depend on the physical concept of force. But the converse is not true: there is a lack of Nietzsche’s influence on physics and physicists. However, from our point of view (a perspective), Nietzschean perspectivism is a good option for interpreting the modern results of physics, banishing false dichotomies and the improbable final truth.

34. See [GS § 335], where Nietzsche describes the importance of the physicist’s honesty.

From the important concepts of will to power and perspectivism, I have derived an interpretation in which science is viewed as a game. The Dionysian view reveals the world as a contest, a game among wills to power. The multiplicity of perspectives in science obeys imposed rules and presents science as a game, and scientific activity as agon.

Acknowledgments

This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Brazil (Grant No. 2013/03798-3). I thank IMECC-UNICAMP for kind hospitality and Elena Senik for reading the manuscript.

Bibliography

Anderson, R. Lanier [1998], Truth and objectivity in perspectivism, Synthese, 115(1), 1–32, doi: 10.1023/A:1004984312166.

Babich, Babette (ed.) [1999], Nietzsche, Epistemology, and Philosophy of Science, London: Kluwer Academic Publishers.

Brush, Stephen [1976], The Kind of Motion we Call Heat: A history of kinetic theory of gases in the 19th century I and II, Amsterdam: Elsevier.

D’Iorio, Paolo [2011], The eternal return: Genesis and interpretation, Agonist, IV(1), 1–43.

Jammer, Max [1999], Concepts of Force, New York: Dover.

Kahn, Charles [1979], The Art and Thought of Heraclitus, Cambridge: Cambridge University Press.

Krueger, Joe [1978], Nietzschean recurrence as a cosmological hypothesis, Journal of the History of Philosophy, 16(4), 435–444, doi: 10.1353/hph.2008.0557.

Lehners, Jean-Luc [2008], Ekpyrotic and cyclic cosmology, Physics Reports, 465(6), 223–263, doi: 10.1016/j.physrep.2008.06.001.

Linde, Andrei [2007], Inflationary cosmology, in: Inflationary Cosmology, edited by M. Lemoine, J. Martin, & P. Peter, Berlin; Heidelberg: Springer Berlin Heidelberg, 1–54, doi: 10.1007/978-3-540-74353-8_1.

Marton, Scarlett [1990], Nietzsche, das forças cósmicas aos valores humanos, São Paulo: Editora Brasiliense.

Müller-Lauter, Wolfgang & Griffin, Drew E. [1992], Nietzsche’s teaching of will to power, Journal of Nietzsche Studies, 4–5, 37–101, doi: 10.2307/20717572.

Nehamas, Alexander [1980], The eternal recurrence, The Philosophical Review, 89(3), 331–356, doi: 10.2307/2184393.

Neves, Juliano C. S. [2013], O eterno retorno hoje, Cadernos Nietzsche, 32, 283–296.

—— [2015], Cosmologia dionisíaca, Cadernos Nietzsche, 36(1), 267–277.

—— [2017], Bouncing cosmology inspired by regular black holes, General Relativity and Gravitation, 49(9), 124, doi: 10.1007/s10714-017-2288-6.

Neves, Juliano C. S. & Saa, Alberto [2014], Regular rotating black holes and the weak energy condition, Physics Letters B, 734, 44–48, doi: 10.1016/j.physletb.2014.05.026.

Nietzsche, Friedrich [1978], Sämtliche Werke. Kritische Studienausgabe, Berlin; New York: de Gruyter.

—— [2001], Gay Science, Cambridge: Cambridge University Press, translated by Josefine Nauckhoff.

—— [2002], Beyond Good and Evil, Cambridge: Cambridge University Press, translated by Judith Norman.

—— [2003], Writings from the Late Notebooks, Cambridge: Cambridge University Press, translated by Kate Sturge.

—— [2005a], Human, All Too Human, Cambridge: Cambridge University Press, translated by R. J. Hollingdale.

—— [2005b], The Anti-Christ, Ecce homo and Twilight of the Idols, Cambridge: Cambridge University Press, translated by Judith Norman.

—— [2007a], The Birth of Tragedy and Other Writings, Cambridge: Cambridge University Press, translated by Ronald Speirs.

—— [2007b], On the Genealogy of Morality, Cambridge: Cambridge University Press, translated by Carol Diethe.

Novello, Mario & Perez Bergliaffa, Santiago [2008], Bouncing cosmologies, Physics Reports, 463(4), 127–213, doi: 10.1016/j.physrep.2008.04.006.

Spinoza, Baruch [2002], Complete Works, Indianapolis; Cambridge: Hackett Publishing Company, translated by Samuel Shirley.

Wald, Robert [1984], General Relativity, Chicago: University of Chicago Press.

Adresses des auteurs

Bernadette Bensaude-Vincent, Université Paris I Panthéon-Sorbonne, 17, rue de la Sorbonne, 75231 Paris Cedex 05 – France, [email protected]

Cécile Legallais, Université de technologie de Compiègne, UMR CNRS 7338 Biomécanique et Bioingénierie, Rue du Dr Schweitzer, CS 60319, 60203 Compiègne Cedex – France, [email protected]

Laura Fontanella, Université Aix-Marseille, CNRS, Institut de Mathématiques de Marseille, 163, avenue de Luminy, 13009 Marseille – France, [email protected]

Sacha Loeve, IRPhiL, Université Jean Moulin Lyon 3, 18, rue Chevreul, 69007 Lyon – France, [email protected]

Xavier Guchet, Université de technologie de Compiègne, UMR CNRS 7338, Rue du Dr Schweitzer, CS 60319, 60203 Compiègne Cedex – France, [email protected]

Juliano C. S. Neves, Universidade Estadual de Campinas (UNICAMP), Instituto de Matemática, Estatística e Computação Científica, CEP 13083-859 Campinas, SP – Brazil, [email protected]

Thierno Guèye, Université Cheikh Anta Diop, Faculté des Sciences et Technologies de l’Éducation et de la Formation, Boulevard Habib Bourguiba, BP 5036, Dakar-Fann – Sénégal, [email protected]

Alfred Nordmann, Institut für Philosophie, Technische Universität Darmstadt, Dolivostr. 15, 64293 Darmstadt – Germany, [email protected]

Christian Joachim, Pico-Lab CEMES-CNRS, 29, rue J. Marvig, BP 94347, 31055 Toulouse Cedex – France, [email protected]

Gry Oftedal, IFIKK, University of Oslo, Postboks 1020 Blindern, 0315 Oslo – Norway, [email protected]

Jonathan Simon, AHP–PReST, Université de Lorraine, Université de Strasbourg, CNRS, 91, avenue de la Libération, BP 454, 54001 Nancy Cedex – France, [email protected]

Chris Toumey, Center for Environmental Nanoscience & Risk, Department of Environmental Health Sciences, Arnold School of Public Health, University of South Carolina, Columbia SC 29208 – USA, [email protected]

Louis Ujéda, 75015 Paris – France, [email protected]

Christophe Vieu, LAAS-CNRS, 7, Avenue du Colonel Roche, 31400 Toulouse – France
