
War and Algorithm

OPEN ACCESS

The open access publication of this book is made possible by a grant from Riksbankens Jubileumsfond (The Bank of Sweden Tercentenary Foundation) for the Advancement of the Humanities and Social Sciences.

War and Algorithm

Edited by Max Liljefors, Gregor Noll, and Daniel Steuer

London • New York

Published by Rowman & Littlefield International Ltd
6 Tinworth Street, London, SE11 5AL, United Kingdom
www.rowmaninternational.com

Rowman & Littlefield International Ltd. is an affiliate of Rowman & Littlefield
4501 Forbes Boulevard, Suite 200, Lanham, Maryland 20706, USA
With additional offices in Boulder, New York, Toronto (Canada), and Plymouth (UK)
www.rowman.com

Selection and editorial matter © Max Liljefors, Gregor Noll, and Daniel Steuer, 2019

Copyright in individual chapters is held by the respective chapter authors.

All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without written permission from the publisher, except by a reviewer who may quote passages in a review.

British Library Cataloguing in Publication Data

A catalogue record for this book is available from the British Library

ISBN: HB 9781786613653 PB 9781786613646

Library of Congress Cataloging-in-Publication Data Available
ISBN 9781786613653 (cloth : alk. paper)
ISBN 9781786613646 (paper : alk. paper)
ISBN 9781786613660 (electronic)

The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences—Permanence of Paper for Printed Library Materials, ANSI/NISO Z39.48-1992.

Contents

Acknowledgments vii
List of Illustrations ix

1 Introduction: Our Emerging World of War 1
Max Liljefors, Gregor Noll, and Daniel Steuer
2 Prolegomena to Any Future Attempt at Understanding Our Emerging World of War 9
Daniel Steuer
3 Anthropokenosis and the Emerging World of War 53
Howard Caygill
4 War by Algorithm: The End of Law? 75
Gregor Noll
5 Law’s Ends: On Algorithmic Warfare and Humanitarian Violence 105
Sara Kendall
6 Omnivoyance and Blindness 127
Max Liljefors
7 Of the Pointless View: From the Ecotechnology to the Echotheology of Omnivoyant War 165
Allen Feldman



8 Visions 191
Max Liljefors, Gregor Noll, and Daniel Steuer

Bibliography 205
Index 223
About the Authors 231

Acknowledgments

We would like to express our sincere gratitude to colleagues who have generously commented on parts of this manuscript: Leila Brännström, Antonia Hofstätter, and Jayne Svenungsson. We would also like to thank a number of people who helped to ensure that a rough manuscript was transformed into a book. With subtlety and candor, the indefatigable Tim Carter has edited our language. Frankie Mace at Rowman & Littlefield provided us with an extraordinary amount of support throughout the whole process. Last but not least, two anonymous referees offered helpful remarks on our book proposal.

Max Liljefors would like to thank Lila Lee-Morrison, Aud Sissel Hoel, Kristin Veel, Nanna Bonde Thylstrup, and Daniela Agostinho for valuable discussions throughout various stages of the project.

Gregor Noll would like to thank Matilda Arvidsson, Leila Brännström, Moa Dahlbeck, Markus Gunneflo, Valentin Jeutner, Amin Parsa, and Aleksandra Popovic for commenting on his chapter. He would like to acknowledge the generous support of the Marianne and Marcus Wallenberg Foundation, the Ragnar Söderberg Foundation, the Swedish Foundation for Humanities and Social Sciences, and the Torsten Söderberg Foundation.

Daniel Steuer would like to thank Megan Archer, Roland Begenat, Tim Carter, Paul Davies, Antonia Hofstätter, German Primera, Jeanne Riou, and Philipp Schönthaler for discussions and valuable comments on the project throughout its various stages.


List of Illustrations

2.1 The universal monoculture of informational naturalism 29
4.1 Autonomous AI as part of cybernetics 81
4.2 Monotheism is to law what cybernetics is to AI 95
5.1 The Project Maven seal. Image source: Tom Simonite, “Pentagon will expand AI project prompting protests at Google,” Wired, May 29, 2018 (available at https://www.wired.com/story/googles-contentious-pentagon-project-is-likely-to-expand/) 116
6.1 Hieronymus Bosch (or follower), The Seven Deadly Sins and the Four Last Things, 1505–1510. Oil on poplar panel, 47 × 55 in. (119.5 × 139.5 cm). Museo del Prado, Madrid (detail) 130
6.2 Gabriel Orozco, Island within an Island (Isla en la Isla), 1993. Silver dye bleach print (Cibachrome), 16 × 20 in. (40.64 × 50.8 cm). Courtesy of the artist and Marian Goodman Gallery 136



6.3 Walter Hahn, Dresden, view from the city hall tower on the ruined city center, 1945. Negative, 5 × 7 in. (13 × 18 cm). Courtesy of SLUB Dresden/Deutsche Fotothek 142
6.4 Gerhard Richter, Townscape Paris, 1968. Oil on canvas, 78.7 × 78.7 in. (200 × 200 cm). Froehlich Collection, Stuttgart. © Gerhard Richter 2019 (0045) 144
6.5 Aerial view of the destruction after the atomic bomb over Hiroshima, 1945. U.S. Navy National Museum of Naval Aviation, Commander Francis N. Gilreath Collection 145
6.6 NASA, first color photograph of the whole earth, shot from the ATS-3 satellite, November 10, 1967. Wikimedia Commons 151
6.7 NASA, the “Blue Marble” photograph of the earth, shot from the Apollo 17 spacecraft, December 7, 1972. Wikimedia Commons 152
6.8 Caspar David Friedrich, Wanderer above the Sea of Fog, ca. 1817. Oil on canvas, 37.3 × 29.4 in. (94.8 × 74.8 cm). Kunsthalle Hamburg, Hamburg 157
6.9 Gilles Mingasson, A drone pilot and a drone sensor operator practice on a simulator at Holloman Air Force Base in New Mexico, 2012. Courtesy of Gilles Mingasson 158
6.10 The “phases” of a perspectival gaze. Max Liljefors 159

Chapter 1

Introduction: Our Emerging World of War

Max Liljefors, Gregor Noll, and Daniel Steuer

In June 1945, a group of American nuclear physicists, led by Nobel laureate James Franck, presented the U.S. government with a report on the likely consequences of dropping an atomic bomb. The Franck report advised that, rather than dropping the bomb on Japan without warning, the United States should publicly demonstrate the effects of the weapon, so as to allow for “the possibility of taking into account the public opinion of [the U.S.] and of the nations before deciding whether these weapons should be used against Japan. In this way, other nations may assume a share of responsibility for such a fateful decision.” The U.S. government decided otherwise, and atomic bombs were dropped on Hiroshima and Nagasaki on August 6 and August 9, 1945.

Why is this sequence of events relevant to a book on war and algorithms? The situation today does not, at first glance, seem analogous to that of 1945: no world war is about to conclude, and it does not seem that some single decision that will transform the fates of millions is imminent. Neither are we privy to the details of some new Manhattan project for the twenty-first century. But the rejection of the Franck report’s recommendations tells us at least this much: that a war, even if it is about to be won, is not a good time to begin a public deliberation over transformative military technology. And while the actual use of a nuclear weapon is far more grave than even the most disquieting truths publicly revealed about new developments in military technology, we now know that a weaponized form of knowledge has already been established in the world, and that the world has already been transformed by it. Convening the court of public opinion to rule on the nuclear bomb would not have done away with that technology, even if it might have delayed its use. The same applies to the military use of algorithms today. The Franck report proposed demonstrating the bomb’s effects in order to provide that
court of public opinion with evidence. The evidence of the effects of algorithmic warfare available to us today is much more difficult to make sense of than any such demonstration of the effects of the atom bomb would have been. Nonetheless, we must at least attempt to make sense of this evidence.

The first point we wish to make in this introduction is that public debates about radically new and powerful ways of waging war must take place long before any conflicts that might motivate their employment. Once technologies have emerged and matured, history will have invested itself in the new possibilities, and the space of politics will have shrunk. Because we are approaching our subject before it has fully matured, describing and analyzing it presupposes a measure of speculation. Franck and his colleagues saw, with appropriate clarity, that radically innovative ways of waging war will radically transform our societies, even in times of peace. Major conurbations, as suitable targets for the use of nuclear weapons, would become huge liabilities for the United States, as the report made clear. Franck and his colleagues observed that the bomb would transform the way Americans lived and worked; in this way, they went beyond the science of nuclear physics into the broader sciences of the social and the political. In doing so, they expressed a general fact about technology. Technology not only serves the purpose for which it is invented but also determines, through its form, a particular type of social synthesis: a way in which humans who use that technology will find themselves organized in space and time. Nuclear physics endowed the Franck report with authority, but its content could have been authored by any reasoning human being daring to extrapolate from technological ground to social form. It was most definitely written for such reasoning human beings.

The second point to be made, then, is that reflection on an emerging world of war cannot be restricted to certain specialized disciplines. Those who reflect on this world must give themselves the speculative means to do so. These means are to be found in the various traditions of thought, speech, and visual representation that are at work in the novel forms warfare takes, traditions that thus make the emergence of these forms seem plausible or even historically inevitable. Taking our cue from these means, we may extrapolate from them into the future, attempting, as it were in a different key, to do what Franck’s group did for nuclear warfare (even if they did it, through no fault of their own, all too late). Any resistance can only grow from the point that emerges once we have gone far enough, deep enough, in our pursuit of the traditions of representation that are now morphing into a world of war. It is at that point that different views and possible trajectories might open up, perspectives from which it becomes clear that the seemingly inevitable escalation of war is in fact not inevitable at all.

The third point to mention at the outset is that such a perspective cannot be reached easily, and that, even once it has been reached, it cannot be established
once and for all. We do not purport to reach it in what follows—we attempt only to offer a language and exchange that may allow those engaging with the book themselves to work toward its achievement. We hope that this might be the beginning of a resistance whose other conditions remain as yet unknown.

At the start of this project came a workshop on the theme of “Grounding War,” held in 2013 at the Faculty of Law at Lund University, convened by Gregor Noll, with funding from the Torsten and Ragnar Söderberg Foundations. Out of this workshop grew a conversation between Noll, Daniel Steuer, and Max Liljefors that continued over two, three, five years, mostly via e-mail, but importantly also in the form of occasional yet intense and absorbing real-life discussions; the three of us would lock ourselves in Noll’s office at the Faculty of Law for days of explorative dialogue, developing our respective perspectives on our common topic: the growing power of the algorithm in emerging forms of warfare.

That topic, we saw, was rapidly expanding, as technological, political, and economic developments accelerated like intensifying storm winds that seemed—and still very much seem—to be propelling us toward a menacing future. We also knew, or felt intuitively, that in spite of the increasing speed with which things were moving—or, rather, because of this—our thinking and writing needed to be slow, to slow down, in order not to be carried away by the intoxicating velocity with which things were progressing and, instead, to carve out alternative ways of reflecting and speculating about these developments. The very fact of the complexity of the different interweaving temporalities and paces itself grew into a recurrent subtheme in our writings. It also made itself felt in the very act of writing; writing became an extended dialogue over relatively long stretches of time.

We come from different disciplines: philosophy, law, art history. We found that our prolonged dialogue allowed us, gradually, to speak our ways out of our professional confines, to achieve voices that sounded, to some extent, new even to ourselves. We do not call this project “interdisciplinary,” but we think it expressed through the process of its development, and retains in its result, a measure of disciplinary unruliness.

At some point, a seemingly simple question arose: what do we want? Steuer remarked that our joint inquiry is fueled by three fears, emerging from each of our particular academic backgrounds. One is the fear of the philosopher: there will be nothing left to be understood, no one left to be understood. Another is that of the lawyer: there will be nothing and no one left to be judged. The third is the fear of the art historian: there will be nothing left to see and nothing to interpret. Our chapters in this book are, in a way, attempts to counter the emergence of those disturbing scenarios, scenarios that threaten when matters of life and death become increasingly subsumed under the growing power of the automated decisions of machines: the eclipse of critical
reflection, the disappearance of human responsibility, and the elimination of alternative perspectives.

We are deeply grateful to our three respondents, Howard Caygill, Allen Feldman, and Sara Kendall, for taking on the task of considering our chapters in their contributions to this book. Their pieces are, in fact, much more than mere “responses.” Rather, they constitute independent interventions with intellectual trajectories of their own, which—in a gesture of intellectual generosity—engage in a dialogue with our texts. In this book, they are ordered such that they follow after the chapter to which they are primarily addressed, but these independent inquiries into our joint topic are just as important as the chapters to which they respond. The resulting tripartite structure of this book, then, reflects three limits with which machine intelligence confronts the human mind: the limits of understanding, law, and vision.

Steuer’s chapter, “Prolegomena to Any Future Attempt at Understanding Our Emerging World of War,” takes its cue from the observation that recent literature on warfare contains a proliferation of conceptual indistinctions, including, ultimately, that between war and peace themselves. The chapter develops the dystopian notion of global partisan warfare through a discussion of the literature on so-called new wars, hybrid wars, wars among the people, or gray zone conflicts. Steuer then argues that mimetic escalation, the circle of mutual imitation of tactics and strategy, between partisans and regular armies, insurgents and counterinsurgents, and so forth, ultimately renders these and many other distinctions obsolete. In a global theater of war, the aim of large-scale actors is to become the accepted security provider and the controller of the master narrative. Small-scale actors aim to optimize their individual positions. Crucial tools for both are access to data and data analysis across various spheres of human life, which allow for a merging of military and business models. The chapter then turns to the conceptual and technological prehistory of what Steuer calls “informational naturalism,” a universal monoculture that views the world as an ensemble of data. The concepts of energy, information, and money, and the technologies they inform, put into practice the idea of universal convertibility and a vision of the world in which, in principle, everything can be tagged and tracked. The question is: toward what end is such a system directed, and who are the actors and forces that decide on the direction it takes?

Howard Caygill’s response, “Anthropokenosis and the Emerging World of War,” takes Steuer’s dystopian argument even further by suggesting that the present moment may be characterized by a retreat of the human world, anthropokenosis, a possibility currently discussed within the discourses on the “Anthropocene” and the sixth or “human” mass extinction event. Caygill adopts a radically nonanthropocentric perspective from which what we are witnessing might better be understood as the planet emptying itself of the
human world. He then uncovers the anthropocentric assumptions that are retained in the Anthropocene discourse and in earth system science. Without those assumptions, what seems catastrophic for the human species, and may well result in its extinction, is business as usual for the planet, a fleeting moment in its long geological history. Thus, the emerging world of war destroys the human world but not the earth. What makes the present situation all the more futile, Caygill concludes, is the blindness to the fact that humans are increasingly powerless. The much-discussed age of the Anthropocene is in fact the age of anthropokenosis, the absencing of the human.

Noll’s chapter, “War by Algorithm: The End of Law?” asks whether it is possible to subject algorithmic forms of warfare to the rule of law. It begins by looking at the way code rules, using an AI weapons system proposed in 2013 as an example. This system possesses a trait that is representative of the integration of artificial agents and human agents: it suggests that the human operator is at the top of the decision hierarchy, exercising something akin to free will, whereas the operator is actually tethered to the system’s reductionist logic, according to which truth emerges from the signal strength of neural connections, not from conscious human cognition. The chapter then moves on to the question of the rule of law, to which this reductionism poses a significant threat: as soon as we seek to apply contemporary legal rules to algorithmic weapons systems, standard legal questions about human intention turn out to be unanswerable. The central part of the chapter demonstrates that these problems cannot be overcome by means of new legislation. Since the advent of monotheism, central to the concept of law has been that it is studied by humans. But with AI, human study can only begin once the system has already made a decision. This marks an epochal shift, and our traditional understanding of law—indeed, the only understanding of law we currently have—is simply incapable of bridging it.

Sara Kendall’s response, “Law’s Ends: On Algorithmic Warfare and Humanitarian Violence,” argues that the worrying prospect of Lethal Autonomous Weapons Systems (LAWS), which follow what Noll calls an “excarnate” form of law, should not distract us from the fact that the conventional, “incarnate” law of the Western monotheistic tradition does not necessarily do a better job. The United States’ “unable or unwilling” theory, for instance, which stipulates that intervention in another territory is justified if its government cannot or does not want to act, demonstrates that, even under the present conditions of international law, sovereignty is often suspended. There is unconditional sovereignty, and then there is conditional sovereignty, and whether a state is afforded one or the other is decided not least by technological and geopolitical asymmetries. Thus, even if LAWS could be brought under the law, this would not guarantee an ethically desirable outcome. Rather, the colonialist features of the law of armed conflict
and international humanitarian law, protecting certain actors and limiting violence to certain regions, would continue to operate.

Kendall uses the example of the U.S. military’s “Algorithmic Warfare Cross-Functional Team,” informally known as “Project Maven,” to demonstrate the logic of “preemptive temporality”: the attempt to turn big data into an advantage “by ‘owning’ more of the ‘blink’ between perception and response.” The project illustrates how the military-corporate assemblage pushes inexorably toward the singularity. Ultimately, Kendall concludes, it might be more promising to adopt a counterstrategy of “ethical preemption”; in this connection, she cites the protest by Google employees over the company’s involvement in Project Maven, which ultimately forced the company to withdraw from it. Human judgment might play a more useful role in such ethical preemption than in the context of a humanitarian law steeped in violence.

Liljefors’s chapter, “Omnivoyance and Blindness,” outlines three kinds of blindness arising out of the use of visual artificial intelligence, or “technovision,” in war: first, blindness to the things surveilled by technovision; second, blindness to the opacity of the techno-visual apparatus itself, its impenetrability to human judgment; and, third, “blindness to the blindness,” that is, a structural ignorance of any limit to the power of sight, a loss of horizon and perspective. Each discrete kind of blindness corresponds to a particular type of superiority claimed on behalf of technovision. “Omnivoyance” is a central concept in the critique offered here. This refers to the gaze of the subject of a portrait following the viewer, even as she moves about the room, a phenomenon that to medieval Christians represented God’s omniscient gaze. Paul Virilio has applied the term to the “god-like” powers of satellite-based surveillance systems; they are, he says, “omnivoyant.” But to understand the impact of technovision, Liljefors argues, we must appreciate that any claim to omnivoyance is structurally accompanied by a blindness of equal scope: the “god-likeness” of military techno-visual power depends on our blind faith in it. If we obey, we risk losing sight of the human horizon in matters of war. In his chapter, Liljefors combines analytical reasoning with an “illustrated myth” of the evolution of human ascent from bipedalism to drone warfare.

Allen Feldman’s response, “Of the Pointless View: From the Ecotechnology to the Echotheology of Omnivoyant War,” looks at omnivoyant war from the perspective of two intertwined schemas. He presents a topological phenomenology of omnivoyant algorithmics as an ecotechnological enframing of the world and an echotheology of omnivoyant visuality through the apophatics of Nicholas de Cusa and the pastoral governmentality of Foucault. Omnivoyance, Feldman argues, drawing on Reiner Schürmann, is the “law of laws.” It is an “ontologizing political .” Omnivoyance is by
definition a pointless view, as any concrete perspective must be selective. The omnivoyance of machines excludes what Feldman calls the “computationally non-descript”: data sets are acted upon by the technological as if they were the things themselves. Feldman then turns to a comparison of the dream of omnivoyance with Nicholas de Cusa’s De Visione Dei, which may be translated either as “our vision of God” or as “God’s vision of us.” What this polysemy suggests is that the “representational sovereign”—the “omnivoyeur”—is the only one unaffected by the play of mirroring sights and actions. In Jakob von Uexküll’s terminology, he is not part of the Umwelt but part of a Gegenwelt, outside of the labyrinth of gazes. De Cusa’s focus on the seeing of seeing allows him, inadvertently, to describe the heart of modern surveillance. Feldman concludes by relating this state of affairs to Foucault’s notion of pastoral care. The omnivoyeur is the anonymous master of everything, and he punishes all who try to look away and thus not be seen.

The respondents were given the final versions of our chapters as they appear here. The final part of the book, “Visions,” was completed only later and should be seen as separate from the rest of the book insofar as the respondents did not read it before publication. It presents a series of short meditations on questions about the emerging world of war that were raised by our chapters, themes that turned out to coincide and coalesce without us having pursued them intentionally. Each of these meditations was, initially, written by one of us before being revised by the other two; now, however, it is often not clear even to us who wrote a particular passage first. In both form and content, this part of the book is unapologetically speculative and does not abide by academic etiquette. The theological dimension of these meditations emerged in the course of writing—not against our will but not as part of an intellectual plan either. We would like to stress, though, that we do not aim to contribute to theological debates; rather, the theological motifs help to illuminate the themes of our book.

As the project developed over the past six years, we repeatedly came up against two difficulties in particular, both of which are associated with temporality. One was the experience, which all three of us had, that the subject matter was always running ahead of us, that we could never quite catch up with the latest developments. Our technological future seemed to be being developed not only partly in secret but also at a speed that outstripped our capacities of reflection. The other was the problem of specifying what exactly was new in the phenomena we were chasing and what was simply a continuation of existing tendencies, features that might possibly be as old as warfare itself. Our chapters touch upon these two difficulties, but they do not fully address them. Nevertheless, these difficulties form the basis of two conclusions that have determined the way we approach our themes more generally. First, the
runaway logic of technological development serves an ideological purpose, that of immunizing inventions and innovations against criticism (“your criticism would have been appropriate for version 2.0, but it does not apply to version 2.1”). Second, the question of “old versus new” may distract us from the specificity of what is taking place and thus also help to immunize these—potentially blind—practices against criticism. Our original title for the project, now the title of this introduction, tries to avoid the pitfalls of the old-versus-new conundrum by emphasizing the specific temporality and spatiality of “emergence,” a process that is not simply a unilinear progression through empty time or space but closer to an oscillation between appearance and disappearance, between the ability and the inability to see.

Chapter 2

Prolegomena to Any Future Attempt at Understanding Our Emerging World of War

Daniel Steuer

It may be that science & industry & their progress, are the most enduring thing in the world today. That any guess at a coming collapse of science & industry were for now, & for a long time to come, simply a dream, & that science & industry after & with infinite misery will unite the world, I mean integrate it into a single empire, in which to be sure peace is the last thing that will then find a home. For science & industry do decide wars, or so it seems. —Ludwig Wittgenstein in 1947, Culture and Value

The real surprise, however, would be if we had arrived at anything better . . . by using the most defective method possible, one that claims to evaluate every war in terms of the ends pursued and not by the means employed. . . . [I]n each era war consists of a quite specific kind of violence, and we must study its mechanism before passing any judgment. . . . One can neither solve nor even state a problem relating to war without having first of all taken apart the mechanism of the military struggle, that is, without having analysed the social relations it implies under given technical, economic, and social conditions. —Simone Weil in 1933, “War and Peace”

In these two quotations, from Ludwig Wittgenstein and Simone Weil—one written before, the other shortly after World War II—we find at least three important observations that will guide our preliminary reflections on the emerging world of war: the increasing integration of the world under scientific and economic principles will not automatically lead to peace; war must be judged not by the ends pursued but by the means employed; and these means, the mechanisms of military struggle, can only be understood on the
basis of the technical, economic, and social conditions on which they rest. In light of this, the following thoughts cannot hope to do more than prepare some of the ground for asking the right questions regarding the emerging world of war.

The first section, “A World of Indistinctions: Into the Gray Zone,” introduces the notion of global partisan warfare as the dystopian endpoint of mimetic escalation, that is, the circle of mutual imitation, in terms of tactics and strategy, between partisans and regular armies, insurgents and counterinsurgents, and so forth, which ultimately renders these, and many more, distinctions obsolete—for to fight partisans one has to fight like a partisan. I then go on to show how global partisan warfare becomes the default vision that informs both (military) practice and (academic) reflection on these practices. The so-called New Wars are characterized by the “fragmentation and informalization of war” and by a new type of “globalized informal economy.”1 Such is its “hybridity”2 that this type of war becomes war “amongst the people,”3 in which the aim is to establish oneself as the security provider and the one who controls the master narrative.4 Data analysis across any conventional divide between separate areas of human life plays a major role in this.5 The result is a merging of military and business models, a vision of the world as a global theater of war, and a diffuse cluster of “gray zone conflicts.”6

The second section, “The Emerging World of War: Between Synthesis and Conglomerate,” looks at some methodological problems caused by the indistinctions and escalating tendencies of the New Wars. The situation, it suggests, is only partially open to analysis because it only partially rests on a proper synthesis. This section also makes some general suggestions about how to approach this situation. These draw on Adorno (the double perspective of natural history), Sohn-Rethel (exchange abstraction), and Wittgenstein (family resemblance). Together, these might indicate a direction in which the description of what seem elusive phenomena—a necessary first step before explanation or intervention—may become possible.

The third section, “The World as Global Reflex System,” discusses some of the conceptual and practical prehistory of the vision of the world as a global theater of war. Against the backdrop of methodological constructivism, it considers the emergence of the concepts of energy and information, which lead to the idea of universal convertibility and of the world as a system (informational naturalism). It then looks at foreign exchange flow markets as a global scopic reflex system and as one of the most advanced examples of the “world as system.” Flow markets and global partisan warfare (or, in Zygmunt Bauman’s terms, the “global frontierland” and “reconnaissance wars”) are formally congruent.

As the concluding section, “Universal Monoculture: How to Avoid Theoretical Mimesis and Mimetic Escalation,” will argue, any attempt at
understanding the emerging world of war must try to establish a comprehensive perspective on the contemporary conditions of a universal monoculture, yet it must always remember that the idea that “the power of thought” is “sufficient to grasp the totality of reality” is an illusion.7 Instead, philosophy must “answer the questions of a pre-given reality,” must employ the imagination in rearranging “the elements of the question without going beyond the circumference of the elements,”8 and must “stop where irreducible reality breaks in upon it.”9 Thus, the following text, despite its seemingly linear progression, actually aims at producing a constellation of material that may remove some obstacles to the understanding of our emerging world of war. It should be read as “informed speculation”—something like exact theoretical imagination with a decidedly dystopian twist. Thinking, Adorno says, must “think against itself. And that means that it must measure itself against the utmost extreme [Äußerste], the absolutely unimaginable [schlechterdings Unausdenkbare], to have any right to be a thinking at all.”10 Thus, the aim is to pitch exact imagination against the unimaginable.

A WORLD OF INDISTINCTIONS: INTO THE GRAY ZONE

According to General Sir Rupert Smith, industrial interstate war no longer exists, and “we are now engaged, constantly and in many permutations, in war amongst the people.”11 Whether written by practitioners or scholars, literature in this area almost without exception observes that we are witnessing an ever-increasing blurring and blending of phenomena, producing interconnected and overlapping indistinctions. Yet, at the same time, most authors hold on to some moral or normative distinction. For military and security practitioners, these distinctions become framework conditions within a global security paradigm and thus strategic factors. The binaries in these writings—order (hierarchy) and its opposite (swarms, “leaderless resistance”), rulers and ruled, decision makers/actors and populations, narrators and listeners, thugs and nonthugs—are ultimately construed in a Manichean fashion.

Let us first try to imagine the extreme endpoint of this increasingly blurred post-Westphalian landscape. Following on from the principle ascribed to Napoleon by Carl Schmitt—“il faut opérer en partisan partout où il y a des partisans”12—we may name it global partisan warfare. Within this world, the figure of the techno-economic partisan—no longer telluric, no longer even cosmic, but fundamentally without place or time: a normless ideal type—may take the form of the soldier, the business person, the terrorist, the logistics operator, and so forth. This ideal dystopian actor can be inserted into any
context and can take on any perspective; her actions are guided by methods of analysis that are universal, purely formal, and indifferent to content.13

Global Partisan Warfare: Mimetic Escalation

There are no differences anymore. Reciprocal action is so amplified by globalization, the planetary in which the slightest event can have repercussions on the other side of the globe, that violence is always a length ahead of our movements. Violence steals a march on politics, and technology escapes our control. —René Girard, Battling to the End

According to Clausewitz, war is a chameleon that changes appearance according to the admixture of “blind natural instinct” (people), “free activity of the soul” (the general and his army), and “reason” (government), that is, passion, strategy (probabilities and chance), and political logic.14 In René Girard’s interpretation of the development of this triad, we are now, as Howard Caygill puts it, “fully and irreversibly engaged in an apocalyptic logic of escalation.”15 At the heart of it, I suggest, is the figure of the partisan. His methods are adopted by the “regular” force, radicalizing, in turn, the partisans’ methods, which are then again adopted by the regular force, and so on. As a result, the theater of confrontation becomes increasingly dispersed in geographical, temporal, institutional, and technological terms. Today, the armies of the technologically most advanced states develop units that operate in a partisan fashion in order to “defend” what are seen as global security interests, while insurgents use global technological and other infrastructure. Thus, the distinction between the resistant “subjectivity of the partisan,” whose strategy “is vaporous as opposed to the movement of solid and liquid masses characteristic of military subjectivity,”16 and regular military subjectivity disappears.

Clausewitz, Caygill suggests, was interested in “the imaginative response to chance over the combination of force and consciousness” and in avoiding “the logic of escalation” that leads to a “final apocalyptic battle.”17 And yet partisan strategies do not avoid escalation. On the contrary, they make the enemies permanently present to each other, even (and especially) in their physical absence. Partisan escalation is a partly invisible, silent escalation. Does this delay the apocalypse or turn it into a permanent presence?18

In the protean universality of global partisan warfare—fought in virtual as well as physical spaces—all sides are potentially attacker and attacked. They are “persecuted persecutors.”19 If once partisan warfare gave rise to new political communities, forging “bonds of solidarity that grow out of the unconditional battle against a common enemy,”20 then—once we come to today’s techno-economic partisan—these bonds of solidarity are no longer unconditional, and the common enemy is as fleeting as these bonds.
Any identity (or destruction thereof) is a weapon rather than a cause or a consequence of conflicts. Thus, absolute partisan warfare is the opposite of Schmitt’s absolute war in which the enemy is “a monster that must not only be defeated but also utterly destroyed.”21 Rather, the intensity of such enmity, which is based on existential difference,22 is replaced with constantly changing fields of opportunities and temporary and relative friend–enemy relations.

Schmitt’s reflections on the modern partisan still used superpowers and superior military strength (ultimately based on the possession of nuclear weapons) as decisive coordinates. The techno-partisan is either reduced to a “transportable and exchangeable tool of a powerful central agency of world politics” that deploys or deactivates him “as the situation demands”;23 or, as “technical-industrial partisan,”24 he is “motorized, and linked to an information network with secret transmitters and radar gadgetry”;25 or, in a postnuclear war scenario, he becomes “a new type of partisan” practicing a new type of “land-appropriation”;26 or, finally, he morphs into a “cosmopartisan,”27 who is nevertheless still deployed in the interest of a power’s rule over the planet. Schmitt does not consider the possibility of the partisan logic becoming total and infiltrating any institutional framework that might act as a katechon so that what he calls “transitional, intermediate between total war and peace”28—smoldering forms of warfare, as it were—become self-sustaining.

Tomorrow’s techno-economic partisan will blend in with the new means and become part of a man–machine assemblage. As an agent, he will always already be a double agent. Within the interconnected networks of exchange, he will be both user and producer of fleeting, metastable configurations. On this a priori micro-foundation, we may visualize macro forms of war as the intersections between warlords, regular entrepreneurs, governments and ministries, the global formal and informal economy, ethnic/cultural/religious groups, and international criminal organizations. These forms stand in shifting relations of family resemblance, and by definition no list of them will ever be exhaustive, as the shifting never ends. The elements that make up the macro forms—such as states and corporations—provide temporary framework conditions, but they are not sovereign providers. Rather, they temporarily coordinate individual micro-level events while the overall process drifts toward contingency, driven not by actors but by reactors.

Having taken a quick speculative look at this dystopian endpoint, let us now approach the same, or almost the same, point by looking at some of the literature on warfare.

From New Wars to War among the People Originally published in 1998, in the wake of the war in the former Yugoslavia, Mary Kaldor’s New and Old Wars describes New Wars as involving “net- works of state and non-state actors,” as being “both global and local,”29 and as

16028-0303f-Finalpass-r01.indd 13 9/24/2019 12:03:27 PM 14 Chapter 2
16028-0303f-Finalpass-r01.indd 14 9/24/2019 12:03:27 PM Prolegomena to Any Future Attempt 15
16028-0303f-Finalpass-r01.indd 15 9/24/2019 12:03:27 PM 16 Chapter 2
The World as Gray Zone The Gray Zone, a white paper published by the U.S. Special Operations Command, illustrates how the perspective of the security expert/business- man/techno-economic partisan takes hold in military circles. Gray zone adversaries are aware of the mismatch between conventional U.S. military models and gray zone challenges, and the white paper’s recommendation is—in line with the principle of mimetic escalation—to emulate their strate- gies and tactics in order to remove this disadvantage. Democratic control increases the risk of “self-induced paralysis” and of “reacting late to more nimble autocratic gray zone actors.”60 What is needed is an “unprecedented level of interagency coordination capable of synchronizing all elements of national power.”61


The paper suggests that military decision making should adopt “a busi- ness vocabulary and a ‘SWOT’ model (strength, weakness, opportunity and threat):”

Similar to the way businesses decide how to allocate capital, we would neces- sarily distinguish between opportunities and threats and have at least an esti- mate of our expected return on investment. Talking and thinking differently about national security in the gray zone would help us measure the oft-ignored opportunity costs and come up with some metric, however imperfect initially, to measure our expected return on investment for defense dollars.62

The military morphs into the business of deterring or destroying gray zone challenges, and this in turn means targeting “values” as much as “identifi- able people, places and things.”63 Within this strategy, “the point of action” might be far removed from “the point of effect,” for example, China’s African interests might be the target in order to influence China’s claims regarding the South China Sea Islands. The “intent” is “to alter the decision-making calculus regardless of geography.”64 What emerges is a proactive, centralized military state that operates under as little democratic control as possible, takes its decisions according to a business model, and considers literally anything a potential target or means to achieve its ends. The world becomes an inte- grated ideal theater of simmering gray zone challenges staging an endless performance.65 Where might the script for this play come from?

THE EMERGING WORLD OF WAR: BETWEEN SYNTHESIS AND CONGLOMERATE

An important point is overlooked when analysis is used alone: every analysis presupposes a synthesis. A pile of sand cannot be analyzed. —Goethe, “Analysis and Synthesis”

I can well understand why children love sand. —Wittgenstein, quoted by Maurice O’Connor Drury

The examples from the literature on New Wars show that the conventional categories of warfare no longer apply. And this is also reflected in the prolif- eration of new names suggested for these “conflicts,” which, in turn, reflects an uncertainty over what, precisely, constitutes the newness of these wars.66 If phenomena appear to defy categorization, then either the right categories have not yet been found or the phenomena are singularities, and therefore no analytic categories will ever fit them. With regard to natural phenomena,

16028-0303f-Finalpass-r01.indd 17 9/24/2019 12:03:27 PM 18 Chapter 2

Goethe warned against analysis in cases in which “there is no underlying synthesis,”67 because a pure “aggregation, a juxtaposition, a composite”68 does not permit the application of analytical categories. Maybe, then, New Wars are in important respects aggregations or conglomerates, made up of partly homogeneous, partly heterogeneous elements; maybe they are more akin to a heap of sand than a structured totality and therefore resist analysis. This might also help explain the indistinctions that affect even the most fun- damental concepts, for example those between combatants and civilians, state and nonstate actors, and, ultimately, even between war and peace themselves. The phenomena jump conceptual ship, so to speak, whenever approached from a particular angle. What looks like an economic activity is, upon closer scrutiny, a criminal activity; what looks like a criminal activity turns out to be part of a military operation. The phenomena blend in with different areas of human practice that are traditionally seen—and treated, by the law as well as by public and academic discourse—as separate. As a result, the distinctions slip through the analyst’s fingers. How might it be possible to get a grip on this sand? In an early essay, Adorno made the following methodological sugges- tion for “overcoming the usual antithesis between nature and history:”69 one should try “to comprehend historical being in its most extreme historical determinacy, where it is most historical, as natural being” and try “to compre- hend nature as a historical being where it seems to rest most deeply in itself as nature.”70 The very general significance of this idea becomes apparent from Adorno’s understanding of the two terms. History is “that mode of conduct established by tradition that is characterized primarily by the occurrence of the qualitatively new,” while nature stands for “mere identity, mere reproduc- tion of what has always been.”71 An analogous double perspective might be useful when dealing with the New Wars. Where something looks like a radical novelty, consider it made up of the old; where it looks just like a conventional form of conflict, tactic, strategy, look at it as qualitatively new. It might also be useful when applied to indistinctions: look at a state actor as made up of nonstate actors; look at a business transaction as a weapon, the employment of a weapon as part of a business transaction; look at globalization in terms of local changes; and so on. Yet it does not follow from the inadequacy of the traditional categories, concepts, and narratives of political science, history, philosophy, or social and political thought that something ineffable or mysterious must be taking place. Rather, the apparent lack of structure is a red herring. The situation suggests that there is a nonorchestrated and chaotic process taking place, but at the same time this process is used to hide the structuring devices that are operative within it and thus the intentionality that is at work. Within the
global partisan form of life, war and other forms of human interaction become inseparable. The practices of this life form, both in their social-political and in their technological aspects, do not allow for the drawing of strict distinc- tions; they are, as the next section will suggest, based on money, energy, and information, both as conceptual frameworks and as actually existing media of exchange. Together, these three produce the semblance of universal convert- ibility. And where any element can potentially be transformed into any other element, phenomena blend into each other, and individual actors operate in metastable social and institutional configurations in which the fixed names of certain entities only cover the underlying fluidity. To the extent that there are structuring devices at work (both conceptual and physical ones), the situ- ation is open to analysis. To the extent that these produce (the semblance of) universal convertibility, it is not. The global partisan form of life rests on a synthesis to the extent that it is sustained by the formal integration of interactions through exchange media. Alfred Sohn-Rethel’s theory of the functional integration of society assumes that the practice of exchange must necessarily give rise to certain transcen- dental ideas that then inform how the world is viewed by those who engage in such exchange.72 While Sohn-Rethel, writing in the first half of the twen- tieth century, derived the transcendental subject of classical philosophy from the classical form of economic exchange, the subject today—suspended in a world based on interconnected global networks for the exchange of energy, information, and money—will take its form from the abstractions that are enacted in these three media. Our “techno-economic partisan” needs to be rendered as an “(in-)dividual,” an actor that is split between various contexts. And as a part-digitized “dividual,”73 she is a virtual as well as a real, embod- ied actor. This (in-)dividual is an optimizing and self-optimizing “partisan” figure, one not only dislodged from any specific location but—ultimately— not localizable at all. But in its passive form, the (in-)dividual is also the target of economic and military strategies—most strikingly when it is, in its embodied form, the referent of an item on a hit list. Warfare in the emerging world of war is not a uniform practice. Its compo- nents stand in relations of family resemblance.74 The question is not whether a particular act per se counts as an act of warfare or whether something is a weapon; the question is what position an act or object takes within the global system of interconnected exchanges and how it relates to neighboring acts and objects.75 The question that finally arises is the following: At what point will these forms of exchange no longer allow for any fixed forms at all? Put differ- ently: Might the speed of transformation within the network of “predatory formations”76 or “predatory social condition[s]”77 become so fast that all that remains is an unconscious, instantaneous practice that consists of nothing
but opportunistic quasi-instinctual acts (reflex actions), with extraction and consumption being more or less simultaneous? Such a perpetual war carried out in an eternal present would, in effect, mean the disappearance of time and language. On the way to such a world, the “screen” seems to hold a privileged place, as we shall see. Screens are the focal points of practices that unravel and recombine any texture, ground reality to sand and reassemble it.78 Why? How? For whom?

THE WORLD AS GLOBAL REFLEX SYSTEM

The vision of a world ruled by global partisan warfare is not altogether new.79 The novelty is not so much that the entire world appears to be engulfed by warfare; it is that the world becomes a total system of war- fare. After first looking at its conceptual and theoretical prehistory (“The Phantasmagoria of Informational Naturalism”), this section will attempt to characterize this system in more detail (“The World as a System”). Finally, the screen and the grid will be introduced (“Flow Markets and the Global Frontierland”)—the grid as an infinitely malleable mapping device and the screen as the interface between a world that has been turned into data and an (in)dividual user who reacts to it. The screen, a reflex-triggering surface taken as “world,” at once condenses global time and space into a here and now without depth and distributes the local input across space and time. One might say the screen “world” is a preapocalyptic world: it is neither temporal—it has no history—nor has it reached the end of time. Global reflex systems, as closed worlds, are dystopian endpoints. They render reflection impossible.80 The idea of the world turned into data, and these data as a system, forms today’s world picture. For a world picture to become total, the world, and the human practice and language that produce it, must become invisible. The fol- lowing can therefore also be read as a story of increasing invisibility.

The Phantasmagoria of Informational Naturalism The idea of the world as a system—ultimately as an information processor— has a theoretical prehistory in the formalization of the natural sciences. The concept of “energy” found its conclusive mathematical formulation in 1847, and the mathematical definition of “information” was published a century later, in 1948. The former was seen as the foundation for all physical trans- formations in nature; the latter extends this to include the human world, including language and understanding. Energy and information are media that allow for universal convertibility: any phenomenon can potentially be
converted, or translated, into any other, and qualitative differences become fleeting surface effects. Universal convertibility tends toward a situation in which no identities or values can crystallize and endure. The global partisan form of life in a world conceived of as an information processor (the world of data and logistics) means that circulation and extraction become ends in themselves.81 When this makes the extraction of value (almost) impossible, the success of capitalism turns into its crisis and demise.82 Money, energy, and information stand at the end of a process of abstraction that establishes quantitative relations without (semantic or material) content and considers these relations as naturally given. They are real abstractions in Sohn-Rethel’s sense. Although money is the root medium of the three, I shall here concentrate on energy and information, assuming that Sohn-Rethel is right and the rules governing economic exchange become transcendental categories of the understanding employed in the production of knowledge.

The Theoretical Prehistory of Informational Naturalism I: Energy The “discovery” of the principle of energy conservation has two sources. First, there was the practical problem of the efficient design of heat engines. Second, there was a theoretical preference for universal, quantitative theories and laws over specific, qualitative ones.83 What is preserved in the conversion of heat into mechanical work, as in all other natural transformations, can only be “an undefined entity,”84 not something with a specific quality. The image that results from these attempts at optimizing steam engines and unifying the forces active in the universe is a world that operates as if it were a gigantic machine performing work.85 But considering all changes taking place purely as changes in energy distribution does not tell us anything about what hap- pens—other than that there is some sort of increase or decrease (in specified subparts of a closed system) in the ability of the “machine” to perform opera- tions (any operations). The energy perspective hides the qualitative differ- ences in the work being carried out by the machine.86
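To see how little the principle itself asserts, it can be written out (the formula is the standard one from thermodynamics; the gloss is mine). For a closed system, the first law states that

\Delta U = Q - W,

where \Delta U is the change in the system’s internal energy, Q the heat supplied to it, and W the work it performs. The equation is pure bookkeeping over an undefined quantity: it guarantees that the balance holds across every conversion, but it says nothing about what the work consists in, what it is for, or which qualities are transformed along the way.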

The Theoretical Prehistory of Informational Naturalism II: Information Despite Claude Shannon’s disclaimer at the beginning of his Mathematical Theory of Communication, which states that the “semantic aspects of commu- nication are irrelevant to the engineering problem”87 that his theory addresses, the confusion of information as a quantitative measure of data (by mathemati- cal analogy with entropy)88 with information as meaning has become wide- spread.89 This paves the way for informational naturalism, which constitutes a reversal of what has been called the principle of “methodical order.” This principle highlights the fact that “objects of our technological civilization,
whether they belong to every-day life or to scientific research, are brought about by complex chains of human actions” and that a “change in the order of sub-actions will cause failure.”90 However, the presentation of scientific theories usually does not follow “the pragmatic order in which the actions to which they refer are performed” or, worse, “they simply neglect that they are referring to particular actions at all.”91 Thus, we have two types of repression, that of the agency necessary to produce concepts and knowl- edge and that of the order that must be followed if certain results are to be obtained, that is, the repression of purpose or intention (desired outcome). The dominant interpretations and uses of the concepts of energy, information, and (the value of) money are prime examples of this. They ignore the experi- mental, political, and economic practices that give rise to these concepts and treat them as representing natural entities. All three transform quality into quantity: Forces become “energy.” Human labor becomes “money” (capital). Meaning becomes data/information. The naturalized “information” of cybernetics, computer sciences, genetics, physics, and so on, treats nature as if it were a script that is written in directly readable data and thus renders invisible the experimental and theoretical prac- tices employed in making it readable.92 Janich calls the idea of informa- tion as a natural object a “legend”93 that loses sight of the distinction between information bearing meaning and validity and information in the sense of the “preservation of structure . . . without the structures therefore necessar- ily being specifically linguistic, carrying meaning, or having the capacity to be true.”94 That the two do not coincide becomes apparent when instruments for encoding, transmitting, and decoding malfunction: a “malfunctioning” instrument follows natural laws as much as a “functioning” one; what it does not do is serve the intended purpose of the experimenter (the one sending the signal).95 Thus, “malfunctioning” indicates that digital information (data) depends on human language and technological practices in the same way that the concepts of physics depend on human language and experimental practices.96 Janich suggests that Weaver’s conflation of meaning and information qua data may in part have been inspired by Charles William Morris’s theory of the sign and semiosis, in which “sign vehicles as natural existences share in the connectedness of extraorganic and intraorganic processes.”97 At the pri- mary level of semiosis, the mediated-taking-account-of performed by these sign vehicles—for example, a dog chases a squirrel upon hearing a certain sound; a traveler prepares for a journey in response to a letter informing him about a certain place; or snow “takes mediated account of” lightning striking a mountain via the thunder that causes an avalanche—are cases of semiosis.98 Everything dissolves into what you might call a “semioscape,” the world as a space in which—once the translation of world into sign vehicles (read: data)
is complete—elements communicate with each other, that is, “mediately-take-account” of each other.99
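The austerity of this measure can be stated in a single formula (the formula is Shannon’s; the gloss is mine). For a source emitting symbols with probabilities p_1, \ldots, p_n, the information content, or entropy, of the source is

H = -\sum_{i=1}^{n} p_i \log_2 p_i,

measured in bits. The measure depends only on the statistics of the symbols: a meaningful sentence and a string of gibberish with the same letter frequencies carry exactly the same “information” in this sense, which is precisely why treating the measure as if it captured meaning amounts to the conflation described above.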

The World as a System The conflation of the world in the ordinary sense of the word and the “world” as a system is a central feature of the emerging world of war.100 It corresponds to the conflation of representation and reality, of models for something and models of something.101 The world, Paul N. Edwards remarks, “is probably as old as language,” but “it is really only since the Second World War that ‘the world’ has become a system in political, economic, and cultural terms.”102 This emerging world as system was related to the “mutual orientation” between scientists and engineers on the one hand and the military on the other—a direct result of various wartime projects.103 The continuing symbiosis was made possible by computer-based mathematical modeling that could be applied equally well to, for example, weather prediction, aircraft interception, or the simulation of the movement of air masses after the detonation of a hydrogen bomb. This general ability to predict the future behavior of a “system” was then rolled out to the world of business and finally to the world.104 The resulting “comprehensive mathematical simulation models of global dynamic processes”105 require “global, uniformly gridded data.”106 The finer the grid and the more frequent the data collection the better. The remainder is done by the mathematical model into which the data are fed.107 Data-network models are globalizing forces: they do not follow globaliza- tion but produce it:

Global data networks require high levels of standardization, creating new com- monalities in practice and understanding worldwide. . . . [T]hese data networks are among the forces creating concepts of global systems, global problems, and global common interests. . . . [I]t is with models and data networks that modern concepts of “the world” have been built.108

By assuming the identity of world and “world,” systems research is actu- ally bracketing—“black boxing”—the world. Thus, Ross Ashby suggests that “real objects are in fact all black boxes, and that we have in fact been operating with black boxes all our lives.”109 In a black-box “world,” there are only variably arranged inputs and outputs. Pickering gives this “Black Box ” a positive interpretation, saying it provides “a performative image of the world”110 that makes the world an “ontological theatre;” he says cybernetics is not so much about control but “anticontrol.”111 In Heideggerian terms, it exhibits “a stance of revealing rather than enframing—of openness
to possibility, rather than a closed determination to achieve some precon- ceived object.”112 In what follows I shall try to demonstrate that, on the contrary, informa- tional naturalism turns the world into an information processor and produces a universal monoculture. In an article on cyborgs and World War II, Pickering himself describes Caterpillar’s plant in Decatur, Illinois, modeled on com- puter architecture, as a PWAF, a “plant with a future:” “This is the computer- as-material-agent line coming down from World War II and Whirlwind taken to a new pitch of intensity,”113 one where “the material and social spaces of production have been reconfigured in accommodation to a set of computer- based techniques of surveillance, command, and control, themselves evolv- ing in a process that serves to determine at once the properties of humans and non-humans.”114 But such a configuration represents not performative openness but rather an epistemological and behavioral cage. The outside is forgotten. The world becomes the “world;” the “world” becomes the world. And at the margins between world and “world,” the behavior of human agents takes the form of reflex action.115 The result is not so much a world with but a “world” without a future, and without a past—a world that moves toward the ideal of brute presence.

The World as Picture () Much of this is anticipated in Heidegger’s writings on science and technol- ogy and on the difference between humans and animals. For Heidegger, the “world” belongs to the age of the world picture—that is, “the world grasped as picture,”116 where “picture” means “the collective image of representing production [das Gebild des vorstellenden Herstellens].”117 The “planetary imperialism of technically organized man” produces uniformity as “the surest instrument of the total, i.e. technological domination over the earth. The mod- ern freedom of subjectivity is completely absorbed into the corresponding objectivity.”118 Thus, in the end the human being ceases to be the subjectum, the “unshakable ground of truth, in the sense of certainty.”119 The subject is subsumed under the captivating world picture, a “closed system of spatiotem- porally related units of mass [read: data]” in which “every place is equal to every other. No point in time has precedence over any other. . . . Every natural event must be viewed in such a way that it fits into this ground-plan of nature. Only within the perspective of this ground-plan does a natural event become visible as such.”120 Within such a picture, what happens to the absorbed human agent? Fol- lowing Jakob von Uexküll’s theory of organismic-specific environments—in German, “Umwelten”—Heidegger had earlier ascribed to the animal organ- ism a “fundamental capability for self-encirclement” in a “circumscribed
range of possible disinhibitions.”121 Only certain disinhibiting markers trigger reactions. The animal is suspended in its environment like a marionette, with the natural disinhibitors pulling the strings. The world of informational naturalism moves toward a technological recreation of this situation, in which it is no longer the organism and its senses but the organism as information processor that is disinhibited by flows of other information. Reflex actions correspond to feedback technology. Chains of beings are replaced by chains of data whose endless flow is, at best, an “openness-at-sea,”122 while the kind of openness that depends on “having the world” becomes altogether impossible.123 Just as the animal organism’s self-produced captivation is the “condition of the possibility of behaviour”124 and produces a “structural totality,”125 the human being’s technologically produced self-captivation is today the condition of behavior. And just as the former is instinctual behavior in an “environing world,”126 the latter is pure reflex within an environing “world” as a global reflex system. The scope of this world is increasingly relayed to human organisms through screens. The techno-economic partisan, a reflex agent, will carry them with, or within, him or her.

Flow Markets and the Global Frontierland The scene: a windowless room with dimmed fluorescent lighting in which “the screens flicker with strings of digits,” observed by “attentive acolytes, their visages lit and their backs darkened like satellites parked in station- ary orbit.” They are speaking into phones or mumbling at other electronic devices: “Some sit stock still, mesmerized, engaging their screen with slight movements of wrist and hand. Others lean into their consoles, then away, as though their swaying might actually have some virtual influence upon the quantum electrodynamics coursing through their station and beyond, to other machines in other places in other similar rooms.”127 Beginning with “Santa Monica, California, at a RAND study of the ‘man-machine inter- face’ ” in 1952, Mirowski moves through a sequence of such rooms—from SAGE128 and war-gaming rooms, to bank checking and credit card systems, to arcade games. Today, through the culture of electronic gadgets, the “world” encroaches on the world by turning the screen into the fundamental mode of access to and medium of agency within it. Microsoft’s “HoloLens” marks the beginning of a new stage in this development, a stage in which “aug- mented reality” and reality merge.129 The “world” is played out in algorithms, in energy and information. But such universal calculability of real-world phenomena can only be achieved in circular fashion: the phenomena to be modeled are converted into data and fed into mathematical models; what- ever resists, or is not selected for, such transformation becomes invisible. This, and not the screen itself, is the interface that matters. If in information
technology there are “exclusively ‘syntactical’ machines” in which “all that takes place, in fact, is causal interaction between switches,”130 then this causal interaction inside the machine is continued by the reflex actions of the users of the machine. Agency becomes an instantaneous capturing of flows and a simultaneous being captured by them.131 In order to trace the flows in this “world” everywhere and at all times, space must be turned into a grid and each element must be tagged. In terms of the medium thus created, the Internet of Things, Global Information Grid (GIG), and “kill boxes” are much alike. More specifically, economic markets and the global battlefield—insofar as both are modeled on, and operate as, information processors—take on the same form. The following sections will illustrate this.

Flow Markets In their work on forex markets, the most advanced example of a flow mar- ket, Knorr-Cetina and Preda introduce the notion of a global scopic reflex system.132 Because the “real economy,” trading transactions, and political and cultural events have been translated into a self-referential “world” of infor- mation on the screen, for those individuals who participate in this system the computer screen is the world. The global physical space is condensed into screen space, and time is condensed into “now-ness.” The information on the screen may “contain” time (the figures, for example, may reflect the tracking of past transactions) but only in the form of an aggregated “now market.” Time and space lose their depth:

The screen reality . . . is like a carpet of which small sections are woven and at the same time rolled out in front of us. The carpet grounds experience; we can step on it, and change our positioning on it. But this carpet composes itself as it is rolled out; the spatial illusions it affords hide the intrinsic temporality of the fact that its threads (the lines of text appearing on screen) are woven into the carpet only as we step on it and unravel again behind our back (the lines are updated and disappear). As the carpet is woven it assumes different patterns; the weave provides specific response slots to which traders react, taking the patterns in different directions.133

The scopic medium is a network whose spatial dimension has been almost completely condensed into its temporal dimension, and the temporal dimen- sion approximates the now-ness of a perpetual flow out of nowhere into nothingness. It is “stable only long enough to enable transactions to occur and changes with transactions.”134 Computer-based scopic systems replace “embodied transaction and transmission capabilities by a set of technological and behaviourally enhancing components that, together, serve as a medium for the globally temporalized performance of these markets.”135

The “global reflex system of financial screens integrates within its frame- work the conduits for building and maintaining relationships.”136 Traders are confronted “with a market that has become a ‘life form’ in its own right, a ‘greater being.’ ”137 Screens “instantly reflect, project, and extend the reality of these markets in toto.”138 Markets are characterized by “ontological liquid- ity:” “The flow of the market reflects the corresponding stream of activities and things: a dispersed mass of market participants continues to act, events continue to occur, policies take hold and have effects.” Markets are synchron- ically well-defined but “ill-defined with respect to the direction they will take at the next moment.”139 This description of flow markets corresponds to the idea of network-centric warfare, except that here there can be no place for someone corresponding to the—imagined—central commander on his perch—someone who is apart from and overlooking the theater of war.140 The system of screens (interfaces) integrates military units, ultimately individual combatants, who are immersed in a theater of war that has become a life form in its own right. Screens “instantly reflect, project, and extend the reality” of these theaters of war to the combatants, potentially in toto. Combatants continue to act; events occur, and strategies take hold and have effects. But because of their ontological liquidity these theaters of war, as forms of life, though synchronically well-defined, will remain unpredictable—a carpet rolled out into the future only to unravel again the next moment.

Global Frontierland The form of such flow markets and global scopic reflex systems is analogous to the form of Zygmunt Bauman’s global frontierland, in which “alliances” and “frontlines” are in constant flux, and coalitions represent no more than “temporary cohabitations of convenience.”141 This gives prime importance to the gathering and—if possible instantaneous—distribution of informa- tion.142 Bauman therefore speaks of reconnaissance battles, which are meant to explore the range of possible moves. These “bear striking resemblance to ‘focus groups’ ” as “the modern politicians’ favourite means of anticipatory intelligence-gathering.”143 This “confluence” between military, economic, and political practices suggests that the “space of flows”/flow-market model is assuming the place of a transcendental model of reality that informs agency across all areas of human activity. Information becomes a weapon: “Recon- naissance battles are the principal category of violence in an under-regulated environment.”144 The ideal endpoint of reconnaissance wars is the capacity instanta- neously to identify a momentary enemy and take him or her out. Derek Gregory describes the space of the “individual-as-target” by using Kitchin and Dodge’s term “code/space”—that is, a space produced and activated
by software whose spatiality is “simultaneously local and global, grounded by spatiality in certain locations but accessible from anywhere across the network.”145 Individuation, in this case, is the “technical production of an individual as an artefact of targeting.” Flesh-and-blood individuals “are brought within the militarized field of vision through the rhythm analysis and network analysis of a suspicious ‘pattern of life,’ a sort of weaponized time- geography.”146 The enemy is identified by means of a technology that relies centrally on the expansion of generalized data collection and its analysis “in a computerised process that yields a network of relations commonly known as the disposition matrix.”147 The disposition matrix is “a grid” representing gathered data from which “maps, target lists, individual watch lists and inter- personal contacts” are constructed.148 As an “automated, self-perpetuating intelligence system capable of transforming a large amount of raw data into ‘actionable intelligence’ ”149—recall Caerus’s “actionable insight,” the same body in civilian clothes, so to speak—it can potentially operate without direct human involvement. The disposition matrix opens up the prospect “of a global hunting ground produced through and punctuated by ‘mobile zones of exception,’ ”150 the “kill boxes” of the military strategists: “Kill boxes can be sized for open terrain or urban warfare and opened or closed quickly in response to a dynamic military situation.”151 Chamayou speaks of “temporary lethal microtubes” that can be opened up where “a legitimate target has been located:” the “body becomes the battlefield.”152 These microspaces “must be able to be aimed [at] wherever necessary.”153 Thus, the human body—or, rather, the data set corresponding to it—becomes a globally mobile, globally tractable, and globally targetable zone of hostility and of exception. To the extent that the resulting complexity and speed of the decision making exceeds the capacities of human bodies, it must be automated: flash boys meet hitmen in the digital semioscape of the global frontierland.154

UNIVERSAL MONOCULTURE: HOW TO AVOID THEORETICAL MIMESIS AND MIMETIC ESCALATION

As figure 2.1 shows, the emerging world of war rests on the increasingly universal translation of the (human and nonhuman) world into data. Data are then algorithmically processed and used for modeling. The models represent the “world” as if it were the world. At the most general level, the world is seen as a homogeneous information processor to be manipulated by military and economic logistics. The world increasingly takes the form of the “world”— it is a case of real abstraction (see below). What becomes invisible in this process is (a) the translation of real-world objects and events into data (the
work of—for short—sensors, where a “sensor” may be an election poll, a barometer, CCTV, a passport reader, etc.) and (b) the translation of data into models. The picture that holds us captive is that of universal translatability, hence convertibility, hence the infinite malleability of qualitatively different phenomena. In Heideggerian terminology, we may say that, while the transformation of the world into data is certainly one possible way of revealing, it becomes enframing as soon as the illusion of data as the one fundamental substratum takes hold. At that point, the “world” of data, as the ontological bedrock, puts intellectual, emotional, and moral imagination under a spell. The world becomes “black-boxed.”155

Figure 2.1 The universal monoculture of informational naturalism

Sohn-Rethel on the Exchange Abstraction The core of Sohn-Rethel’s account of the exchange abstraction is an account of the origin of the abstract categories of the understanding in acts of com- modity exchange that are entirely separate from the sphere of material (re)production.156 With the dissolution of the once “inseparable unity” of labor and society, labor loses its social force, handing it over to the separate sphere of commodity exchange (circulation): “this monstrous transformation forms the basis of all estrangement, reversals, and reifications which since then dominate the human world, among them the emergence of an intellect that
is separated from labour, and the conceptual form of human thought ‘in its general form (Marx).’ ”157 This transformation also leads to what might be called Sohn-Rethel’s “social principle of conservation:” his early “Luzerner Exposé” defines “ ‘society’ as a connection between individual human beings with regard to their existence [], a connection at the level at which a piece of bread, eaten by one of them, will not feed any of the others.”158 The functional inte- gration of society through exchange means that individuals are formally tied together, while at the same time, as agents engaged in exchange, they are pursuing opposed interests. In the exchange abstraction, what draws together and unites at the same time repels and separates. The idea of the exchange abstraction has its source in Marx: “by equating their different products to each other in exchange as values, men equate their different kinds of labour as human labour. They do this without being aware of it.”159 Sohn-Rethel brings out the epistemological implications of this act. It is the origin not only of the universal equivalence of commodity value but also of the categories of the understanding. Commodity exchange implicitly abstracts from the use value of the objects exchanged and from the concrete place and time of the exchange, giving rise to notions such as an unchanging substance, space and time as neutral coordinates, or Galileo’s pure movement (which mirrors the movement of capital). As second nature, they are seen as reflecting reality as such, independent of the acts that gave rise to them: “The social origin of his conceptual forms remains absolutely hidden to the intel- lectual.”160 This lack of awareness is compounded by the fact that in the act of exchange, or in scientific research, the motives of the subjects are consciously informed by first-nature problems, tasks, or interests, while they remain unconscious of the abstractions involved in the practice. The user becomes an exchanger, and what is exchanged becomes a commodity. The commod- ity, like pure movement, is an idea. It does not undergo any physical changes during the process of exchange, something that is empirically impossible. Fundamental for Sohn-Rethel is therefore the temporal separation between (ideal) acts of exchange and (practical) acts of use. This is the unconscious source of abstraction; the specter arises out of exchange, not out of the com- modity or out of production.161 Under conditions of universal monoculture, then, there is a separate sphere of social synthesis through exchange. Further, because profit depends on the (temporary) extraction of value from circulation, this sphere needs to expand continually. The need for intensive and extensive growth means that more and more of the world must be translated, or transformed, into the “world” and thus made universally exchangeable. Within this process, the autonomy of intellectual labor is coeval with its alienation, with the loss of those traits that belong to its historical origins.
This allows the intellect to “move according to its own normative nature, its ‘logic.’ ”162 Euclid’s geometry, as opposed to, for example, the Egyptian measurement with ropes, is paradigmatic for this. It is a self-referential geometry that operates independently of the metabolism between mankind and nature. It is a “pure formalism of second nature,”163 while the Egyptian method remained tied to a specific place and task and could not be abstracted from it. Sohn-Rethel assumes a distinction between social synthesis based on first nature and social synthesis based on second nature. This corresponds to the state before and the state after the exchange abstraction and to the differ- ence between technology that is tied to a place and time and technology that is endlessly moveable and variable. Production is always—potentially—first nature; circulation (exchange, trade, commerce) is second nature. Exchange abstraction decouples (qualitative) production from (quantitative) distribu- tion. Subsequently, the former is subordinated to the latter.164 One of the preconditions for our universal monoculture is the modern col- lapse of the distinction between a translunary (mathematically precise) and a sublunary world (of messy production). Under Copernican assumptions, the translunary and sublunary worlds become one: hence, there is a shift from transcendence to pure immanence. The one world is made of the same substance and ruled by the same physical laws as the other, which is what allows Newton to assume “the theoretical unity of the mechanical philosophy of nature.”165 In the process, mathematical precision moves into the world of production, closing the gap between the two worlds but at the same time wid- ening the gap between intellectual and manual labor. Craftsmen are no longer autonomous, and they become increasingly dependent on mathematical and other intellectual laborers. For Sohn-Rethel, science and capital are therefore complementary powers of second nature.166 This sets the limit to science’s capacity for self-reflection. Its criteria will always be those of second nature:

Left to its own devices, this science will tend toward fulfilling the postulate inherent in capital, i.e., that of the automatization of production to the point of automatism incarnate [leibhaftige Automation], irrespective of the economic contradictions which oppose this process. The categories employed in the knowledge of nature are at the same time categories of alienation from nature, without that fact being visible.167

The real problem—a problem that would persist even after the abolition of capitalist property rights—is the division between intellectual and manual labor. It marks the intellectual and ideological aspect of second nature as opposed to its economic aspect: “Money is not only capital; it is also the a priori of abstract understanding [Verstandestätigkeit].”168

Adorno and the False Totality of Society As a real abstraction, the exchange abstraction is also central to Adorno’s concept of society,169 which is connected to that of totality, not as “an affirma- tive but rather a critical category.”170 Individuals unconsciously produce and obey the constraints of the totality, and to that extent “totality is what is most real.” But as “the sum of individuals’ social relations which screen them- selves off from individuals, it is also illusion—ideology. A liberated mankind would by no means be a totality.”171 In such a mankind, exchange would not “only be abolished, but fulfilled;” that is, no one would be deprived of the yield of his or her labor.172 Under conditions of economic exchange, by con- trast, the “objective rationality of society, namely that of exchange, continues to distance itself through its dynamics from the model of logical reason. Consequently, society—become autonomous [das Verselbständigte]—is, in turn, no longer intelligible, only the law of its autonomisation [das Gesetz von Verselbständigung] is intelligible.”173 In the process of “autonomisation,” a split-off part of society becomes independent (das Verselbständigte) and pres- ents itself as the totality. This irrational totality contradicts even its own ideal of logical reason, which it purports to follow. By insisting on “the distinction between essence and appearance,” dialectical analysis aims to recapture and undo the irrational moment of the translation that turns everything into a part of that totality. Only through theoretical interventions can crucial structures “of the social process, such as that of the inequality of the alleged equiva- lency of exchange,” become visible. Adorno connects such interventions with Nietzsche’s term “nether‑worldly [hinterweltlerisch]” by suggesting that the “concealed essence is non‑essence [daß das verborgene Wesen das Unwesen sei],” that is, not something that would have a lesser status than essence but something secretly active, something dialectical thought criticizes “because of the contradiction between non-essence and ‘what is appearing’ and, ulti- mately, the contradiction between non-essence and the real life of human beings.”174 Unwesen does not just mean “non-essence;” it denotes a set of activities, a pattern or process, that produces, by its very nature, negative outcomes. An Unwesen is an ill wind that does nobody any good. There may be short-term gains for some, but in the long term nobody will profit. According to Adorno, the fundamental contradiction in exchange abstrac- tion is “that exchange takes place justly and unjustly [daß beim Tausch alles mit rechten Dingen zugeht und doch nicht mir rechten Dingen];”175 it is, further, a contradiction that does not rest but has an inherent tendency to expand, to extend itself over ever-wider areas of social life. In other words, it escalates. Exchange abstraction, as a real abstraction, is inseparable from the escalating dynamic of capital, but by presenting it as a quasi-natural force (as a teleological reading of Marx does), the danger is that it is mimetically
affirmed, and the only conceivable exit from it is through its own self- overcoming.176 From a Marxist perspective, the hope for the self-overcoming of the principle of escalation is the hope for socialism. From an Adornian perspective, however, things look different. Rather than placing one’s bets on a self-overcoming of the Hinterwelt that incessantly develops its Unwesen— which is actually the autonomous “world as system”—Adorno pleads for a remembrance of the world as it was before it was formed by the principle of exchange. The opposite of escalating exchange abstraction is Selbstbesin- nung, self-reflection, or the remembrance of nature in the subject. Unlike the philosophical tradition, such remembrance sees no contradiction between nature and reason. Reichelt suggests that, for Adorno, theory, as a presentation of the riven totality, is also “a method, which ‘follows the object’ [sich der Sache anschmiegt] and thereby reconstructs the irrational systematicity of the real system itself.”177 But according to Reichelt, Adorno failed to offer any detailed account of how to use his central concepts in the description of concrete economic processes. And yet the demand for “concretization” also entails the danger of following the object—the “world as system”—too closely. Remem- brance, Selbstbesinnung, is a radical break, a radical moment of arresting a process that otherwise relentlessly moves on. In this way, it is transformative. The detailed reconstruction of social autonomization (Verselbständigung) must follow the logic of its object, while, by contrast, the relationship between a “world” that is governed by exchange abstraction and the world itself reopened by Selbstbesinnung cannot be theoretically constructed. This may explain Adorno’s emphasis on aesthetics: any critique that does not share— even unwittingly—the fundamental features of the totality must anschmiegen itself to the phenomena as they appear before their translation into “world.”

Theoretical Mimesis The danger of theoretical mimesis follows from the spell cast by the universal monoculture and the impoverishment of the imagination that accompanies it. Exchange abstraction is a real abstraction; it informs our thinking and our practices—that is, how we turn the world into the “world”—to the point of disallowing deviating practices and ways of acting and experiencing. The system of interconnected mechanisms for the exchange of money, energy, and information constitutes a medium that can no longer be analyzed accord- ing to conventional political, social, economic, or military categories. The escalating nature of this system is driven by the expansion of capital and by the fact that it cannot be controlled by any normative regulations: it is a post- normative medium in which techno-economic partisans—as (in-)dividuals— replace traditional subjects as agents.

Critical discourses often mimetically reproduce the system’s properties and therefore end up by posing Manichean alternatives. One of the most perfect cases of the mimesis of universal monoculture is Bruce Sterling’s vision of a world built on what he calls “SPIMES,” manufactured objects that

begin and end as data. They are designed on screens, fabricated by digital means, and precisely tracked through space and time throughout their earthly sojourn. SPIMES are sustainable, enhanceable, uniquely identifiable, and made of substances that can and will be folded back into the production stream of future SPIMES. Eminently data-mineable, SPIMES are the protagonists of an historical process.178

This vision turns the nightmare of an all-pervasive IoT world (a world in which everything is tagged and tracked) into the paradise of an endless cre- ative transformation, running as a frictionless machine, a perpetuum mobile populated by “Wranglers,” as he calls the “people within an infrastructure of SPIMES.”179 The disturbing aspect of Sterling’s optimistic book is that, in the name of sustainable creativity, it calls for a techno-logistical world that would also perfectly fit the needs of the military planner, security expert, and Caerus analyst. The nightmarish aspect of this vision quickly becomes clear if we substitute Foucault’s neoliberal homo oeconomicus for the “Wrangler.” As “an entrepreneur of himself,”180 he represents the commodification not only of human labor but also of the human body, the human being as such. He is “an arbitrary bundle of ‘investments,’ skill sets, temporary alliances (family, sex, race), and fungible body parts.”181 This individual is lodged “within the framework of a multiplicity of diverse enterprises. . . . And finally, the indi- vidual’s life itself—with his relationships to his private property, for example, with his family, household, insurance and retirement—must make him into a sort of permanent and multiple enterprise.”182 It is not at all inconceivable that it will soon be possible to mortgage your body (to promise your organs or tissue for donation, for instance, in order to raise capital or service your debts), in particular if—in line with the fundamen- tal contradiction of universal monoculture—the very body of the neoliberal (in)dividual will be seen as nothing but data. The Visible Human Project, the translation of an entire human corpse into data sets, which began in the early 1990s, rests on “an equation of digital code with vitality” and on the “desire for bodies to behave as closed mechanical systems with reversible temporali- ties, rather than as non-reversible, chaotic systems which necessarily move towards death.”183 Whether or not the project is medically useful, it bears all the hallmarks of our universal monoculture. In particular, the changes that the body undergoes in the process of being translated into data are considered a
technical problem that can be solved. The categorical difference between life and death is slowly erased by technological improvements until flesh-and- blood bodies and digital revenants become indistinguishable.184 Foucault’s lectures from 1977/78 and 1978/79 have been read as “a coher- ent argument for the positive force of globalization.”185 And, in fact, one might go further and suggest that, in his notions of power and the dispositif, one can observe a mimetic approval of neoliberalism (and universal mono- culture). Then there are Jean-Luc Nancy’s reflections on “struction,” on a world in which Being has been replaced by “being with” and in which it is “truly not a question of order or organization that is implied by con- and in- struction.” Aimed against extraction and hierarchy, they end with “the heap, the non-assembled ensemble. Surely it is contiguity and co-presence, but without a principle of coordination.”186 Again there arises the question of how to distinguish between the Manichean alternatives, between the liberated (and liberating?) nonassembled ensemble and an all-pervasive, invisible coordina- tion that favors partial interests. Another example is and Félix Guattari’s nomadic war machine.187 Its form corresponds to global partisan warfare; it turns the planet into a “smooth space” onto which “striated spaces” are projected (see the discussion of flow markets, the global frontierland, and drone targeting above). Nomadism is not the opposite of hierarchical or state power; rather, state power contains nomadism and vice versa. In the field of sociology, we find Neil Fligstein and Doug McAdam’s strategic action fields (SAFs), “the basic structural building block of modern political/organizational life in the economy, civil society, and the state.”188 SAFs form a fleeting world of metastable “socially constructed arenas” in which “actors with varying resource endowments vie for advantage:”189 “All the meanings in a field can break down including what the purpose of the field is. . . . [T]he process of contention is ongoing and the threats to an order always present to some degree.”190 But perhaps the most perfect example of theoretical mimesis is ’s “ultimate reality of impossible exchange,” the “Impossible Exchange Barrier.”191 Here, universal exchange is turned on its head. Set- ting out from the observation that “there is no equivalent to the world,”192 Baudrillard argues that there are no equivalences in the world, that a “con- tinuity of the Nothing . . . grounds the possibility of the Great Game of Exchange,”193 and he concludes that the “whole problem is one of abandoning critical thought” as it is now anachronistic.194 Instead, the “task of thought” is to make the world “even more enigmatic and unintelligible.”195 This, despite the perceptive qualities of the text, is the ultimate confirmation of universal monoculture through its seeming reversal.196

CONCLUSION

The dystopian endpoint is not a world that has been taken over by machines and programs that develop and follow their own agenda—the singularity nightmare. The real dystopian endpoint is a world in which the process of mutual formation between humanity and its technological inventions has pro- duced a state in which both the human and the nonhuman worlds are modeled on just one, in its foundational principles very limited, invention—the infor- mation processor—and there is no longer any imaginative space in which alternatives might be created. The danger is neither technology destroying the world—this is also a danger but a different one—nor the reduction of the social to the technological but the social becoming locked into the technologi- cal to the point of indistinction. If this happens within a capitalist economic framework, then the result will be a kind of dysfunctional stasis, a lingering apocalypse, life as continual warfare, with techno-economic partisans perma- nently being engaged in instantaneous value extraction.197 The indistinctions that today make themselves felt empirically and in attempts at theoretical explanation will have become a complete homogeneity that may no longer even deserve the adjective “social.” We have moved from conceptual indistinctions in the description of the emerging world of war to the mimetic escalation that drives it forward and produces the “security paradigm,” which is applied to the world as system, the “world” (or the world seen under the spell of universal convertibility). If Sohn-Rethel’s exchange abstraction is a plausible framework for looking at this development, then partisan escalation (at the level of practice) and theo- retical escalation (at the conceptual level) are two codependent factors in the overall process of mimetic escalation. The world is more and more organized as the “world;” individual actors conceive of themselves more and more as reflex (in-)dividuals seeking instantaneous advantage, and any perspective outside the self-referential universal monoculture is more and more difficult to achieve. Thus, my reflections must admit that they, too, end with a Mani- chean alternative: that between a world of blind practice and a world in which Selbstbesinnung is still possible.198

NOTES

1. Mary Kaldor, New and Old Wars: Organized Violence in a Global Era, 3rd ed. (Stanford, CA: Stanford University Press, 2012), 110. 2. Frank G. Hoffman, “Hybrid Threats: Neither Omnipotent nor Unbeatable,” Orbis 54, no. 3 (2010): 441–55.
3. Rupert Smith, The Utility of Force: The Art of War in the Modern World (New York: Vintage, 2008). 4. Ibid.; and David Kilcullen, Out of the Mountains: The Coming Age of the Urban Guerilla (London: C. Hurst, 2013). 5. David Kilcullen, “Counter-Insurgency Redux,” Survival 48, no. 4 (2006): 111–30. See also the discussion of Kilcullen’s firm Caerus Associates below. 6. Philip Kapusta, “The Gray Zone,” Special Warfare (October–December 2015): 22. 7. Theodor W. Adorno, “The Actuality of Philosophy,” Telos 31 (1977): 120. 8. Ibid., 131. 9. Ibid., 132 (translation modified). 10. Theodor W. Adorno, Metaphysics: Concepts and Problems, trans. Edmund Jephcott (Cambridge: Polity, 2000), 115 (translation modified). 11. Smith, Utility of Force, 4, 415. 12. Carl Schmitt, Theory of the Partisan: Intermediate Commentary on the Concept of the Political, trans. G. L. Ulmen (New York: Telos Press, 2007), 13. 13. Guy Debord characterizes Panama’s General Noriega as a figure “who sells everything and fakes everything, in a world which does precisely the same thing,” which makes him “a perfect representative of the integrated spectacle.” Guy Debord, Comments on the Society of the Spectacle (London: Verso, 1980), 58. However, global partisan warfare is not only an integrated spectacle but also a decentered and invisible mass of transactions of which nothing and no one could be a “perfect representative.” It is a world not so much beyond good and evil but beyond good and bad faith. 14. Carl von Clausewitz, On War, trans. Michael Howard and Peter Paret (London: Alfred A. Knopf, 1993), 101. 15. Howard Caygill, On Resistance: A Philosophy of Defiance (London: Bloomsbury, 2013), 60. 16. Ibid., 61. 17. Ibid., 62. 18. This is not Derrida’s “rhetorical-strategic escalation,” which he says provides the “logic” of deterrence, the “economy of deferral or deterrence.” Jacques Derrida, “No Apocalypse, Not Now (Full Speed Ahead, Seven Missiles, Seven Missives),” Diacritics 14, no. 2 (1984): 29. The threat it carries is not that of the total destruction of the “juridico-literary archive” (ibid., 26) but that of the rendering of any archive useless: practice turns into instantaneous partisan action because the formulation of katechontic stories—the “fabulous specularization” of the “unanticipatable entirely-other” (ibid., 23)—is made impossible. 19. Tobias Voss, “ ‘Ich habe keine Stimme mehr, mein ganzes Leben flieht’: Psychische Dimensionen des Guerilla-Krieges,” in Der Partisan: Theorie, Strategie, Gestalt, ed. Herfried Münkler (Opladen: Westdeutscher Verlag, 1990), 318. 20. Herfried Münkler, “Die Gestalt des Partisanen: Herkunft und Zukunft,” in Der Partisan: Theorie, Strategie, Gestalt, ed. Herfried Münkler (Opladen: Westdeutscher Verlag, 1990), 29. Fifteen years later, in Empires, Münkler still speaks of an “identity anti-imperialism” in which war and violence “are no longer just a means to certain ends, but techniques of self-affirmation and self-confirmation.”
Herfried Münkler, Empires, trans. Patrick Camiller (Cambridge: Polity, 2007), 132. But recently, in Kriegssplitter, his emphasis has shifted: the partisan nature of the new wars is owed to the privatization of war, asymmetry, and, paradoxically, demilitarization, which leads to a “diffuse admixture of different violent actors.” Herfried Münkler, Kriegssplitter: Die Evolution der Gewalt im 20. und 21. Jahrhundert (Berlin: Rowohlt, 2015), 217. 21. Carl Schmitt, The Concept of the Political, trans. George Schwab (Chicago: University of Chicago Press, 1996), 36. This may sound surprising in the face of today’s extremisms and fundamentalisms. But the New Wars suggest that extremism and fundamentalism are just two strategies in a universal hall of mirrors. 22. See ibid., 37, 49. 23. Schmitt, Theory of the Partisan, 22. 24. Ibid., 79. 25. Ibid., 76 (translation modified). 26. Ibid., 80 (translation modified). 27. Ibid., 80. 28. Ibid., 79, fn. 85. 29. Kaldor, New and Old Wars, vi. 30. Ibid., 113. 31. Ibid., 110. For an informative chart showing the “resource flows in the new wars,” see ibid., 111. 32. Mary Kaldor, “Elaborating the ‘New War’ Thesis,” in Rethinking the Nature of War, ed. Isabelle Duyvesteyn and Jan Angstrom (London: Frank Cass, 2005), 216. 33. Kaldor, New and Old Wars, 218. 34. Ibid., 2. 35. Herfried Münkler adopts “hybrid wars” as a key term, because New Wars “defy the system of binary categories” of the Westphalian system. Münkler, Kriegssplitter, 13. New Wars are waged over flows: the controller of flows “almost automatically becomes an imperial actor” (ibid., 325–26), while threats emerge from other empires or from “de-territorialised actors, who, like partisans and pirates, attack these flows and norms” (ibid., 328). Deterritorialized threats cannot be deterred and instead “must be anticipated,” which turns “military operations . . . into policing” (ibid., 317). However, Münkler does not talk about hybridity on the side of the entities doing the policing. He retains the normative distinction between legally protected cooperation and flows and illegal value extraction. The “security provider” is the sovereign who establishes this distinction (see below). 36. Mary Kaldor addresses this incongruence between new threats and a still Westphalian security paradigm in a critique of British parliamentary inquiries. Talk of a “comprehensive approach” refers to “tools, not doctrine and strategy,” and it is the latter that now need to be rethought in light of the question: “What ideas and practices constitute power?” Mary Kaldor, “Missing the Point on Hard and Soft Power?” Political Quarterly 85, no. 3 (2014): 376. 37. Frank G. Hoffman is a former marine officer and now a national security affairs analyst and research fellow at the Potomac Institute for Policy Studies, Arlington, Virginia.


38. Cf. Frank G. Hoffman, Conflict in the 21st Century: The Rise of Hybrid Wars (Arlington, VA: Potomac Institute for Policy Studies, 2007), 7.
39. Smith, Utility of Force, 6.
40. Ibid., 400.
41. Ibid., 291. Despite this universal militarization, Smith upholds the distinctions between “form” and “formless,” “regular” and “irregular.” And despite regular and irregular forces often “morphing” into each other (cf. ibid., 10–11), he characterizes the opponents’ command system as hierarchical above ground but “rhyzomatic” below, with cells spreading according to the logic of franchise (ibid., 332).
42. Ibid., 400ff.
43. “The link between destruction and political objectives on both sides is indirect. . . . Intervening factors are necessary to deliver the political desirables.” Isabelle Duyvesteyn, “Exploring the Utility of Force: Some Conclusions,” Small Wars & Insurgencies 19, no. 3 (2008): 427.
44. Smith, Utility of Force, 415.
45. Ibid., 413.
46. Ibid., 377 (emphasis added).
47. Ibid., 378.
48. Ibid., 385.
49. Caerus Associates website, accessed May 9, 2019, http://caerusassociates.com/.
50. David Kilcullen, “Complex Warfighting,” Future Land Operating Concept (Australian Army, Unclassified Draft Developing Concept, April 7, 2004), 9.
51. Kilcullen, “Counter-Insurgency Redux,” 122.
52. Ibid.
53. Ibid., 124 (emphasis added).
54. Kilcullen, Out of the Mountains, 126.
55. Ibid., 133.
56. This capacity is, of course, Schmitt’s sovereignty, making the theory of competitive control a version of decisionism.
57. Kilcullen, Out of the Mountains, 134.
58. This is markedly different from Ulrich Beck’s notion of “risk war,” according to which war is one clearly defined option in the management of risk, shifting risk from us to them. Ulrich Beck, World at Risk, trans. Ciaran Cronin (Cambridge: Polity, 2007), 140–59. Beck is drawing on Martin Shaw, The New Western Way of War (Cambridge: Polity, 2005). (See, in particular, ibid., 47–70, where Shaw introduces the notion of a “global surveillance mode of warfare” and ends with “wars as imagined economies.”) It is closer to Didier Bigo’s “emergent field of the management of unease,” which, he argues, explains “the formation of police networks at the global level” and “the policiarization of military functions of combat,” which leads to what he calls the “ban-opticon.” Didier Bigo, “Globalized (In)security: The Field and the Ban-Opticon,” in Terror, Insecurity and Liberty, ed. Didier Bigo and Anastassia Tsoukala (London: Routledge, 2008), 10 (emphasis added). According to the ex-NATO General Fabio Mini, “money, information, the control over resources and over people, over decision-makers and thus the rules,” are still the main tools for exercising power, while “war and the threat of war allow for the creation of insecurity, and for it to be


maintained at the severe level needed for the exercise of power.” Fabio Mini, Che Guerra Sarà (Bologna: il Mulino, 2017), 7–8. Unease, insecurity, and fear are necessary for “everything to continue as before,” a situation in which, as Frédéric Gros perceptively concludes, security and catastrophe are identical: “Security (catastrophe), that is when it all continues as before.” Frédéric Gros, Le Principe Sécurité (Paris: Gallimard, 2012), 238. The translations from Mini and Gros are mine.
59. This affects Mary Kaldor’s suggestion of normative, cosmopolitan, multidimensional identity politics. In emergencies, “political identity” might well be “shaped by those who act—the family, the clan, the municipality, the state, the UN, the EU or the NGO.” But under conditions of an “environment of global risk” and competitive control, it does not matter whether identities are uni- or multidimensional; the principle of security applies and with it a decentered (global, yet local) version of protego ergo obligo. Mary Kaldor, “Identity and War,” Global Policy 4, no. 4 (2013): 344.
60. Kapusta, Gray Zone, 23.
61. Ibid.
62. Ibid., 24.
63. Ibid.
64. Ibid.
65. “States and non-states can ‘test the waters’ ” and “determine the relative strength of domestic and international commitment to an endeavor without resorting to the more lethal violence of war. In brief, gray zone conflicts are an immensely better alternative to full-scale wars.” Ibid., 25.
66. For a sample list of suggested names, see, for example, Jan Angstrom, “Introduction: Debating the Nature of Modern War,” in Rethinking the Nature of War, ed. Isabelle Duyvesteyn and Jan Angstrom (London and New York: Frank Cass, 2005), 6. For a more skeptical view on the newness of New Wars, see Jessica Wolfendale, “ ‘New Wars,’ Terrorism, and Just War Theory,” in New Wars and New Soldiers: Military Ethics in the Contemporary World, ed. Paolo Tripodi and Jessica Wolfendale (Farnham: Ashgate, 2011), 13–30.
67. Johann Wolfgang Goethe, “Analysis and Synthesis,” in Collected Works, vol. 12, Scientific Studies, ed. and trans. Douglas Miller (Princeton, NJ: Princeton University Press, 1995), 49.
68. Ibid., 50.
69. Theodor W. Adorno, “The Idea of Natural History,” Telos 60 (1984): 111.
70. Ibid., 117.
71. Ibid., 111.
72. Alfred Sohn-Rethel’s concept of exchange abstraction will be discussed in more detail in the section “Universal Monoculture: How to Avoid Theoretical Mimesis and Mimetic Escalation.” For a full account of his views, see his Intellectual and Manual Labour: A Critique of Epistemology, trans. Martin Sohn-Rethel (London: Macmillan, 1978).
73. Gilles Deleuze, “Postscript on ‘Control Society,’ ” in Gilles Deleuze, Negotiations 1972–1990, trans. Martin Joughin (New York: Columbia University Press, 1995), 180.


74. Ludwig Wittgenstein’s example of “family resemblance” is the “activities that we call ‘games;’ ” these are characterized by “a complicated network of similarities overlapping and criss-crossing: similarities in the large and the small.” Ludwig Wittgenstein, Philosophical Investigations, 4th ed., trans. G. E. M. Anscombe, P. M. S. Hacker, and Joachim Schulte (Chichester: Wiley-Blackwell, 2009), §66. These similarities are like the twisted fibers of a thread: “And the strength of the thread resides not in the fact that some one fibre runs through its whole length, but in the overlapping of many fibres” (ibid., §67).
75. Of course, this still leaves the question of whether it is possible to draw a line between objects or practices that form part of warfare and those that do not. A Wittgensteinian answer would be as follows: we may draw such a line, and where we draw it will depend on the purpose for which we draw it. But we cannot “tell others exactly what a game [or a weapon, etc.] is” because we do not know ourselves, and we do not know ourselves because no lines have been drawn. We may “draw a boundary—for a special purpose,” but this is not necessary for the concept to be usable (ibid., §69).
76. Saskia Sassen assumes that “we are seeing the making not so much of predatory elites [which always existed] but of predatory ‘formations,’ a mix of elites and systemic capacities with finance a key enabler.” Saskia Sassen, Expulsions: Brutality and Complexity in the Global Economy (London: Harvard University Press, 2014), 13.
77. Kaldor, New and Old Wars, 113.
78. Of course, what is claimed here for screens holds for any sensory interface.
79. As Giorgio Agamben points out, both Schmitt, in his Theory of the Partisan, and Arendt, in On Revolution, already suggested that we live in an age of global civil war or, as he puts it, a war that “cannot be defined as an international conflict, yet which lacks the features of civil war.” Giorgio Agamben, Stasis: Civil War as a Political Paradigm, trans. Nicholas Heron (Edinburgh: Edinburgh University Press, 2015), 2.
80. As a medium, the screen is, paradoxically, unframed. Like the aerial view, it has only arbitrary edges. Within it, anything may appear, but nothing has a fixed location as the potential context is limitless.
81. Deborah Cowen describes how, in the wake of World War II, logistics moved from a military into an economic context, taking on a leading rather than subservient role in strategy formation and producing a “new framework of security” that “relies on a range of new forms of transnational regulation, border management, data collection, surveillance, and labor discipline, as well as naval missions and aerial bombing.” Deborah Cowen, The Deadly Life of Logistics: Mapping Violence in Global Trade (London: University of Minnesota Press, 2014), 2. Cowen argues that, through its “instabilities,” the logistical system also “incubates alternative spaces and futurities” (ibid., 5).
82. Wolfgang Streeck, following Polanyi, declares that “labour, land, and money have simultaneously become crisis zones after ‘globalization’ endowed market relations and production chains with an unprecedented capacity to cross the boundaries of national political and legal jurisdiction.” Wolfgang Streeck, “How Will Capitalism End?” New Left Review 87 (2014): 54. He identifies five disorders of the capitalist


endgame: “stagnation, oligarchic redistribution, the plundering of the public domain, corruption and global anarchy” (ibid., 55). A similar diagnosis was made earlier by Susan Strange, who identified three problems, loosely corresponding to Polanyi’s fictitious commodities, as insoluble within the context of the Westphalian state system, namely failure to manage and control the financial system (money), failure to protect the environment (land or nature), and failure to preserve a socioeconomic balance between rich and poor (labor). Susan Strange, “The Westfailure system,” Review of International Studies 25 (1999): 345–54.
83. Although the title of Helmholtz’s treatise speaks of the conservation of force, it actually spells out the conservation of a quantity, energy: “What Helmholtz actually did was to formulate clearly the principle of the conservation of mechanical energy and then show that all the various ‘forces of nature’ can be subsumed under this principle. He thus created the general concept of energy as the one entity that is conserved under all circumstances in a fundamentally mechanical world.” Yehuda Elkana, “Helmholtz’ ‘Kraft’: An Illustration of Concepts in Flux,” Historical Studies in the Physical Sciences 2 (1970): 263–64.
84. Yehuda Elkana, The Discovery of the Conservation of Energy (London: Hutchinson Educational, 1974), 9.
85. According to Herbert Breger, in the mid-nineteenth century, the model of the clock was superseded by nature “as a working machine. All natural forces were reduced to the measure of force used to determine the performance of machines, i.e. to the concept of mechanical work.” Herbert Breger, Die Natur als arbeitende Maschine: Zur Entstehung des Energiebegriffs in der Physik 1840–1850 (Frankfurt: Campus, 1982), 228 (my translation). See, in particular, ibid., chapter 7, “Die Vorstellung von der Welt als einer arbeitenden Maschine” (The idea of the world as a working machine), which opens with a line ascribed to William Thomson (Lord Kelvin): “The great principle of the conservation of energy teaches us that the material universe moves as a frictionless machine.” Ibid., 129 (emphasis added).
86. Marx’s description of the difference between concrete and abstract human labor is an exact analogy. From the perspective of abstract labor, all that is left in the products of labor is “the same phantom-like objectivity; they are merely congealed quantities of homogenous human labour [bloße Gallerte unterschiedsloser menschlicher Arbeit], i.e. of human labour-power expended without regard to the form of its expenditure.” Karl Marx, Capital: A Critique of Political Economy, vol. 1, trans. Ben Fowkes (London: Penguin, 1976), 128.
87. Claude Shannon and Warren Weaver, The Mathematical Theory of Communication (Chicago: University of Illinois Press, 1998), 31.
88. Shannon’s terminological choice was inspired by von Neumann: “I thought of calling it ‘information,’ but the word was overly used, so I decided to call it ‘uncertainty.’ When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, ‘You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one knows what entropy really is, so in a debate you will always have the advantage.’ ” Myron Tribus and Edward C. McIrvine, “Energy and Information,” Scientific American 224 (1971): 180.


89. The confusion can also be found in authors who are, to varying degrees, critical of the “information age.” Luciano Floridi sets up an “informational structural realism,” according to which “the ultimate nature of reality is informational.” Luciano Floridi, Philosophy of Information (Oxford: Oxford University Press, 2011), 340. The “Onlife Manifesto” demands “reflection on the way in which a hyperconnected world calls for rethinking the referential frameworks on which policies are built.” The Onlife Initiative, “The Onlife Manifesto,” in The Onlife Manifesto: Being Human in a Hyperconnected Era, ed. Luciano Floridi (London: Springer, 2015), 8. But Floridi considers the “ ‘Infosphere’ (capital I)” as “synonymous with the whole reality,” explicitly equating it with “what philosophers call Being (again, capital B meant). ‘Infosphere’ is a very powerful concept. It means having a unified vocabulary to talk about DNA, computers, physical particles, avatars, social environments, humans, companies or webbots as agents, interactions as forms of communication, biosphere, ecosphere and cyberspace, and so forth.” Quoted in Vincent McBurney, “Professor Luciano Floridi on the Philosophy of the Infosphere,” Toolbox, accessed May 9, 2019, http://it.toolbox.com/blogs/infosphere/professor-luciano-floridi-on-the-philosophy-of-the-infosphere-23608. And for Tiziana Terranova, information (qua data) is “the milieu which supports and encloses the production of meaning. There is no meaning, not so much without information, but outside of an informational milieu that exceeds and undermines the domain of meaning from all sides.” Tiziana Terranova, Network Culture: Politics for the Information Age (London: Pluto Press, 2004), 9. She correctly writes that within a mathematical theory of communication, a given situation must be reduced “to a set of more or less probable states and alternatives,” but then incorrectly continues, “as constrained by the interplay between a channel and a code” (ibid., 24). This interplay is the technically important one, but it is the translation of a given situation into a set of possible messages that are then encoded that constitutes the primary, interpretative step. As a possible source of resistance, she mentions “the extremely improbable,” “the possibility of a fluctuation that violates the organized space of the real and the possible” (ibid., 26). The spell of the informational paradigm holds: “What lies beyond the possible and the real” is not noninformational but “the openness of the virtual” in the sense of “invention” and “fluctuation,” something that irrupts and recedes: “a positive feedback effect of informational cultures as such” (ibid., 27). This mirrors the hope Cowen places on logistical “instabilities” (Deadly Life of Logistics).
90. Peter Janich, “Methodical Constructivism,” in Issues and Images in the Philosophy of Science, ed. D. Ginev and R. S. Cohen (Dordrecht: Kluwer, 1997), 182. When producing a Russian doll, for instance, the wood must be shaped first, before it is painted. A locked door is opened by turning the key first, then opening it.
91. Ibid., 183.
92. For an excellent, and subtly ironic, account of the metaphorical shifts and substitutions involved in this, see Evelyn Fox-Keller, who concludes, “The body of modern biology, like the DNA molecule—and also like the modern corporate or political body—has become just another part of an informational network, now machine, now message, always ready for exchange, each for other.” Now machine, now message, but never biological organism.
The machine/message ensemble may


have “liberated us from that odd locution ‘man has a body’ ” but only to substitute it with an “even odder set of locutions. Today, it might be more correct to say that the body—in the sense that the word has now acquired—has man. And this body may well have man in a grip tighter than any maternal body ever did.” The sense in question is the informational paradigm, and the “grip” is that of informational naturalism. Evelyn Fox-Keller, Refiguring Life: Metaphors of Twentieth-Century Biology (New York: Columbia University Press, 1995), 118.
93. Peter Janich, Was ist Information? (Frankfurt am Main: Suhrkamp, 2006), 12 (all translations from this text are mine).
94. Ibid., 20. In the structural sense, information is a purely functional term relating to technological processes of transmission and (digital or analog) coding and decoding as covered by Shannon’s famous schema. See Shannon and Weaver, Mathematical Theory of Communication, 7.
95. The intersection between meaningful message and meaningless signal (data transported) is highlighted by cryptography and cryptanalysis. The aim of encrypting is to make the code sufficiently complicated so that a third party cannot decode the signals to reveal the meaningful message. There can be no unbreakable code, however, for such a code would not allow the intended receiving party to decipher the signals either.
96. Peter Janich provides a methodical reconstruction of this dependency for the notion of “time.” Peter Janich, Protophysik der Zeit: Konstruktive Begründung und Geschichte der Zeitmessung (Frankfurt am Main: Suhrkamp, 1980). Weaver’s interpretation of Shannon’s theory does not distinguish between the intra- and extra-technical parts of the process. This is in line with a tradition going back to the times of Galileo, who interprets the functions of experimental apparatuses as a direct expression of nature and its laws. What thus becomes invisible is the intentionality of the experimenter as expressed in the experimental arrangements. Instruments are seen “as natural objects.” Janich, Was ist Information?, 27.
97. Charles W. Morris, “Foundations of the Theory of Signs,” in International Encyclopedia of Unified Science, vol. 1, no. 2, ed. Otto Neurath, Rudolf Carnap, and Charles W. Morris (Chicago, IL: University of Chicago Press, 1938), 90.
98. See ibid., 81–82; and Janich, Was ist Information?, 43.
99. According to Morris, the term “meaning” should be avoided altogether. Man “must free himself from the web of words which he has spun,” and language is “in need of purification, simplification, and systematization.” He saw his theory as an instrument of “debabelization.” Morris, “Foundations,” 81.
100. I will use “world,” in inverted commas, to signify the world as system.
101. A model of an object abstracts features from it and can—potentially—be complete, that is, a complete replica. It is a model in the sense in which Norbert Wiener said that the best model of a cat is another cat. A model for something is some form of description that allows the construction of an object that may, or may not, have common features with a preexisting object.
102. Paul N. Edwards, “The World in a Machine: Origins and Impacts of Early Computerized Global Systems Models,” in Systems, Experts, and Computers: The Systems Approach in Management and Engineering, World War II and After, ed. Agatha C. Hughes and Thomas P. Hughes (Cambridge: MIT Press, 2000), 221.


103. Ibid., 223. The story has been told many times and from many angles. In addition to the volume containing the chapter by Edwards, see, for example, Philip Mirowski, Machine Dreams: How Economics Became a Cyborg Science (Cambridge: Cambridge University Press, 2002); Robert Leonard, Von Neumann, Morgenstern, and the Creation of Game Theory: From Chess to Social Science, 1900–1960 (Cambridge: Cambridge University Press, 2010), esp. part 3; Ronald R. Kline, The Cybernetic Moment, or Why We Call Our Age the Information Age (Baltimore, MD: Johns Hopkins University Press, 2015); Thomas Rid, Rise of the Machines: A Cybernetic History (New York: Norton, 2016), esp. chapter 8.
104. Jay Forrester’s career paradigmatically represents this development. From designing the Whirlwind computer and SAGE (Semi-Automatic Ground Environment), he moved to MIT’s Sloan School of Management, transferring the tools of wartime research into ever wider domains in his Industrial Dynamics (1961), Urban Dynamics (1969), and World Dynamics (1971). See Edwards, “World in a Machine,” esp. 237 and 247; Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (Cambridge: MIT Press, 1996), chapter 3. The Club of Rome reports are based on methods developed by him. Dennis Meadows was a student of Forrester, and Forrester acted as a consultant to the project. The ambiguity—and contradiction—between world and “world” also becomes apparent in Morse and Kimball’s classic Methods of Operations Research, which defines operations research as “a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control” and as helpful “in any field, industrial and governmental as well as military.” Philip M. Morse and George E. Kimball, Methods of Operations Research (New York: Dover, 2003), 1. At the same time, it stresses the importance of the theory of probability: “only when the results of a number of similar operations are examined does any regularity evidence itself” (ibid., 11). But although model simulations can be run a sufficient number of times to provide the necessary data, the world cannot.
105. Edwards, “World in a Machine,” 221–22.
106. Ibid., 234.
107. According to Edwards, Forrester saw the models as key to understanding: “Data could come later, in part because a systems model could help reveal which data might be most important” (ibid., 239). In other words, the “world” is constructed, and the world will then have to confirm or correct the construction.
108. Ibid., 247–48 (emphasis added).
109. Quoted in Andrew Pickering, The Cybernetic Brain (Chicago, IL: Chicago University Press, 2010), 19.
110. Ibid., 20.
111. Ibid., 31.
112. Ibid., 32.
113. Andrew Pickering, “Cyborg History and the World War II Regime,” Perspectives on Science 3, no. 1 (1995): 39.
114. Ibid., 41.
115. The “world” fits with Jean-Luc Nancy’s concept of “globalization,” which refers to an “integrated totality,” rather than with his “mondialisation,” which is an


“expanding process” that retains the horizon of a world as a space for “human relations” and “possible significance.” Jean-Luc Nancy, The Creation of the World or Globalization, trans. François Raffoul and David Pettigrew (New York: SUNY Press, 2007), 27–28. His later reflections on “struction” seem to suggest that this distinction will collapse either into world or “world,” either into endless immanent creation or an apocalyptic nothingness (see the section “Universal Monoculture: How to Avoid Theoretical Mimesis and Mimetic Escalation”).
116. Martin Heidegger, “The Age of the World Picture,” in Off the Beaten Track, trans. Julian Young and Kenneth Haynes (Cambridge: Cambridge University Press, 2002), 66.
117. Ibid., 71.
118. Ibid., 84.
119. Ibid., 81.
120. Ibid., 60. In the later “The Question Concerning Technology,” Heidegger indicates the limits of the practice associated with this picture. Even if physics were to give up on the representation of any kind of object, it would still “never be able to renounce this one thing: that nature report itself in some way or other that is identifiable through calculation and that it remain orderable as a system of information.” Martin Heidegger, “The Question Concerning Technology,” in Basic Writings, ed. David Farrell Krell (London: Routledge, 1978), 229. This pushes further Werner Heisenberg’s thought—suggested in the lecture preceding Heidegger’s—that the object of modern science is no longer nature as such but the observer’s relation to nature, which puts mankind in the position “of a captain whose ship is built of so much steel and iron that the needle of his compass only ever points to the iron mass of the ship itself.” In other words, self-referentiality leads to loss of orientation: “With such a ship, no destination can be reached; it will only go in circles, hostage to the winds and currents.” Werner Heisenberg, “Das Naturbild der heutigen Physik,” in Die Künste im technischen Zeitalter, ed. Bayerische Akademie der schönen Künste (Darmstadt: Wissenschaftliche Buchgesellschaft, 1956), 46.
121. Martin Heidegger, The Fundamental Concepts of Metaphysics: World, Finitude, Solitude, trans. William McNeill and Nicholas Walker (Bloomington: Indiana University Press, 1995), 258.
122. Martin Heidegger, Parmenides, trans. André Schuwer and Richard Rojcewicz (Bloomington: Indiana University Press, 1992), 151–52.
123. Heidegger, Fundamental Concepts, 177.
124. Ibid., 259.
125. Ibid., 260.
126. Ibid., 263.
127. Mirowski, Machine Dreams, 1.
128. Short for “Semi-Automatic Ground Environment.” Developed in the 1950s, “SAGE logged the course, speed, altitude, and location of all aircraft flying over North America at any given moment, watching friend and foe alike.” Rid, Rise of the Machines, 77. See Mirowski, Machine Dreams, 350–51.
129. HoloLens technology is already being tested by the Israeli army for use in the training of soldiers and by military commanders. The obvious endpoint will be


the individual soldier operating in an augmented reality that is linked up with a GIG. “Mixed reality: Your world is the canvas” is the motto under which Microsoft invites everyone to purchase the Development Edition and start developing for Microsoft HoloLens. Microsoft HoloLens website, accessed May 9, 2019, https://www.microsoft.com/en-gb/hololens. Clearly, it is not too late to enter the Book of Genesis on the creator side.
130. Janich, Was ist Information?, 85.
131. Even intentionality and moral agency are remodeled according to the properties of computer networks and thus made to be compatible with the “world.” See, for example, Luciano Floridi, “Distributed Morality in an Information Society,” Science and Engineering Ethics 19, no. 3 (2013): 727–43; Wendell Wallach, Stan Franklin, and Colin Allen, “A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents,” Topics in Cognitive Science 2 (2010): 454–85.
132. Karin Knorr-Cetina and Alex Preda, “The Temporalization of Financial Markets: From Network to Flow,” Theory, Culture & Society 24, no. 7–8 (2007): 116–38; Karin Knorr-Cetina, “From Pipes to Scopes: The Flow Architecture of Financial Markets,” Distinktion 7 (2003): 7–23.
133. Knorr-Cetina and Preda, “Temporalization,” 130.
134. Ibid., 116.
135. Ibid., 117.
136. Knorr-Cetina, “From Pipes to Scopes,” 11. Here, too, the globalization/localization dialectic holds: “microsocial structures and relationships are what instantiate some of the most globally extended domains.” Karin Knorr-Cetina and Urs Brueger, “Global Microstructures: The Virtual Societies of Financial Markets,” American Journal of Sociology 107, no. 4 (2002): 907. But the coherence, apart from computing requirements, depends on immaterial factors: “social liquidity is contingent on knowledge and information being traded among participants; knowledge appears to be the medium of relationships in global fields” (ibid., 915).
137. Knorr-Cetina, “From Pipes to Scopes,” 12.
138. Ibid., 13.
139. Ibid., 16.
140. For an overview of the “Revolution in Military Affairs” and information warfare, see John Arquilla and David Ronfeldt, In Athena’s Camp: Preparing for Conflict in the Information Age (Santa Monica, CA: RAND, 1997).
141. Zygmunt Bauman, “Reconnaissance Wars of the Planetary Frontierland,” Theory, Culture & Society 19, no. 4 (2002): 85.
142. This is the inevitable outcome of Arquilla and Ronfeldt’s “netwar” concept, as demonstrated by General Stanley McChrystal’s account of his experiences in Iraq and Afghanistan: “to defeat a networked enemy we had to become a network ourselves.” Stanley A. McChrystal, “Becoming the Enemy,” Foreign Policy 185 (2011): 67. The networked enemy possesses “a constantly changing, often unrecognizable structure,” is “self-forming” (ibid., 68), and without vertical hierarchy. Information flows in all directions at all times and may lead to actions at any time and in any place. Faced with this situation, McChrystal adopted the—mimetic—“mantra: It


takes a network to defeat a network.” In the process of implementing it, the network logic displayed its totalizing, escalating tendency: “Incomplete or unconnected networks . . . are like firmly crafted gears whose movement drives no other gears” (ibid., 69). Turn this insight around, and the desirable endpoint becomes visible: a comprehensive and coherent network for informational flows that is accessible exclusively by your “side” and provides you with “actionable data” in real time, that is, “F3EA: find, fix, finish, exploit, and analyse” and, of course, repeat the cycle. “The idea was to combine analysts who found the enemy (through intelligence, surveillance, and reconnaissance); drone operators who fixed the target; combat teams who finished the target by capturing or killing him; specialists who exploited the intelligence the raid yielded, such as cell phones, maps, and detainees; and the intelligence analysts who turned this raw information into usable knowledge. By doing this, we speeded up the cycle for a counterterrorism operation, gleaning valuable insights in hours not days” (ibid.). For an account of the fate of social network analysis in American counterinsurgency, see David Knoke, “ ‘It Takes a Network’: The Rise and Fall of Social Network Analysis in US Army Counterinsurgency Doctrine,” Connections 33, no. 1 (2013): 1–10.
143. Bauman, “Reconnaissance Wars,” 88.
144. Ibid.
145. Derek Gregory, “Drone Geographies,” Radical Philosophy 183 (2014): 12; see Rob Kitchin and Martin Dodge, Code/Space: Software and Everyday Life (Cambridge: MIT Press, 2011), 16–18.
146. Gregory, “Drone Geographies,” 13.
147. Amin Parsa, “Knowing and Seeing the Combatant: War, Counterinsurgency and Targeting in International Law” (PhD diss., Lund University, 2017), 21.
148. Ibid.
149. Ibid., 177.
150. Gregory, “Drone Geographies,” 14.
151. James A. Thomson, president of the RAND Corporation, quoted in Grégoire Chamayou, Drone Theory (London: Penguin, 2015), 55.
152. Ibid., 56.
153. Ibid., 57.
154. Stock market trading is to a significant extent automated high-frequency trading. The speed of information transmission is of the essence; see Michael Lewis, Flash Boys: Cracking the Money Code (London: Penguin, 2014). Knorr-Cetina and Preda (“Temporalization”) provide an informative sketch of the development from nineteenth-century stock markets to networked trading and flow markets that can serve to illustrate the shift from space and social connection to speed and electronic connection. This also highlights the importance of the interplay between networks and grids as abstract producers and distributors of information and energy and the physical infrastructure on which they rest.
155. Discussions of the “accountability of algorithms” or the ethical implications of AI black boxes do not change this situation in any fundamental way, as is demonstrated by the suggestion that the solution might be programs that “can take an AI’s decision and work backwards through the program’s neural network to reveal


how a decision was made” and thus rule out the possibility of making the “wrong” decisions. Ian Sample, “Computer Says No: Why Making AIs Fair, Accountable and Transparent Is Crucial,” Guardian, November 5, 2017, https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial. The assumption remains that the translation process and algorithmic processing are potentially neutral regarding the outcome. This is the exact equivalent of the assumption that scientific instruments are neutral media that directly reveal natural laws.
156. For a critical discussion of this emphasis on exchange rather than abstract labor, see Anselm Jappe, “Sohn-Rethel and the Origin of ‘Real Abstraction’: A Critique of Production or a Critique of Circulation?” Historical Materialism 21, no. 1 (2013): 3–14.
157. Alfred Sohn-Rethel, “Das Geld, die bare Münze des Apriori,” in Paul Mattick, Alfred Sohn-Rethel, and Hellmut G. Haasis, Beiträge zur Kritik des Geldes (Frankfurt am Main: Suhrkamp, 1976), 43 (all translations from this text are mine).
158. Alfred Sohn-Rethel, Soziologische Theorie der Erkenntnis (Frankfurt am Main: Suhrkamp, 1985), 39 (my translation).
159. Marx, Capital, 166. A more literal translation would be the following: “They don’t know it, but they do it.”
160. Sohn-Rethel, “Das Geld,” 46.
161. There is a complicated story to be told here, that of the development from value objectivity, to the value form, to universal equivalence (the money form as the separate bearer of real abstraction). Looking beyond the somewhat controversial details of this story, what matters in the present context is that the erasure of qualitative differences leads to their subsequently being consigned to oblivion. Money, the physical representative of universal equivalence, would need to be “the material without qualities:” “The material of which money, strictly speaking, would need to be made, cannot exist in nature” (ibid., 62). The analogy to energy, a necessarily “undefined entity” (Elkana, “Helmholtz’ ‘Kraft,’ ” 9), and information, an abstract quantity of differences, should be obvious.
162. Sohn-Rethel, “Das Geld,” 69.
163. Ibid., 87.
164. Sohn-Rethel repeatedly hints at the role of military interests in the separation between intellectual and manual labor: “The destruction of the unity of head and hand in the crafts can be measured by the intrusion of mathematics into production technology, including building practices, and especially military architecture. No other single phenomenon contributed as much to the need for mathematics in the crafts as the development of firearms” (ibid., 95).
165. Ibid., 102.
166. Ibid., 110.
167. Ibid., 111.
168. Ibid., 112.
169. See Helmut Reichelt, “Marx’s Critique of Economic Categories: Reflections on the Problem of Validity in the Dialectical Method of Presentation in Capital,” Historical Materialism 15, no. 4 (2007): 3.


170. Theodor W. Adorno, “Introduction,” in The Positivist Dispute in German Sociology, trans. G. Adey and D. Frisby (London: Heinemann Educational, 1976), 12.
171. Ibid.
172. Theodor W. Adorno, Negative Dialectics, trans. E. B. Ashton (London: Routledge, 1973), 295–96 (translation modified).
173. Adorno, “Introduction,” 5.
174. Ibid., 11–12 (translation modified).
175. Ibid., 25 (emphasis added).
176. The danger of such mimesis can be measured by the shifts in the use of the term naturwüchsig in Marx, Sohn-Rethel, and Adorno. A descriptive term in Marx, it is used by Sohn-Rethel both affirmatively (for “natural” social labor; cf. Sohn-Rethel, “Das Geld,” 88) and critically (for the “natural” genesis of abstract categories of thought; ibid., 82), and finally in Adorno it becomes a synonym for blindness and destructive tendencies.
177. Reichelt, “Marx’s Critique,” 6.
178. Bruce Sterling, Shaping Things (Cambridge: MIT Press, 2005), 11.
179. Ibid.
180. Michel Foucault, The Birth of Biopolitics: Lectures at the Collège de France 1978–1979, trans. Graham Burchell (Basingstoke: Palgrave Macmillan, 2008), 226.
181. Philip Mirowski, Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown (London: Verso, 2013), 59.
182. Foucault, Birth of Biopolitics, 241.
183. Catherine Waldby, The Visible Human Project: Informatic Bodies and Posthuman Medicine (London: Routledge, 2000), 22.
184. See Philipp Schönthaler, “Vor Anbruch der Morgenröte,” in Philipp Schönthaler, Vor Anbruch der Morgenröte: Erzählungen (Berlin: Matthes & Seitz, 2017), 5–57, for an account of the life (and death) of Joseph Paul Jernigan and his transformation into the first visible human data set. The paradoxical combination of the mechanical paradigm and the “post-natural space of data” deserves detailed discussion (Waldby, Visible Human Project, 23). It would ultimately lead to thermodynamics and the computer as the places where the borders between reversibility and irreversibility are made passable.
185. Keith Tribe, “The Political Economy of Modernity: Foucault’s Collège de France Lectures of 1978 and 1979,” Economy and Society 38, no. 4 (2009): 680.
186. Jean-Luc Nancy, “Of Struction,” Parrhesia 17 (2013): 4.
187. Gilles Deleuze and Félix Guattari, “Treatise on Nomadology: The War Machine,” in Gilles Deleuze and Félix Guattari, A Thousand Plateaus (London: Bloomsbury, 2013), 409–92.
188. Neil Fligstein and Doug McAdam, A Theory of Fields (Oxford: Oxford University Press, 2012), 3.
189. Ibid., 10.
190. Ibid., 12–13.
191. Jean Baudrillard, Impossible Exchange, trans. Chris Turner (London: Verso, 2001), 7.
192. Ibid., 3.


193. Ibid., 8.
194. Ibid., 22.
195. Ibid., 199.
196. The text falls under the category of “post-foundationalism.” For a short critique and theoretical alternatives, see Daniel Steuer, “The Exception of Psychoanalysis: Adorno and Cavell as Readers of ,” in Psychoanalysis, Literature, and Culture, ed. Laura Marcus and Ankhi Mukherjee (Oxford: Wiley-Blackwell, 2014), 82–101.
197. Such a system becomes dysfunctional through expansion and intensification. The financial instrument of securitization illustrates this well. Securitization “involves the relocation of a building, good, or debt, into a financial circuit where it becomes mobile and can be bought and sold over and over in markets near and far” (Sassen, Expulsions, 118). This second step drives the first, the creation of debt. The more that various aspects of life can be indebted, the greater the number of financial instruments that can be created. Finance, as “a complex assemblage of actors, capabilities, and operational spaces,” develops on the back of an “epoch-making capability—the financializing of debt and assets of firms, households, and governments regardless of geopolitics, sovereign authority, legal system, state-economy relation, or economic sector” (ibid., 119). Interestingly, the “instruments” for financializing debt are developed by physicists, not economists: “It is the mathematics of physics and its models that are in play here, not the mathematics of microeconomic models. Exemplifying it all, Goldman Sachs’s backroom is stocked with physicists. The mathematics of the backroom is mostly well beyond the understanding of the highly paid executives of the boardroom” (ibid.).
198. I would like to thank Megan Archer, Roland Begenat, Tim Carter, Paul Davies, Antonia Hofstätter, German Primera, Jeanne Riou, and Philipp Schönthaler for discussions and valuable comments on this project throughout its various stages.

Chapter 3

Anthropokenosis and the Emerging World of War

Howard Caygill

Let him behold and gauge The power of mankind, Whom this harsh nurse, when he is unaware, Can with the smallest flick in part destroy And with scarcely greater motion—once Again without warning, totally exterminate. —Giacomo Leopardi, La ginestra

PRINCIPLES AND PROLEGOMENA

Daniel Steuer’s “Prolegomena to Any Future Attempt at Understanding Our Emerging World of War” provides an indispensable aid to orientation in an increasingly bewildering and dangerous world. But while Kant intended his prolegomena1 to provide a didactic “plan” or map of the Critique of Pure Reason for the use of “future teachers,” Steuer’s prolegomena are more tentative and their descriptions of an emerging world of war openly invite revision and amplification. But they share with Kant a concern with asking the “right question” in the right way. For Kant, this was the transcendental question of the conditions of possibility of experience and its objects, while for Steuer it is one of the conditions of possibility for an emergent world of war as well as our experience or understanding of it. The hypothesis informing his questioning is succinctly stated at the outset of section two of his prolegomena, where he proposes that “conventional categories no longer apply” for this emergent object of experience. The emerging world of war presents a new kind of


object that calls for new categories, new distinctions, and new logics if it is to be understood, let alone critically contested and resisted.

While sympathetic to the line of questioning pursued by Steuer’s prolegomena, the following reflections accept the invitation to augment and extend them. They hold that the question of an emerging world of war cannot be separated from a larger question of a retreating world of the human, or anthropokenosis, currently framed in terms of the discourses of the “Anthropocene” and the sixth or “human mass extinction event.” The twenty-first-century discourses of warfare analyzed by Steuer and those of planetary crisis and mass extinction are conceptually and historically inseparable; they all attempt to rethink the governance of violence on planetary/geological and on human/political scales. However, there are also significant differences between these complementary reflections on violence: the emerging world of war—as shown by Steuer’s prolegomena and elsewhere by Michael Dillon in his critiques of the military features of neoliberalism2—is organized around the principle of security while those of the Anthropocene and mass extinction theory are more ominously organized around emergent principles of catastrophe and survival.3


strategy of hunting down and targeting real and potential adversaries. Cut- ting through much of the prevailing nostalgia for the Obama administrations, Steuer hints that its practice of killing by executive order marked the adoption by a sovereign state of partisan forms of warfare operating at or even beyond the limits set by the laws of war. His prolegomena complement Chamayou in maintaining that a police or manhunt strategic doctrine indicates an emer- gent modality of violence linked to related developments in information and weapons technology. A consequence of the apparent dispersal and subcon- tracting of violence that characterizes such “global partisan warfare” is that any realm of “civil society” traditionally thought of as spatially, legally, and ethically distinct from the state is now figured as a war zone occupied by potential global partisans and their supposed adversaries.6 Understanding the properties of this zone requires that the principle of security that ostensibly motivates this strategy be supplemented by new principles in an increasingly unsecured if not unsecurable world. Steuer’s prolegomena propose that we understand the emergent field of war in terms of a medium of “interconnected mechanisms for the exchange of money, energy, and information” that, consistent with his major hypoth- esis, “can no longer be analyzed according to conventional political, social, economic, or military categories.” He convincingly shows how the expansive character of capital accumulation introduces an element of escalation into this medium/system and puts it on a potentially unstable war footing driven by demands for the provision of security on the part of constantly endangered populations. However, it is here that the prolegomena may themselves prove to be complicit with the principle of security by following too closely the work of Theodor W. Adorno and Alfred Sohn-Rethel in understanding these “interconnected mechanisms” in terms of concepts of exchange, reification, and “real abstraction.” It is focusing on the first element of the triad of money, energy, and information that makes possible this gesture toward the theoreti- cal securities of the Marxist critique of capital, but its presence perhaps needs to be defended more explicitly and convincingly. The concepts of energy and information—engaged since Maxwell’s demon in a struggle between entropy and negentropy—seem to be of a different order to the derivative concept of money, which seems to describe an historically specific modality of the energy/information exchange. Perhaps we should take more literally the opening proposition of Marx’s Capital that the “wealth of those societies in which the capitalist mode of production prevails” only appears (erscheint) as a “monstrous accumulation of commodities” but actually entails the com- mand over energy and information that assumes the guise of monetized com- modity exchange.7 The concepts of energy and information are crucial to another emergent contemporary critique of violence, in which violence on a global or species


scale is increasingly understood in terms of the two separate but intersecting discourses of the Anthropocene and the sixth mass extinction event.8 When viewed from the perspective of Steuer’s prolegomena, the violence of the Anthropocene and the mass extinction event can be understood in terms of the claim that human interventions in planetary energy and information exchanges are becoming sufficiently obtrusive and irreversible to merit the designation of a new geological period, the “Anthropocene,” and/or the emergence of a sixth mass extinction event. This theoretical complement to the “emerging world of war” informs but is not explicitly thematized by Steuer in the notions of world and globality as well as indirect references to Jay Forrester’s methodology for the Club of Rome reports.9 Yet perhaps the discourses of the “emerging world of war” and the Anthropocene or mass extinction event are not simply chronological coincidences but complemen- tary discourses of violence able to throw light upon each other. Perhaps emer- gent Anthropocene and mass extinction discourses are not just closely related but even aspects of the emerging world of war. Perhaps Steuer’s prolegomena remain too attached to the principle of security, too invested in mediums and systems, to sufficiently face the prospect that the emerging world of war between humanity and its host planet will provoke a catastrophe that may not be survivable by human life. The emerging world of war may prove to be one less for security than for survival—it may continue the human stasis but may also become a war against humans one of whose outcomes might be anthropokenosis or the planet’s emptying itself of the human world.10

THE PRINCIPLE OF SURVIVABLE CATASTROPHE

Looking back at the early twenty-first century, future generations—should there be any—will wonder how it was possible to live with such a sense of impending catastrophe. They will wonder how we grew used to it, how we could face “catastrophe,” know it was coming, be able to read all the signs, and yet carry on. These future generations may even accuse us of criminal stupid- ity, of nihilistic indifference, of simply not paying attention, or even finding comfort and perverse pleasure in living catastrophically. They may even won- der how some of us—to cite the line from Pasolini’s “Europa” that Jean-Luc Nancy used as an epigraph to L’equivalence du catastrophe—became “pris- oners of regret for our innocence” by pretending not to know in order not to do or pretending that it all just happened to us, and we were innocent victims or observers of a coming catastrophe. I doubt any tribunal of the future would be impressed by these implausible pleas of innocence, but they might suspend judgment sufficiently to try to understand how it was possible for this epoch to carry on living—did we not really believe in the coming catastrophe? And


I think they might find the answer to their question in the peculiarities of our concept itself of catastrophe, or more precisely of survivable catastrophe, which issued from Cold War military and strategic thinking and was carried over into environmental debates.11 Maybe there is too much innocence attending the idea of catastrophe—this term we use for organizing historical and geological time and our subjective experience of it. Walter Benjamin would surely savor the irony of the term he proposed as an alternative to the principle of “progress” that dominated the late nineteenth and early twentieth centuries becoming its substitute in the late twentieth and early twenty-first centuries. Its emergence is insepa- rable from the historic capture of immense energy in nuclear research and technology—military and civil—that for Jean-Luc Nancy provides a stan- dard for the equivalence of catastrophe: “The ‘equivalence’ of catastrophes here means to assert that the spread or proliferation of repercussions from every kind of disaster hereafter will bear the mark of the paradigm repre- sented by nuclear risk.”12 The human access to immense sources of energy lends catastrophe a planetary dimension from which not even the future is immune; the effects of a nuclear catastrophe “spread through generations, through the layers of the earth: these effects have an impact on all living things and on the large-scale organization of energy production, hence on consumption as well.”13 However, Jean-Luc Nancy’s use of the term “nuclear risk” situates his thought within a paradigm of risk and accident that orients catastrophe according to the modality of possibility—it might happen; it might not—with the corollary that it is in principle survivable. The notion that a sudden global nuclear energy surge—civil or military, inadvertent or intended—might unleash a planetary catastrophe remains for Nancy at worst a risk or an avoidable possibility rather than an event already irreversibly engaged. The political uses of catastrophe or “climate terror” have been explored by Sanjay Chaturvedi and Timothy Doyle in their Climate Terror: A Critical Geography of Climate Change.14 Their two main claims provide an interest- ing complement to Steuer’s prolegomena. The first is that climate change, “far from being a moment of rupture or radical departure, is a continuum marked by an ever-shifting triad of statecraft and its political economies, nature and power,”15 and the second is that “climate change geopolitical discourse” is “about controlling the contestation arising out of longstanding resistance against environmental degradation in many parts of the global south.”16 In spite of making fascinating observations on the use of catastrophist military concepts such as WMD and MAD, Chaturvedi and Doyle nevertheless offer a largely civil account of domination and resistance, not explicitly articulating “statecraft” and “power” with military violence. Steuer’s focus on new modes of military violence offers a valuable supplement to Chaturvedi and Doyle


but one that continues to subscribe to a theoretical commitment to catastrophe avoidance or survival through risk management and ideology critique. Perhaps more attention needs to be paid to the historic matrix for our current thinking of catastrophe, along with its limits and effects, in the Cold War strategic planning that contemplated initiating and surviving military and environmental catastrophe. In the conception of catastrophe that emerged in U.S. strategic discussions of the 1950s and early 1960s, the risk or possibility of nuclear catastrophe was secondary to the importance of surviving it. A major element in the proposition that catastrophe was survivable was the immunizing of civil populations against its threat; the strategic assumption that catastrophe was in principle survivable became a cultural and political fact that encouraged the dissemination of the idea of catastrophe while immunizing us against its horror through the conviction that it was survivable. The dangerous legacy of this paradigm is evident in the sluggish response of state and international actors to the threat posed by systemic planetary instability to the survival of human life on this planet.

Most genealogies of geological catastrophe begin in eighteenth- and early-nineteenth-century debates between catastrophic and continuist approaches to the fossil and stratigraphic records—such as Charles Lyell's arguments against Georges Cuvier in the early nineteenth century. However, it is important not to underestimate the more recent and uncanny mobilization of the idea of survivable catastrophe as an organizing principle of the strategic doctrine of total war during the Cold War. What Eisenhower criticized as the "military-industrial complex," or the immense investment in military research and development central to U.S. strategy in the Cold War, was organized not only around the concept of catastrophe—especially how to inflict it upon adversaries—but also around how to survive catastrophe. It was not just confined to the closed doors and secret meetings of strategic debate but became a cultural fact whose legacy continues, even in ways that seem remote from its military sources.

Jacob Hamblin has traced some aspects of this genealogy in his challenging Arming Mother Nature: The Birth of Catastrophic Environmentalism. Hamblin makes a powerful contribution to an emergent counter-history of the environmentalist movement by tracing the notion of environmental catastrophe to the strategic planning for environmental war. He shows that the discourses of world war and environmental catastrophe are closely related: "The language of the Cold War's global crisis and that of environmental crisis are strikingly similar. That left room for alternative views that postwar affluence, dissatisfaction with pollution and a new understanding of environmental hazards were the most important factors."17 In the emergent counter-history of environmentalism, the global theater of the Cold War and the destructive effects of nuclear weapons, along with civil/military initiatives dedicated to
survival, led to the elaboration of a planetary consciousness but one firmly rooted in a strategy of survivable catastrophe.

Hamblin examines a wide range of military discussions dedicated to combining atomic bombs with natural forces, to extending the possibilities of biological warfare, to gathering information and data about the coming planetary battlefield, and to achieving the satellite mapping of the planet. With respect to the last initiative, his analysis of the drive to gather planetary data during the International Geophysical Year in 1957 offers invaluable background to the circumstances that subsequently led not only to the measurement of global carbon dioxide levels but also to NATO's discussions of environmental/ecological warfare in the early 1960s Von Kármán Committee. He shows how catastrophe, both how to inflict and how to survive it, is at the center of all these discussions: "Environmental cataclysms could become part of the alliance's arsenal, with the help of a well-placed nuclear explosion."18 Hamblin shows how this strategic attention to environmental war passed over—often through the same scientists—into a civil environmental critique of planetary degradation and became part of the intellectual legacy of strategic environmental warfare. The measurements and data collected under the aegis of a willed global conflict between human adversaries using the weapon of artificially stimulated environmental catastrophe19 came to serve the emergent idea of an unintended global environmental catastrophe—induced by human actions, suffered by the planet, but also in principle survivable. The data collection, the programs for processing it, and the interpretative protocols for making sense of it all emerged from this matrix of survivable catastrophe.

Hamblin's account of the role of survivable catastrophe in environmental discourse may be supplemented with two further examples, one from the RAND Corporation20—the research and development associated with the U.S. Air Force and dedicated, at the height of the Cold War, to theorizing nuclear deterrence and first-strike capability—and the other from NASA's earth systems theory, which became the condition of possibility for conceptualizing the Anthropocene. Both cases offer examples of survivable catastrophist discourses at work and give a sense of how deeply they inform contemporary attitudes toward the future, especially the blithe inaction in the face of acknowledged catastrophe.

By the mid-1950s, strategic discussion in the United States was exploring two divergent views of nuclear catastrophe. One was nuclear deterrence, or the mutual avoidance of the catastrophic use of nuclear weapons, formulated by the mathematician John von Neumann (who, with von Kármán and Edward Teller, constituted a triumvirate of Hungarian émigrés who specialized in catastrophist scenarios), which was largely adopted by President Eisenhower. The other was a view of survivable catastrophe associated with the RAND Corporation whose most prominent exponent was Hermann Kahn,
who described this position at length in his Clausewitzian On Thermonuclear War, published in 1960. In it Kahn presented the results of internal RAND Corporation discussions directed against the doctrine of deterrence and maintained the prime strategic objective to be less the avoidance of nuclear catastrophe than the preparation of ways to survive it. His underlying argument was that strategists should prepare the option of launching nuclear war in the proven knowledge that it is possible to survive a retaliatory strike. On Thermonuclear War presents a program of strategic planning dedicated to ensuring the survival of the United States after nuclear catastrophe:

Our study of nonmilitary defense indicated that there are many circumstances in which feasible cultivation of military and non-military measures might make the difference between our facing casualties in the 2–20 million range rather than in a 50–100 million range.21

The nonmilitary measures included "civil defense"—for example, fall-out shelters—that complement the military objective of assuring the survival of the "command-and-control" structure vital to ordering, sanctioning, and executing a nuclear counterattack. Kahn predicted that "the bulk of their blow will be directed toward destroying, crippling, or degrading the operation of our retaliatory forces,"22 primarily the system of command and control. Civil survival was thus closely linked to the survival of U.S. command-and-control structures able to launch retaliatory nuclear strikes.

Part of Kahn and the RAND Corporation's strategic solution for ensuring the United States could withstand a nuclear catastrophe survives as the Internet. A central feature of the noosphere that is increasingly regarded as the key to surviving the mounting ecological catastrophes of the Anthropocene emerges from the same matrix that generated the principle of survivable catastrophe. The contributions of RAND Corporation researcher Paul Baran are especially important in this respect. His research was dedicated to theorizing a network capable of technically delivering Kahn's strategic demand for a survivable network of command and control. In a paper from 1960 prepared for the U.S. Air Force—"On a Distributed Command and Control System Configuration"—Baran cites Kahn's 1960 RAND Corporation paper "The Nature and Feasibility of War and Deterrence" as motivation for his invention of a survivable command-and-control network. Baran focused on the idea of decentralized networks, first linking AM radio stations bearing only two messages—initiate and cease attack—then the telephone network, moving finally to theorize a distributed communication network with built-in redundancy and the ability to transmit discrete message packets. Baran later reflected,

If the strategic weapons command and control systems could be more survivable, then the country's retaliatory capability could better allow it to withstand an attack and still function; a more stable position. But this was not a wholly
feasible concept because long-distance communication networks at the time were extremely vulnerable and not able to survive attack. That was the issue. Here a most dangerous situation was created by a lack of a survivable com- munication system. That, in brief, was my interest in the challenge of building survivable networks.23
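The design Baran went on to propose, described in the papers discussed below, rested on two properties: redundant routing across a distributed mesh, and the division of messages into discrete packets reassembled at the receiving station. A deliberately toy sketch may make the idea concrete. Everything in it (the node names, the topology, the message) is invented for illustration; it is not Baran's actual scheme, only a minimal rendering of how a communication can survive the destruction of parts of the network.

# A minimal, illustrative sketch (not Baran's design): a message is split
# into sequence-numbered packets, each packet is routed independently across
# a redundant mesh, and the receiver reassembles them. All names and the
# topology are invented for illustration only.
from collections import deque

# Hypothetical mesh: every node has several neighbours, so no single node
# is indispensable (the "distributed" property Baran argued for).
MESH = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "E"},
    "C": {"A", "B", "D", "E"},
    "D": {"A", "C", "F"},
    "E": {"B", "C", "F"},
    "F": {"D", "E"},
}

def find_route(src, dst, failed):
    """Breadth-first search for any route that avoids failed nodes."""
    frontier = deque([[src]])
    seen = {src}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in MESH[node] - failed:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # destination unreachable

def send(message, src, dst, failed=frozenset()):
    """Split a message into packets, route each independently, reassemble."""
    packets = [(seq, char) for seq, char in enumerate(message)]
    delivered = []
    for seq, payload in packets:
        route = find_route(src, dst, failed)
        if route is not None:          # the packet gets through on some route
            delivered.append((seq, payload))
    # Reassembly at the receiving station, ordered by sequence number.
    return "".join(char for _, char in sorted(delivered))

# Even with two relay nodes destroyed, the message still gets through.
print(send("retaliate", "A", "F", failed={"C", "E"}))   # -> retaliate

The design choice that matters for the argument is that no single node is indispensable: remove relays and the packets simply take whatever routes remain; catastrophe is rendered survivable at the level of network architecture.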

In a series of RAND Corporation papers ranging from "Reliable Digital Communications Systems Using Unreliable Network Repeater Nodes" in 1960 to "On Distributed Communications" in 1964, Baran proposed a distributed, decentralized network as the structure of communications most resistant to enemy attack. He also proposed that it be used to transmit bursts of digital information (later called "packets") that could arrive by any number of routes across the network to be reassembled at the receiving station. This would ensure that the network would be neither fatally compromised nor overloaded in the event of an attack. Both the network structure and the digital modality served to enhance the system's capacity to survive catastrophe. The important point is that catastrophe is formulated in such a way that it can become survivable, that is, not entirely catastrophic. Whenever we use the Internet, we are inhabiting a time and space surviving beyond the strategic phantasm of catastrophic nuclear warfare. This remains an important dimension in the surviving world of war that perhaps needs more critical attention in Steuer's account of emergent modes of military violence.

We can see a similar genealogical pattern emerging in increasingly prevalent environmentalist discourses of the Anthropocene, with their claim that the planet has entered a geological era in which the human is the dominant force for planetary change. The passage to the Anthropocene is manifest in climate change; drastic changes in the chemical composition of the atmosphere, geosphere, and the oceans; the extinction of biodiversity; and more positively the emergence of a noosphere or global communications network. Intimations of this discourse have existed since the 1950s, but it was formally named and announced by the atmospheric chemist Paul J. Crutzen in a manifesto, "The Anthropocene: Geology of Mankind," published in the journal Nature in 2002. It describes the catastrophic violence of human interventions on the working and the composition of the planetary system and the life that depends on it.24 Crutzen's basic premise is that humanity has become a "geological force" that is ushering in a new geological age—the Anthropocene, which succeeds the Holocene and Pleistocene epochs of the Quaternary. In a number of communications but most concisely in the 2002 article, Crutzen proclaims a "human-dominated geological epoch," an idea subsequently endorsed by Will Steffen and geologist Jan Zalasiewicz25 and, following them, by a swelling procession of philosophers, social theorists, activists, and artists.26 This change is variously dated either to the last ice age and the extinction of megafauna such as the mammoths, giant sloths, and saber-toothed tigers of
Ice Age fame, or to the creation of the world market after 1492 (this is the critical, Capitalocene version), or to the industrial revolution driven by fossil fuels, or to the explosion of the atom bomb, or to the postwar "great acceleration" of population and industrial and agrarian production.27 It is remarkable to observe how rapidly this catastrophist discourse developed and ramified—becoming at once a cognitive critique that would understand and measure anthropogenic change in the environment, a moral and political critique of its effects, and an aesthetic critique of its violent impoverishment of life and environment through the emergence of a world dominated by the human. It is also an exercise in survivable catastrophe since it advocates a number of political and technological strategies (including population control and climate engineering) through which humans might survive their own geological period.28

The way Crutzen frames his influential 2002 manifesto for the Anthropocene is revealing, for beyond its presentation as a scientific discovery it is also a call for moral and political reflection. It begins,

For the past three centuries, the effects of humans on the global environment have escalated. Because of these anthropogenic emissions of carbon dioxide, global climate may depart significantly from natural behaviour for many millennia to come. It seems appropriate to assign the term "Anthropocene" to the present, in many ways human-dominated, geological epoch, supplementing the Holocene—the warm period of the past 10–12 millennia.

And it ends,

Unless there is a global catastrophe—a meteorite impact, a world war or a pandemic—mankind will remain a major environmental force for many millennia. A daunting task lies ahead for scientists and engineers to guide society towards environmentally sustainable management during the era of the Anthropocene. This will require appropriate human behaviour at all scales, and may well involve internationally accepted, large-scale geo-engineering projects, for instance to "optimize" climate. At this stage, however, we are still largely treading on terra incognita.

Sandwiched in between this beginning and end of a one-page article is a damning critique of the catastrophist violence of human environmental depredations that are putting the survival of the planet at risk. Disarmingly, and with a degree of caustic irony, Crutzen seems to maintain that only a global catastrophe might save the planet from us. Failing that, hopes for survival rest with the engineers and managers who will require "appropriate behavior" from the human population to protect and secure them from the unintended consequences of their actions by assuming control over and a duty of care for
the running of the planet through geo-engineering projects such as seeding the upper atmosphere with sulfur in order to reflect sunlight and so mitigate
the greenhouse effect prompted by anthropogenic rises in methane and CO2. Yet the premise of surviving the Anthropocene is not only mobilized rhetorically; as a scientific fact, it depends on a specific construction of the earth and earth history that in its turn is closely related to the conceptual architecture of the Cold War. This construction of earth history has its roots in Crutzen's fusing of the cybernetic definition of the earth system with the idea of the human as a "geological force." Although he does not mention or cite James Lovelock, Crutzen presupposes a system similar to that of Lovelock's hypothesis, but viewed from a different perspective, that of a climate scientist. This is significant because climate scientists tend to work with small evidentiary bases—short sequences and temporal series—when compared with geologists, a temporal bias that favors the statistical effect of instability and maybe exaggerates the appearance of positive feedback. The extended temporal scales of the geologists tend to smother the oscillations prominent in shorter sequences and favor the effect of stability and maybe on their part exaggerate negative feedback. So Crutzen's definition points to an escalation of "the effects of humans on the global environment" over the past 300 years, in particular the "anthropogenic emissions of carbon dioxide" that will affect global climate for millennia to come. The Anthropocene is thus defined as an "in many ways human-dominated geological epoch" and its intellectual pedigree supplied by Antonio Stoppani's notion of "a new telluric force which in power and universality may be compared to the greater forces of Earth." Stoppani's anthropozoic—which is to say not just the age but a new geological epoch of the human—joins Vernadsky and Teilhard de Chardin's bio- and noospheres in the emergent concept of the Anthropocene. Crutzen agrees with Stoppani's approach inasmuch as it points to the human as a source of information and energy outside of the world system—a negentropic force—and "the growing role of human brain-power in shaping its own future and environment." He goes through the specific contribution of human energy and informatics to the various elements of the earth system, offering information worthy of Kant's mathematical sublime. With respect to the biosphere, the projected human population of 10 billion this century brings with it the prospect of 30–50 percent of land surface being dominated by humans, the destruction of rainforests, and sustained extinction events. With respect to the hydrosphere, more than 50 percent of all fresh water is used by humans, and fish stocks have been depleted. With respect to the atmosphere, energy use has grown sixteenfold in the twentieth century, provoking increases in concentrations of greenhouse gases; carbon dioxide has increased 30 percent and methane 100 percent over the past two centuries, reaching the highest levels for the past "400 millennia,"
which is a rhetorical way of saying 400,000 years—geological small change. All this of course is meant to justify the conclusion that a

daunting task lies ahead for scientists and engineers to guide society towards environmentally sustainable management during the era of the Anthropocene. This will require appropriate human behaviour at all scales, and may well involve internationally accepted, large-scale geo-engineering projects, for instance to “optimize” climate.

Although geo-engineering—planetary stewardship—seems to be a conclusion, I would argue it is in fact the premise of the Anthropocene argument. The proponents of the environmental catastrophe named as the Anthropocene have had very carefully to construct its object and the catastrophe befalling it—environmental violence or anthropogenic changes in the structure and functioning of planet earth. To do so they have implicitly reduced geological time and space to the parameters of the human in order to create a mise en scène in which the human can appear as a violent geological force. Crutzen's work and that of others associated with Anthropocene discourse, such as Will Steffen and Tim Lenton, largely work within the parameters of a discipline developed in the 1980s under the auspices of the U.S. government known as earth system science.29 It was intended as a successor to the God-slaying discipline of geology and is one of the outcomes of the postwar cybernetic revolution carefully cultivated by the U.S. and the Soviet governments.30 It is a discipline that historically constructed its own object: its founding document, "Earth System Science: A Programme for Global Change," was published in 1986 by the NASA Earth System Sciences Committee and is remarkable for deliberately constructing a critical object—the earth system—through assembling the technology of satellites and observation posts necessary to measure and report on it. The conditions of possibility of the experience of the planet as a systematic whole were a condition of possibility of its coming into existence as an object on the verge of catastrophe. One of this text's most fundamental achievements is its recalibration of geological time—ostensibly according to certain protocols of accurate measurement but effectively to reduce 4.5 billion to half a million years of planetary history. The sublime construction of the object and time of geological knowledge through stratigraphy and paleontology inaugurated by Kant's contemporary James Hutton, and carried through Cuvier to Darwin's inspiration, Lyell, and into the twentieth century, divided the earth history of 4.5 billion years into eons, eras, and epochs—Hadean, Archean, Proterozoic, and our current eon, the Phanerozoic.31 With some notable exceptions,32 this reductive genealogy of the earth and its routinely monstrous and destructive history is self-consciously and deliberately confined to recent geology—at best the quaternary, but usually
the upper Pleistocene and Holocene, the geologically recent ice age and its interglacial intervals. Geological time is redefined by earth system science as spanning the last 500,000 years, which not so coincidentally corresponds to the inhabitation of the earth by hominoid and later human species, so adjusting geological time and the events measured in it to human time. Variations in climate, sea level, and ice coverage that are minor when viewed in terms of the deeper geological time scale appear catastrophic within the violently restricted humanized definition of geological time. Earth system science and its compression of geological chronology is a major condition of possibility for Anthropocene discourse, allowing for the "transcendental illusion" or impression that humans are a catastrophic geological force and that they comport themselves violently with respect to a victimized planet.

But Crutzen's almost involuntary references to meteorite strikes, world wars, and pandemics—to mass extinctions and other catastrophes striking our own private catastrophe of the Anthropocene—might appear as the return of a geological repressed. When viewed according to geological time measured in terms of billions of years, it is apparent that the earth routinely engages in acts of extreme destruction and for most of its history operated at temperatures and with sea levels incompatible with human life. For most of its existence, it was indifferently hostile not to life as such but to that peculiar Cenozoic variant that includes human life. But it would be unwarranted to describe terrestrial inhuman destruction as necessarily catastrophic for the planet since this would humanize the very dangerous—for us—place we inhabit. Any catastrophic "violence" we could inflict on the planet is well within its normal, that is to say, geological, ranges of variation. The melting of the ice caps—themselves a recent geological anomaly—and the rise of geologically low sea levels, the rise of carbon dioxide levels, and thus global warming are all adjusted to a human but not a planetary scale. The Anthropocene discourse's worst-case scenario of achieving levels of 400–500 parts per
million of CO2 during this century contrasts with a geological mean of above 4,000 parts per million, a level unimaginably catastrophic for us but normal business for the planet. Anthropocene discourse achieves the subreption of viewing the survival of human life as the survival of the planet. However, the planet is already geologically traversing its sixth, or human, mass extinction event, and while it is perhaps the fastest, it by no means threatens to be the worst in comparison with earlier mass extinction events in planetary history. By reducing geological time and space to a human measure, Anthropocene discourse and earth system science produce an illusion of planetary catastrophe. But while much philosophical discourse on the Anthropocene remains precritical—Sloterdijk, Stiegler, Malabou, Chakrabarty—there is also another philosophical response that violently criticizes the hubris of the Anthropocene's understanding of environmental violence. This emerges out of Gaia
theory—the cybernetic precursor and uneasy companion of earth system science—with James Lovelock himself and more recently (2015) with Bruno Latour's Face a Gaia: Huit Conferences sur le nouveau regime climatique. These texts step out of the limits of the Anthropocene critique of violence and claim that humans are not just the perpetrators but also the victims of the violence in an emerging planetary war. In perhaps one of the most extravagant humanizations of the planet, Gaia, our erstwhile host, becomes an enemy who acts violently against us, taking revenge for human depredations33—a theme already contemplated by the poet Leopardi in La ginestra, where "exterminatory nature" is at war with the human race. And there is not much question about who will win, for it is hubris not only to think that our capacity for inflicting and surviving catastrophe is significant at a planetary level or according to geological time scales but also to think the planet would deign to war with humanity.

MASS EXTINCTION: ANTHROPOKENOSIS OR CENOCENTRISM?

We can get some idea about the equivocations of Anthropocene discourse by reflecting on the suffix "-cene," used in the terms Pleisto-, Holo-, and of course Anthropocene. It is supposed to refer straightforwardly to an age of the geological epoch of the Cainozoic—the epoch of the "new life" also referred to as the quaternary. If we put this into the context of geological time, then the Cainozoic is but the latest of three epochs making up the most recent of the four geological eons that divide the 4.5 billion years of earth history. This is the eon of the Phanerozoic or the eon of "visible life"—phanero/zoe—that has lasted 550 million years so far and is divided chronologically into the Paleozoic (old life), the Mesozoic (mid-life), and our Cainozoic (new life). The Cainozoic dates from the fifth mass extinction event of 66 million years ago, caused probably by massive volcanic eruptions and possibly by a meteorite strike that left a 150-kilometer-diameter crater near the Yucatan peninsula, and famously coinciding with the extinction of the dinosaurs. This extinction event was the subject of one of Lovelock's earlier and lesser-known popular books, cowritten with Michael Allaby, The Great Extinction: What Killed the Dinosaurs and Devastated the Earth? (1983), which ends with a chapter on "Survival." Here Lovelock and Allaby, referring to thermonuclear war, claim not only that "the planet is well able to survive the worst disaster we can imagine ourselves engineering" but also that "human activity is, or could be, life threatening to the human species generally."34 Yet this position, later qualified by Lovelock in The Revenge of Gaia, underestimates the gravity of the human extinction event, just as does Anthropocene
discourse. The first part of Lovelock and Allaby's proposition—the survival of the planet—is probably accurate, but the survival of the human species is by no means as assured.

Although these comments on survival come at the end of a popular treatment of a prior extinction event, they are consistent with the cybernetic foundations of Gaia theory that ally it with earth system science. The first of many statements of the Gaia hypothesis/principle, the 1979 Gaia: A New Look at Life on Earth, has a chapter called "Cybernetics" that concludes, "The only difference between non-living and living systems is in the scale of their intricacy. . . . [L]ike life itself, cybernetic systems can emerge and evolve by the chance association of events. All that is needed is a sufficient flux of free energy to power the system."35 Thus, the Gaia hypothesis sought "evidence of planet-sized control systems using the active processes of plants and animals as component parts and with the capacity to regulate the climate"36 and recommended care in avoiding the "cybernetic disasters of runaway positive feedback or of sustained oscillation."37 The cybernetic theory of nonlinear phase changes is applied to the planet and to planetary catastrophe; indeed, had the name Gaia not been suggested by Lovelock's neighbor William Golding, he might have gone for "Biocybernetic Universal System Tendency." It encompasses the black boxes of the biosphere (Vernadsky's term), atmosphere, hydrosphere, upper lithosphere, and their feedback loops and other regulatory control instances. Gaia is a disturbing avatar of earth system science, especially in the way that it uses a mythical figure as a fig leaf for a basically cybernetic project. I suspect the Anthropocene might be doing the same work, except that the "cybernetic hypothesis," as Tiqqun called it, has become far more prescriptive. That is to say, the cybernetic Gaia, earth system science, and the Anthropocene are primarily survivalist discourses in which geo-engineering and technical strategies for surviving catastrophe a priori come first—and the Anthropocene as a form of legitimation, afterward.

As we have seen, geology's "transcendental aesthetic" of deep time extinguished not only any divine role in the creation of the planet but also, preemptively, any human pretension to being a "geological force" on it. In the face of this, when Crutzen speaks of "millennia" to come or of the "last 10–12,000 years" or even 400 million years, his time frame is the geological contemporary. The implications of this reduction of geological time to the limited time frame of earth system science, made possible by the army surplus planetary surveillance technology of the Cold War, become apparent in popular versions of survivalist Anthropocene discourse such as Gaia Vince's Adventures in the Anthropocene: A Journey to the Heart of the Planet We Made, published in 2014 and winner of the Royal Society Winton Prize for Science Books. The book, written in the optimistic spirit of Lovelock and
Allaby, exploits the compression of geological time familiar from Anthropocene literature as well as a view of survival as possible through information and other technologies: "The same ingenuity that allows us to live longer and more comfortably than ever before is transforming Earth beyond anything our species has experienced before. . . . Welcome to the Anthropocene, the Age of Man."38 But she forgets to add that our species' "experience," even if defined generously as 250,000 years, adds up to a tiny fraction of earth's history, which has displayed fluctuations and extremes of temperature and sea levels far in excess of the current worst prognoses. She also adds, inaccurately, "Geologists are calling this new epoch the Anthropocene, recognising that humanity has become a geological force on a par with the earth-shattering asteroids and planet-cloaking volcanoes that defined past eras."39 Vince looks to an (unlikely) fusion of barefoot geo-engineering and embedded entrepreneurship as a means of surviving an emergent inhospitable and hostile planet.

The idea of surviving catastrophe that is part of the cybernetic legacy, Gaia theory, and earth system/Anthropocene theory begins to appear increasingly implausible. The very categories forged in the continuationist paradigm of nineteenth-century geology are themselves changing under a renewed and urgent appreciation of catastrophic interruptions such as mass extinction events. The previous five mass extinction events did not succeed in extinguishing life, but there is no precedent in them for specialized and highly developed forms of life surviving. From this perspective, the history of the earth is not so much one of appearing as disappearing. The era of appearing life, the Phanerozoic, should perhaps be understood in terms of the aphaneros, its disappearance. In this view, the human extinction event is literally the human extinction event, in which the planet empties itself of human life: anthropokenosis. It would take extreme confidence in human exceptionality to believe that humans will, through the technical manipulation of energy and information, find a way of avoiding their extinction. Geology indeed teaches that human extinction will prove a geologically modest event. Zalasiewicz, in his philosophical speculation The Earth after Us, estimates that the sedimentary layer left by the human, "Anthropocene" era—itself a modest geological period—will, after 100 million years, be measured in centimeters and millimeters. Even the human mass extinction event, despite its unusual rapidity, will only have a modest impact on terrestrial life compared to previous catastrophes. Even if extinction is deferred for millennia, there is in terms of geological chronology very little statistical difference between the human event and the meteorite strike on the Yucatan peninsula millions of years ago that contributed to ending the Mesozoic and its megafauna—dinosaurs. So perhaps we should describe ourselves in terms of a blow that strikes the earth producing a relatively minor catastrophic impact rather than as a geological force capable of destroying the earth—we have destroyed ourselves but not the earth. Perhaps
this will prove a more appropriate ethical premise for thinking catastrophe than cybernetics and earth system science.

This premise would also question the emergent radical ecological principle of "Cenocentrism," or the "plea for a continuity of the 'Cenozoic achievement,' for the continuity of a whole evolutionary and ecological community that reaches back into the distant origins of life on Earth, but that takes its primary form from the specific period after the last mass extinction event: Cretaceous—Tertiary (K–T) event."40 This critique of human exceptionalism nevertheless envisages a prominent role for the human in the care for the Cenozoic legacy of life and warns against anthropokenosis as a possibility rather than an event already irreversibly engaged. The "cene" of "Cenocentrism" as well as of "Anthropocene" is meant to refer to the latest stage in the classification of 540 million years of geological time according to forms of life—Paleozoic, Mesozoic, and Cainozoic—cainos being the Greek term for "new." Until the 1970s, this etymology was respected as a way to describe the last 66 million years of life since the last, K–T, mass extinction event. But the advent of earth system science in the 1980s coincided with the contraction of caino to ceno; and while caino- means new, ceno- means empty, so that the contraction inadvertently converts the new into the empty—the Anthropocene literally referring to a world empty of humans. It is less the age of the planetary domination of the human than the age of its absencing, of anthropokenosis.

Steuer's attention to the threat to human autonomy posed by the violent implementation of systems of money, information, and energy in an emergent world of war needs to be supplemented by attention to wider changes in the composition of the global battlefield. It needs also to confront the potentially fatal strategic error of underestimating the threat of humanity not surviving its own civil wars and the implied struggle with the planet that hosts them. It could be that the lament for a drastically diminished autonomy and potential for human freedom overlooks the clear and present danger of species extinction. Perhaps it is necessary to find a way to fold together deep geological time with the compressed chronologies of human history. Jay Winter's classic account of the mourning of the dead of World War I, Sites of Memory, Sites of Mourning, contains a beautiful description of Lutyens's Cenotaph, originally intended as a temporary monument but made permanent by public demand. The Ceno- of the Cenotaph was meant literally: it was "an empty tomb, and by announcing its presence as the tomb of no one, this one became the tomb of all who had died in the war. In the heart of London, in Whitehall, in the middle of the street adjacent to the Houses of Parliament—the seat of government . . . it brought the dead of the 1914–18 war into history."41 The emergent world of war analyzed in Steuer's prolegomena is part of the more general emptying of the human world that is the Anthropocene. Steuer describes conflict taking place at what van Dooren calls the "dull edge of extinction"—an emerging
world of war all the more futile for its blindness to the growing powerlessness of humans.

ACKNOWLEDGMENTS

My thanks to Daniel Steuer for the rare and welcome opportunity to reflect at length on his thinking about emergent worlds of war, and to the participants at seminars on the Anthropocene at Lüneburg, Nijmegen, Leuven, and Oporto, as well as to the students on the Department of Philosophy course in "Philosophies of the Anthropocene" at the University of Paris 8 in 2017.

NOTES

1. , “Prolegomena to Any Future Metaphysics That Will Be Able to Come Forward as Science (1783),” trans. Gary Hatfield, in Immanuel Kant, Theoretical Philosophy after 1781, ed. Henry Allison and Peter Heath (Cambridge: Cambridge University Press, 2002), 29–169. 2. Michael Dillon and Julian Reid, The Liberal Way of War: Killing to Make Life Live (London: Routledge, 2009); Michael Dillon, Biopolitics of Security: A Political Analytic of Finitude (London: Routledge, 2015). 3. In each case, the principle serves as an arche—an at once organizing and domi- nating principle for the constitution of objects and discourses. The history of regimes of principles has been provocatively described by Reiner Schürmann in Broken Hegemonies in terms of three overlapping principles or “hegemonic fantasms” of the One, Nature, and the Subject, which serve to constitute objects and the ways in which they are understood. Schürmann believed in the early 1990s that the history of the institution and destitution of “hegemonic fantasms” was undergoing a “diremption,” or fundamental break, that opened the possibility of new ways of thinking and acting. In the second decade of the twenty-first century, the emerging world of war and the Anthropocene seem less a break than an uncanny fusion of the hegemonic fantasms of the One, Nature, and the Subject under the new principalities of catastrophe and sur- vival. Reiner Schürmann, Broken Hegemonies, trans. Reginald Lilly (Bloomington: Indiana University Press, 2003). 4. Further analyzed in Howard Caygill, “Perpetual Police? Kosovo and the Eli- sion of Police and Military Violence,” European Journal of Social Theory 4, no. 1 (2001): 233–42. 5. Grégoire Chamayou, Manhunts: A Philosophical History, trans. Steve Rendall (Princeton, NJ: Princeton University Press, 2012); Grégoire Chamayou, Drone Theory, trans. Janet Lloyd (Harmondsworth: Penguin, 2015). 6. For further discussion of the properties of this zone, see Howard Caygill, “Arcanum: The Secret Life of State and Civil Society,” in The Public Sphere from

Outside the West, ed. Divya Dwivedi and Sanil V. (London: Bloomsbury Press, 2015), 21–40. 7. If Capital is the exposition of capital’s logic of essence through its Erschei- nung in terms of money, exchange, and commodities, then energy and information will constitute its logic of the notion. For a recent philosophical analysis of energy, see Michael Marder, Energy Dreams: Of Actuality (New York: Columbia University Press, 2017). 8. The two ecological critiques of violence—one designating an epoch, the other an event—are often confused with each other but have quite distinct provenances and implications for the emerging world of planetary catastrophe. The Anthropocene, as will be shown below, remains close to state-actor views that emphasize the carbon origins of ecological disturbance, while mass extinction event theory emphasizes the nitrogen imbalance at the roots of the destruction of biodiversity that is the cur- rent mass extinction event. While the management of carbon emissions provides a plausible strategic alibi for ensuring (some) human survival, nitrogen management is more problematic insofar as rising nitrogen levels are directly connected with the production of food for an expanding human population. The link between nitrogen production, warfare, the so-called green revolutions, and mass extinction has been forcefully argued for by Vandana Shiva, who shows how the “paradigm of industrial agriculture is rooted in war: it very literally uses the same chemicals that were once used to exterminate people to destroy nature.” Vandana Shiva, Who Really Feeds the World (London: Zed, 2017), 2. 9. The proximity of the new world of war to planetary critiques of violence is more explicitly evident in Max Liljefors’s comparison of satellite images of the earth in his article in this volume. 10. This option is considered to be in principle avoidable in Anthropocene dis- course, while for mass extinction theorists it is already ineluctably on course. Thom van Dooren’s Flight Ways: Life and Loss at the Edge of Extinction (New York: Columbia University Press, 2016) is as much a lament for a world emptying itself of humans as it is for the endangered species of birds whose plight he narrates. His use of the notion of the “edge of extinction” might be extended to the predicament of the human species, which even if its numbers are increasing is nevertheless engaged in a process of auto-extinction. 11. Elizabeth Kolbert notes that “Catastrophozoic” competes with “Anthropo- cene” as a term to describe the emerging world of environmental catastrophe. Elizabeth Kolbert, The Sixth Extinction Event: An Unnatural History (New York: Picador, 2015), 107. 12. Jean-Luc Nancy, After Fukushima: The Equivalence of Catastrophes, trans. Charlotte Mandell (New York: Fordham University Press, 2015), 3. 13. Ibid. 14. Sanjay Chaturvedi and Timothy Doyle, Climate Terror: A Critical Geogra- phy of Climate Change (Basingstoke: Macmillan, 2015). 15. Ibid., x. 16. Ibid., 43, 4. See also Rob Nixon’s now classic study Slow Violence and the Environmentalism of the Poor (Cambridge, : Harvard University Press, 2011).

17. Jacob Hamblin, Arming Mother Nature: The Birth of Catastrophic Environ- mentalism (Oxford: Oxford University Press, 2013), 9. 18. Ibid., 141. 19. One of the theaters of conflict in which this was applied was the Vietnam War. Hamblin’s discussion of environmental conflict by means of defoliation and dis- ruption of food chains provides a chilling backdrop to the brief episode in which Gaia theorist James Lovelock proposed the chemical detection of combatants concealed beneath tropical forest cover. For Lovelock’s Vietnam-related work and his contribu- tion to the broader field of environmental warfare in his work for the National Oce- anic and Atmospheric Administration, see John Gribbin and Mary Gribbin, He Knew He Was Right: The Irrepressible Life of James Lovelock and Gaia (London: Allen Lane, 2009), 100–2. For the context of the work for the National Oceanic and Atmo- spheric Administration, see Jacob Hamblin’s chilling Oceanographers and the Cold War: Disciples of Marine Science (Seattle: University of Washington Press, 2005). 20. For a more extended account, see Howard Caygill, “Strategic Intervention and the Digital Capacity to Resist,” in Interventions in Digital Culture, ed. Howard Caygill, Martina Leeker, and Tobias Schulze (Lüneburg: Meson Press, 2017), 45–60. 21. Hermann Kahn, On Thermonuclear War (Princeton, NJ: Princeton University Press, 1960), 98. 22. Ibid., 165–66. 23. Paul Baran cited in John Naughton, A Brief History of the Future: The Ori- gins of the Internet (London: Phoenix, 2000), 96. 24. Paul J. Crutzen, “The Anthropocene: Geology of Mankind”, Nature , Vol. 415 (3 January 2002), 23. Crutzen had earlier contributed directly to the debate around sur- vivability and the use of nuclear weapons with the concept of a nuclear winter, formu- lated in his article with John W. Birks, “The Atmosphere after a Nuclear War: Twilight at Noon,” in Paul J. Crutzen: A Pioneer on Atmospheric and Climate Change in the Anthropocene, ed. Paul J. Crutzen and Gunther Brauch (Switzerland: Springer, 2016), 125–52. He endorses the argument that “far more people could die from the climatic and environmental consequences of a nuclear war than directly because of the explosions” and calls for a complete international ban on nuclear weapons. Paul J. Crut- zen, “The Background of an Ozone Researcher: A Brief Biography,” in Paul J. Crutzen: A Pioneer on Atmospheric Chemistry and Climate Change in the Anthropocene, ed. Paul J. Crutzen and Gunther Brauch (Switzerland: Springer, 2016), 43. 25. See the contributions in the issue, Jan Zalasiewicz, “The Anthropocene: A New Epoch of Geological Time?” ed. Mark Williams et al., in Philosophical Transactions of the Royal Society 369, no. 1938 (2011). Jan Zalasiewicz’s nuanced position is perhaps best stated in his remarkable The Earth after Us: What Legacy Will Humans Leave in the Rocks? (Oxford: Oxford University Press, 2008). 26. For an introduction to the range and diversity of this discussion, see the collection Art in the Anthropocene: Encounters among Aesthetics, Politics, Environ- ments and Epistemologies, ed. Heather Davis and Etienne Turpin (London: Open Humanities Press, 2015). 27. The arguments for a Capitalocene are presented by the contributors to Jason W. Moore’s collection Anthropocene or Capitalocene? Nature, History and

the Crisis of Capitalism (Oakland: PM Press, 2016); a clear synthetic account of the position is provided by Ian Angus’s Facing the Anthropocene: Fossil Capital- ism and the Crisis of the Earth System (New York: Monthly Review Press, 2016), which may be read with Christophe Bonneuil and Jean-Baptiste Fressoz’s The Shock of the Anthropocene: The Earth, History and Us, trans. David Fernbach (London: Verso, 2016). The links between the Anthropocene and the “great acceleration” of postwar capitalism are described by J. R. McNeill and Peter Engelke, The Great Acceleration: An Environmental History of the Anthropocene (Cambridge, MA: Harvard University Press, 2014). Their focus on the threat to biodiversity is unusual for Anthropocene literature. 28. For critical accounts of geo-engineering, see Clive Hamilton, Earthmasters: The Dawn of the Age of Climate Engineering (New Haven, CT: Yale University Press, 2013); and Oliver Morton, The Planet Remade: How Geoengineering Could Change the World (London: Granta, 2015). 29. Will Steffen et al., Global Change and the Earth System: A Planet under Pressure (Berlin: Springer, 2004); and Tim Lenton, Earth System Science (Oxford: Oxford University Press, 2016). See too the proceedings of the important 2003 Dahlem Workshop, Earth System Analysis for Sustainability, ed. Hans Joachim Schnellnhuber et al. (Cambridge: MIT Press, 2004). 30. For the surprising links between cybernetic and catastrophic discourses, see the work of Jean-Pierre Dupuy, Aux origines des sciences cognitives (Paris: Éditions La Découverte, 1999); and Pour un catastrophisme eclaire: Quand l’impossible est certain (Paris: Seuil, 2004). 31. See Douglas Palmer, Earth Time (Chichester: John Wiley, 2005); and Stephen Baxter, Revolutions in the Earth: James Hutton and the True Age of the World (London: Weidenfeld and Nicholson, 2003). 32. Most notably Tim Lenton and Andrew Watson’s tour de force, Revolutions That Made the Planet (Oxford: Oxford University Press, 2011). 33. “If we fail to take care of the Earth, it surely will take care of itself by no longer making us welcome.” James Lovelock, The Revenge of Gaia: Why the Earth Is Fighting Back and How We Can Still Save Humanity (London: Penguin, 2006), 3. While Latour criticizes both Lovelock’s cybernetics and his later view of the vengeful war of the planet against humanity, he nevertheless views earth’s geohistory “as a generalised state of war.” Bruno Latour, Face a Gaia: Huit Conferences sur le nouveau regime climatique (Paris: Éditions La Découverte, 2015), 98. 34. Michael Allaby and James Lovelock, The Great Extinction: What Killed the Dinosaurs and Devastated the Earth? (London: Martin Secker and Warburg, 1983), 178–79. 35. James Lovelock, Gaia: A New Look at Life on Earth (Oxford: Oxford Uni- versity Press, 1979), 58. 36. Ibid. 37. Ibid., 123 38. Gaia Vince, Adventures in the Anthropocene: A Journey to the Heart of the Planet We Made (London: Chatto & Windus, 2014), 4.

39. Ibid., 4–5. 40. Van Dooren, Flight Ways, 42. 41. Jay Winter, Sites of Memory, Sites of Mourning: The Great War in European Cultural History (Cambridge: Cambridge University Press, 2004), 104.

Chapter 4
War by Algorithm: The End of Law?
Gregor Noll

How do we imagine future forms of warfare? For some time now, we have been captivated by the image of "killer robots" roaming the battlefield "autonomously." I believe we are wrong to focus on machines. They divert our attention from digitalized forms of warfare more broadly. The origins of these forms of war date back to World War II, and their use has increased markedly since the beginning of this millennium, both among Western powers and elsewhere.

A glimpse at international lawmakers' ongoing deliberations about "Lethal Autonomous Weapons Systems" (LAWS) indicates that attempts at regulation are fraught with difficulty. Since 2014, there has been a series of diplomatic meetings on LAWS within the framework of the Convention on Certain Conventional Weapons, a UN treaty banning or restricting the use of specific types of weapons considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.1 The reports from these meetings reflect the kinds of problems that lawmakers run into. The rapporteur of the 2014 meeting tells us that while "a number of experts and delegations mentioned . . . the potential of LAWS to be a real game changer in military affairs,"2 "fully autonomous weapons systems do not yet exist and . . . there [are] diverging views as to whether these weapons might be developed in the near or long-term future, or not at all."3 "Some delegations indicated that there were no plans for developing such systems."4

But how can major military powers afford not to develop a potentially game-changing weapons system? DARPA, the U.S. Defense Advanced Research Projects Agency, has been running programs on unmanned systems and on the computerization of warfare since the Vietnam War.5 For quite some time, humans have been becoming more and more absent and machines more and more present. The real problem is our understanding of "autonomy."

The LAWS experts appear to have stepped into an area of conceptual quicksand:

A number of delegations proposed considering LAWS in relation to human involvement. For example, the concept of "meaningful human control" was proposed by some delegations as a framework to assess the legal, moral and ethical aspects of LAWS. Although there was broad interest in this concept, it was noted that there would be difficulties in identifying its scope. Others suggested that "meaningful human control" should be considered at different stages of the use of LAWS, such as in weapon selection, deployment, target selection and attack. However, some criticised the subjective nature of "meaningful human control" and expressed a preference for "appropriate human judgement" instead.6

To be acceptable in terms of law, morals, and ethics, autonomous systems must allow for “meaningful human control” or purportedly objective “appro- priate human judgment.” Would not such a system cease to be governed by itself, then, and become heteronomous instead of autonomous? The only LAWS that are lawful, moral, and ethical turn out not really to be LAWS at all.7 Let me illustrate how noxious the interaction between insufficiently defined autonomy and insufficiently defined control might be. In a 2016 arti- cle, Christof Heyns, the former UN rapporteur on the right to life, suggested that we abstain from using autonomous weapons that cannot be meaningfully controlled by humans, even if they can be used in compliance with the laws of war. He defined autonomous systems as systems that, unlike predictable automatic systems, “may sometimes respond in unpredictable ways, and are, to the extent this happens, outside human control.”8 Unpredictability, I would think, is not just a characteristic of AI-based weapons. The problem of unpre- dictability haunted the development of nuclear weapons and of particular methods of warfare, such as the massive aerial bombing of urban centers in World War II that, in 1943, prompted firestorms that took UK Bomber Com- mand by surprise. In both cases, military planners went out of their way to bring the behavior of weapons systems back under human control. In a second step, Heyns argues that meaningful human control demands “that autonomous weapons remain tools in the hand of humans” and sug- gests that we ask whether the weapons in question “complement or substitute human decision-making.”9 Given the overly broad definition of autonomous weapons, Heyns’ concept of meaningful human control struggles to get the genie back into the bottle. If the requirement of meaningful human control merely demands that autonomous weapons remain tools in the hands of humans, then this does not amount to much more than asking for the sin- gularity not to happen—the singularity being commonly understood as the

point at which runaway AI technology substitutes humans with machines as the masters of the world. Any military planner would agree with Heyns, and the human rights experts he represents, that the singularity is undesirable. In regulatory terms, however, Heyns inadvertently legitimizes the development and use of those digitalized weapons systems that cannot, as far as we can tell, bring about the singularity. What was intended as regulation turns out to be liberalization. So everything depends on how rigidly I define autonomy. If I believe that only full emancipation from its human inventors and operators qualifies a system for that label, I will agree that no state has yet developed such a sys- tem and that there is no need for new norms. If I believe that autonomy is a matter of degree, and that a certain level of automaticity is enough to qualify a system as autonomous, I might think that such systems are under develop- ment or even already in use today. And if I am skeptical about the efficacy of existing international law in regulating complex systems, I will call for new norms.10 These norms are supposed to change something governed by itself (the autos of autonomy) into something governed by the law. An autonomous weapons system subjected to the heteronomos of the law would no longer be an autonomous system at all. What unsettled us is dissolved through a linguistic trick. Even if we resist these tricks, lawyers foist on us a binary choice: either autonomous or heteronomous, either man or machine, either within or beyond contemporary law. Lawyers tend to reduce questions about novel technolo- gies to questions about hierarchy: Do humans ultimately rule over technology or the reverse? I believe that it is premature, if not wrongheaded, to pose these questions. For there is a third possibility: the rule of law may be related to the rule of algorithms in ways that cannot be reduced to a simple hierarchy. Is it possible to subject algorithmic forms of warfare to the rule of law? This is the question I will pursue in the present chapter. I start by asking how code rules (“How Does Code Rule?”). I do this by looking at a paradigmatic AI weapons system proposed in 2013. It possesses a trait that is representative of the integration of artificial agents and human agents: while the system suggests that the human operator is at the top of the decision hierarchy, exercising something akin to free will, she is really tethered to the system’s reductionist logic according to which truth emerges from the signal strength of neural connections, not from anything like con- scious human cognition. From there, I move on to the question of the rule of law (“How Does the Law Rule?”), for which this reductionism causes very real problems. As soon as we seek to apply contemporary legal rules to algorithmic weapons systems, standard legal questions about human intention turn out to be unanswerable. The section “Why Do Code and Law Rule in Such Different Ways?” is the centerpiece of this chapter. It shows that these

problems cannot be overcome by means of new legislation. Since the advent of monotheism, I shall argue, central to the law itself has been its study by humans. AI reconfigures this process in such a way that human study only begins once the system has already made a decision. This creates nothing less than an epochal rift, and our traditional understanding of law—indeed, the only understanding of law we currently have—is simply unfit to bridge it.

HOW DOES CODE RULE?

Recognition is central to warfare. When targeting, friend and foe, civilian and combatant, have to be distinguished. There are rules prescribing this. Making these distinctions is a cognitive task involving both perception and judgment. Militaries are primarily organizations that allow for a form of cognition. This cognition is organized in a particular way.

When I read a book, I bring myself and that book together in a very specific manner. I combine a language, an alphabet, pages in a book, perhaps a pair of glasses or contact lenses, my eyes, my brain, and my mind. When I read something on the screen of a computer, I combine its hardware, the algorithms of the program I use, perhaps a pair of glasses or contact lenses, my eyes, my brain, and my mind. The way we organize this reading is significant. It gives us new cognitive capabilities and new cognitive limitations. This combination of enabling and disabling comes with every use of cognitive technology: perusing a book, telling the time, using a computer, aiming with the sight of a gun. None of these acts merely depicts reality; our use of technology is never neutral. It always entails a guided form of cognition.

This guidance is analogous to the guidance of human behavior by the law. The law informs me about the consequences of my actions: when I read the law, I understand that I will be locked away in a prison if I intentionally take the life of another person. But the law also informs the consequences of my actions: the law and its institutions ensure that I am locked away once I have committed a murder. This, in turn, makes the law inform my conduct by dissuading me from committing murder through the prospect of legal penalties. The intransitive usage of "to inform" is linked to the transitive one: over and above its enabling capacity, the law also has agentic capacities. Law and technology come together in this double capacity, Hildebrandt suggests.11 Any use of technology is normative, as is any use of the law. Technically and legally guided cognition are both enabling and agentic.

Earlier, I pointed out a deceptive binary at work in the debate on LAWS: either heteronomy or autonomy, either inside or outside the law, either a human or a machine subject. Here is a third option: the subject is a situation-specific constellation of human and machine. If we accept, for the time being,
that it is the intertwining of enabling and constraining capabilities that characterizes both law and technology, we seem to have a larger research program, of which this chapter forms part: to map in detail how the way in which a human, a machine, and a piece of law are brought together with the world produces normative consequences. Let me introduce four technologies that help us to understand the way normativity is at work in the emergent form of war:

Robotics is about the combination of mechanical constructions, power sources, and algorithmic code in a machine that replaces humans. Current military applications include small tracked robots accompanying infantry or special forces, unmanned aerial vehicles for attack purposes, and unmanned naval vessels engaging enemy submarines. Future applications might include humanoid robots that act as soldiers. The emphasis in robotics is on a mechanical body that replaces the human body. Algorithmic code forms part of robotics but is not necessarily the key feature. The development of humanoid soldiers, for example, is still hampered by limited power supply.

In military contexts, artificial intelligence has a very wide range of uses. AI is a field in computer science that focuses on tasks that are easy for humans and hard for computers. AI assists in, inter alia, planning campaigns, making sense of intelligence, or finding and engaging targets. Examples stretch from the DART system for logistics planning, which has been in use since Operation Desert Storm in 1991, to the identification of enemies and non-supportive populations in Operation Enduring Freedom in Afghanistan (2001–2014). AI also enabled the development of advanced simulation systems for military training and systems capable of visual recognition. One way of describing AI is that it revolves around the mental states of artificial neural networks. These networks take their cue from understandings of how the human brain's neural networks function. The focus in AI is on replacing human cognition, or parts of it, with enhanced machine cognition.

In military contexts, neurotechnology might enhance and speed up the targeting process by combining unconscious human cognition with the processing power of computers. Experimental applications comprise, inter alia, threat detection and targeting systems, but also applications to assist veterans in recovering from PTSD. The idea is to combine algorithmic machines and human cognition in a way that leverages the advantages of each. In concrete terms, artificial neural networks and the neuronal signals of the human brain are pooled.

Recent developments in genomics have spawned a discussion of its potential military uses. One possibility that is within reach is the selection of soldiers based on their genetic characteristics. This process opens up new ways of reading the human and her capabilities. A more distant possibility is the genomic enhancement of soldiers. That process is analogous to programming an algorithm inside a human or, metaphorically, writing her. The human genome is altered as a programmer would alter software to better achieve a particular end.


What do these forms of technologically driven warfare have in common? It is the amalgamation of machine properties with human properties in an open architecture in which none of them can be isolated from any other.12 In robotics, we encounter a move out of the human flesh into machine bodies. From a human perspective, robotics is excarnation. (I will come back to what this might mean in the section "Why Do Code and Law Rule in Such Different Ways?") Similarly, AI is a move out of the human flesh and the human's embodied mental states into artificial mental states created by computers. From a human perspective, AI is also excarnation, if we understand the mental states thus outsourced to be attributes of our corporeal existence. Conversely, genomics is all about incarnation: it transfers the artifice of coding into the human flesh. Neurotechnology is the hard case: because it assembles artificial mental states and human neuronal signals, it wavers between excarnation and incarnation. If we are interested in regulation, this case is the most sensitive one—and the most productive to think about. This is all the more so as the United States, a dominant actor in the development of advanced weaponry, has emphasized man–machine collaboration in its use of military AI technologies,13 and Elon Musk has thrown his political and economic weight behind the development of "ultra high bandwidth brain-machine interfaces to connect humans and computers."14

This bleeding of machine and human into each other is not that much of a surprise, though, if we consider that all four technologies are the products of cybernetic thinking. Cybernetics embraces the entire field of control and communication theory and makes no distinction between machine and animal.15 Using the concepts of feedback and self-regulation, "cyberneticists claimed to be able to build the living agency of an organic being into artificial machinery."16 Think of robotics and AI, and you are thinking of a discipline in which machines are acquiring human-like autonomy. Think of genomics, and you are thinking of a discipline in which humans are acquiring a machine-like adaptability to extreme circumstances. Think of neurotechnology, and you are thinking of an area in which machines are acquiring the human property of speedy recognition—or where humans are acquiring the machine property of speedy electric-signal processing. By contrast, it is much easier for noncybernetic common sense to separate the gunsight from the soldier, the book from its reader, and the pair of glasses from its wearer.

Here, we experience the normative repercussions of a model of thinking. If we opt for cybernetics, we opt for a mode of thought that makes no distinction between humans and machines. What scares us about LAWS must then be assumed to be what scares us about man. And we find that the technology developed on the basis of cybernetic assumptions actually fuses humans and machines in a way that engenders regulatory problems. We try to solve these problems by limiting the development of technology (e.g., by arguing for
a ban on "autonomous" weapons), but the thinking that shapes them is left untouched. If a legal ban (or the absence of one) is normative, then cybernetics is a form of second-order normativity. Cybernetics is about "control" as such and therewith also about controlling control. It is about regulation and therewith also about the regulation of regulation. It is a regulatory thinking at a more foundational level than an ordinary international agreement banning a particular type of weapon. The question of "autonomous" AI applications is always already part and parcel of the question of cybernetics. The second-order character of this thinking means it slips from our sight, even as it holds us captive (see Figure 4.1).

Figure 4.1 Autonomous AI as part of cybernetics

As humans and machines bleed into each other, so too do two otherwise separable forms of normativity: that of the law, which enables and tempers human conduct, and that of technology, which enables and tempers machine conduct. At this point, we may become apprehensive about the issues this may give rise to further downstream in this inquiry. What does the framing of autonomous weapons by cybernetics mean for our ability to regulate them through the law?

I think it best to study in greater detail the normative implications of algorithmic code by looking at the way it has developed. To start with, I shall offer a very simple account of how humans, algorithms, and the machines that algorithms are built into interact. Consider the programmer of a simple robot. This programmer's challenge is to understand how the robot can be made to interact with the world in the way the programmer intends. Interaction between programmer and machine consists partly in the writing of code in a programming language (this would include algorithms) and partly in the programmer's observations of the robot's functioning. In simple terms, the process looks like this:

1. The human formulates a problem to be solved.
2. To solve the problem, the human builds an algorithmic machine.
3. Machine cognition of the world gives rise to machine behavior.
4. The human cognizes the machine behavior (testing).
5. The human rebuilds the algorithmic machine (debugging).
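
Rendered as a minimal sketch, this interaction might look like the following. The toy problem and all names are hypothetical, introduced purely for illustration; they are not taken from any of the systems discussed in this chapter.

```python
# Illustrative only: a toy version of the five-step loop described above.
# The problem (step 1): make the machine output a value close to 10.0.

def build_machine(gain):
    """Step 2: the human encodes an attempted solution as an algorithm."""
    def machine(world_input):
        # Step 3: machine cognition of the world gives rise to machine behavior.
        return world_input * gain
    return machine

def behavior_is_acceptable(output):
    """Step 4: the human cognizes (tests) the machine's behavior."""
    return abs(output - 10.0) < 0.1

gain = 1.0
machine = build_machine(gain)
while not behavior_is_acceptable(machine(2.0)):
    gain += 0.1                      # Step 5: debugging, i.e., rebuilding
    machine = build_machine(gain)

print(f"final gain chosen by the human: {gain:.1f}")
```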

Steps 2 to 4 can be repeated again and again in an ongoing feedback loop. This is where cybernetic assumptions play out in practice in that the whole process consists of feedback and self-regulation. Would this suggest that human intention drives the process from beginning to end? The hierarchically superimposed normativity would then be fully human and could be constrained by law, ethics, and custom. In this example, the answer is an unequivocal "yes." A simple algorithmic robot of this type would not be more of a problem for the laws of war than, say, a slingshot, a gun, or an anti-aircraft missile. There is a line connecting the human creator's intention and the final workings of the algorithmic machine. The machine is completely open to human understanding.

Today, AI has developed to a point where the conception of a machine working in a predictable, step-by-step fashion is no longer enough. First, machines have started to program themselves. "Programming computers to learn from experience should eventually eliminate the need for much of this detailed programming effort," Arthur Samuel, the U.S. AI pioneer, wrote in 1959.17 Taking its cue from Samuel, a widely adopted understanding of AI is that it gives computers the ability to learn without being explicitly programmed. This ability might be achieved through machine learning or the more recent development of deep learning. Both are instrumental in complex operations such as voice recognition, visual recognition, or any task that presupposes the extraction of patterns from huge amounts of data. Today, quite a few machine-learning applications are still deterministic. Had humans but world enough and time, they could understand every step a machine-learning program takes from its code and the set of data its training was based upon.

A decisive shift in AI took place when connectionist approaches started to gain ground in the 1980s, largely building on an analogy to the brain's neural networks. The characteristic of this type of machine learning is that the solution is not scripted beforehand. The algorithm is trained to find patterns in one set of data, and on this basis it can go on to identify patterns in another set of data. This decouples the programmer to quite some degree from the final form the program takes. While the programmer obviously knows the untrained algorithm and deliberately chooses the set of data to use for its training, he or she is ignorant of the dispositions the algorithm generates during training and during its actual use after having been trained. There is an element of emergence in the way neural networks are supposed to produce particular states.

But relying on emergence comes at a price for human understanding. "We frequently use philosophical, mathematical, and biologically inspired
techniques for building artificial, interactive, intelligent agents," AI researchers Theodorou, Wortham, and Bryson suggest. "Yet," they go on to say, "despite these well-motivated inspirations, the resulting intelligence is often developed as a black box, communicating no understanding of how the underlying real-time decision-making functions."18 In this respect, an increase of data leads to a decrease of information.19 In a 2017 article, Wortham and Theodorou state, "As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, non-specialist users have difficulty creating useful mental models of robot reasoning from observations of robot behaviour."20 Wortham and Theodorou propose to increase transparency by offering users feedback on the decision making of the robot so as to decrease the "fear, anxiety and mistrust of robots and AI in general" prompted by advances in AI.21

Fostering trust between user and AI application is crucial in the military context. "The ability to understand the reasoning behind an intelligent agent's actions can help to increase operator performance as the use of human-agent teams for military operations grows," write the authors of a 2015 study commissioned by the U.S. Army Research Laboratory.22 Improving understanding might simply mean increasing transparency—for example, through better displays. However, this could still leave human operators with overwhelmingly complex information concerning machine processing. Therefore, even more ambitiously, researchers try to make deep-learning machines actively explain themselves to humans. After all, the widespread use of the term "agent" underscores how important it is for humans to define the machine as something acting on their behalf.

Why is it so hard for humans to reconstruct AI reasoning, and why do we need machines that actively explain themselves to us? In the area of artificial neural networks, the "knowledge" involved in AI finds its expression in highly complex nonlinear mathematical formulae concerning what AI researchers term the "weight" of its artificial neuronal connections. A deep-learning application provides us with a result, yet the path to it is hard, if not impossible, for a human to reconstruct.23

The current DARPA program "Explainable Artificial Intelligence" (XAI) is an attempt to address this problem by making AI express itself to a greater degree in human language. Its aim is to make "future warfighters . . . understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners."24 The XAI program funds more than ten projects. One of them is run by PARC, a Xerox company, which describes itself as being in the "business of breakthroughs." The PARC project has teachers training AI systems just as they might have taught humans: starting with simple concepts before building deeper knowledge. "This shared
understanding, or ontology, between teacher and machine would provide the common knowledge needed to communicate," as principal researcher Mark Stefik stated in a 2017 interview.25

This reminds us of the basic assumption of cybernetics: no distinction is to be made between man and machine. To assume, as Stefik does, that natural and artificial intelligent agents may share an understanding of something implies that the natural and the artificial are similar enough that, in order to make machines able to explain themselves to humans, it is sufficient to teach them human concepts. In the XAI program and similar research ventures, understanding makes possible trust, which in turn allows for the full exploitation of machine agency for the purposes of war. However, explainable AI is still a machine, even though it is actively anthropomorphizing its output. Translating machine-generated mathematical formulae into machine-generated human language displaces the problem rather than solving it. Liljefors points out an analogous displacement when military operators are tasked with "translating" machine vision displayed on their screens.26 In the case of XAI, the question "how did the system arrive at this solution?" is transformed into the question "how did the system arrive at this linguistic articulation of mathematics?" But the capacity of XAI to express itself in human language obviously has a deceptive potential, for a naïve user might take the machine's use of human language to be an indication that this language is undergirded by something akin to human thinking. And what if AI gets strategic about its self-explanations, optimizing them so as to elicit maximal human trust and human acquiescence in AI decisions? There is an unaccounted-for element of good faith at work here.

Is there a normative void here? As soon as we lift our gaze from the lone programmer and look at the whole field of AI, we see that a normatively significant analogy has stepped in to fill this space. As mentioned above, for complex applications a common AI approach is one based on artificial neural networks. These networks are modeled on the human brain and its workings and manifest themselves as adaptive systems that change their structure during learning processes. Practical applications of so-called connectionist AI have been extraordinarily successful since the mid-1980s and have enabled today's military and security applications. Inspired by these successes, connectionist pioneers seem to have assumed that the analogy of human brain and computer works both ways. Not only does the analogy of the computer with the brain permit the development of powerful algorithms, but the analogy of the brain with the computer allows us to model the actual workings of the human brain.27

This analogy between computing and the functioning of the biological brain adds a second-order layer of normativity to the programmer's intention. What the programmer's knowledge and intention cannot account for, the
supposed nature of the brain can. The supposed nature of the brain delivers a second-order norm to the coding process, because it provides a norm that restrains our ability to posit norms. Logically, the supposed nature of the brain enables the programmer's coding choices.

To illustrate how powerful the normative implications of this bidirectional analogy are, consider the field of artificial life (AL), which has been part of the computer sciences since the 1950s. While AI models its programs on nature as brain, AL reaches further and models its programs on nature as evolution. Genetic programming is a "frontal assault" on "the problem of getting computers to learn to program" themselves.28 Computer programs are "bred," meaning that the program itself selects the "best fit" from a plurality of algorithms "competing" for survival in an evolutionary setting. Genetic programmers draw on research results from geneticists when elaborating their algorithms: "The use of mechanisms found in nature can lead to solutions to complex problems that by far outperform any man-made approaches," one researcher states.29 A 2006 research article proposes that genetic programming be used to avert cyberterrorist intrusions in real time, suggesting that the metaphor of the survival of the fittest is not lost on this school.30 On a macro level, a 2016 study proposes a genetic algorithm to integrate a plethora of weapons systems into a "system of systems" that would detect targets and direct lethal attacks against them. There is no question of full autonomy here; this is a (highly abstract) way of supporting decision making at an army-wide level.31

The assumed rationality of the nature of the brain guarantees what the intentions of the programmer no longer can; the DARPA projects on self-explaining AI illustrate as much. The system coauthors itself, a process underwritten by cybernetic assumptions about the rationality of nature. On the micro level, AI applications involving artificial neural networks depend on there being such things as emergent properties. On the macro level, these emergent properties are extended into a form of war. When it comes to the battlefield, this emergent form of war relies on the superiority of the cybernetic analogy and the applications it spawns. Its users are assumed to owe their success in battle to its technological superiority over enemy systems. Yet they are fighting a war not only with it but for it, expanding the space of cybernetic domination in the world.

But how does this combat take place more concretely? We need an example to demonstrate the normative effects that AI has in weapons systems under development. Neurotechnological targeting systems are a good case for this purpose. They tightly intertwine human and machine cognition and make them interact at very high speeds. In what follows, I will use a 2013 article by Jon Touryan, Anthony J. Ries, Paul Weber, and Laurie Gibson, "Integration of Automated Neural Processing into an Army-Relevant
Multitasking Simulation Environment," as a test case. In their contribution, the authors describe the attempt to integrate automated neural processing of visual information into a U.S. Army simulation environment in order to detect threats and targets.32 In the central part of their research, the researchers employ a simulation of a Manned-Ground Vehicle (a type of armored car) to test whether the process of screening its combat environment for targets can be made faster and more accurate through neurotechnology. "Instead of an operator manipulating the pan-tilt-zoom (PTZ) camera to scan the environment, images of the vehicle's surroundings, containing potential targets, would be rapidly presented and subsequently sorted based on the operator's neural response. The operator could then review the most relevant images for target confirmation."33

Compared to the manual scan of the vehicle's surroundings, the neurotechnological solution resulted in faster detection of targets. This performance enhancement persisted even when the task was taken out of the laboratory and into a simulated operation scenario and secondary tasks were assigned to the operator, such as scanning the environment for IEDs and responding to radio calls. While neurotechnology speeded up the search, it did not render it more accurate.34

From this contribution by Touryan et al., we may extract the various components of the decision-making model presupposed by the brain–computer interface (BCI) application the article discusses. Two steps need to be taken before a hypothetical combat mission may be carried out by this weapons system.

1. A classification algorithm is coded. A computer vision algorithm adapted to the particular BCI system has to be developed. It must be able to preselect "regions of interest" (ROIs) that may contain a target. Touryan et al. replace the function of this envisaged computer vision algorithm with a manual selection. To create this algorithm, human programmers would have to define the target in whose detection it is to assist.35
2. The application learns how an individual functions. Each individual soldier manning the system is put through a "rapid serial visual presentation" (RSVP) so as to construct individual neural classification models. In this process, the RSVP function teaches the machine how a particular individual's neuronal signals react to images of the target. This allows for a seamless incorporation of the eyes and neuronal signals of this particular soldier into the computer running the proper targeting software of the BCI.

In combat, the process would continue as follows:

3. The computer picks out interesting camera images. The computer vision prefiltering algorithm now processes images taken by cameras from the environment of the vehicle and prescreens ROIs to identify those that may contain a target.
4. These are shown to the soldier very rapidly. The prefiltered ROIs are shown to the soldier in an RSVP for 200 milliseconds each to elicit her neural response, which is monitored through scalp electrodes. The pace is too fast for her consciously to perceive this presentation.
5. Images triggering an unconscious response are shown once more to the soldier. Those ROIs that elicit strong neural reactions are shown again to the soldier for final target selection or confirmation that no target is present. This is the second algorithm at work within the system.
6. The soldier takes a conscious decision about whether to fire at the target.
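
The decision flow of steps 3 to 6 can be caricatured in a few lines of code. The sketch below is purely hypothetical: random numbers stand in for the computer-vision prefilter and the neural-response classifier described by Touryan et al., and nothing in it reflects their actual implementation.

```python
# Illustrative only: toy stand-ins (random numbers) replace the computer
# vision and neural-response components of the system described above.
import random
random.seed(0)

def prefilter_rois(images):
    # Step 3: a computer-vision algorithm prescreens regions of interest.
    return [img for img in images if random.random() > 0.5]

def unconscious_response(roi):
    # Step 4: stand-in for the classifier scoring the operator's neural
    # response to a ~200 ms presentation of the ROI.
    return random.random()

def conscious_confirmation(roi):
    # Step 6: stand-in for the soldier's conscious decision.
    return random.random() > 0.5

def targeting_cycle(images, threshold=0.8):
    rois = prefilter_rois(images)
    # Step 5: only ROIs eliciting strong unconscious responses are
    # re-presented to the soldier for conscious review.
    candidates = [roi for roi in rois if unconscious_response(roi) > threshold]
    return [roi for roi in candidates if conscious_confirmation(roi)]

print(targeting_cycle([f"image_{i:02d}" for i in range(20)]))
```

Even in this caricature, whatever reaches the conscious decision in step 6 has already been filtered twice: once by code and once by the operator's own unconscious response.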

Now, where is the problem? Is it not the conscious soldier who takes the final decision, reviewing what the system has identified as a target? Not quite. What the soldier reviews is an amalgamation of her own neural responses and the logic of the algorithmic system. This is not a review in the literal sense of seeing something for a second time. It is actually the first conscious view the soldier has of her unconscious view of the images, made accessible to her by the machine. So when the soldier is confronted with the images in the last step, she actually confronts herself. This situation is fundamentally different from a situation in which someone is told that an informant has identified a particular person as a combatant. That informant is another person, external to myself. What I confront in the Touryan et al. system is my unconscious, my inner self.36 This is connectionism in action: a property emerges in a neural network that merges neural connections, both human and artificial. What we get is a system in which truth emerges through the unconscious. In the history of warfare, there has been nothing quite like it.

At this stage, two thoughts suggest themselves. First, as humans, we are generally expected to be ourselves: to maintain the positions we have assumed.37 If I once assumed the position "this is a target," then I ought to present good reasons for going on to assume a different position ("this is not a target"). This is a form of confirmation bias that grows out of the system's peculiar self-referentiality. The burden of proof is placed on the consciously deciding human rather than the system that interprets her neuronal signals.

Second, the system appears to provide me with a version of myself that is better prepared to advise me on what to do. As an operator, is not my unconscious reaction closer to the studied object than my conscious reflection on it? Is it not a less mediated way to access the reality studied? Should I not defer to it and avoid contradicting it? Is that unconscious reaction, presumably based in what is evolutionarily advantageous, not a kind of "hard evidence" for the fact that what I see really is a target? This is what is known as self-verification, again enabled by technology. Based on existing research findings, it is reasonable to expect that confirmation bias and self-verification
increase in situations of time pressure.38 In an armored car driven through a hostile urban environment, a soldier entrusted with observation tasks will feel intense time pressure. The context for which the system was designed means that it has an even greater propensity to make errors. Both the imperative of coherence with oneself and the idea of the unconscious having immediate access to reality are deeply ingrained in Western culture. This makes it hard to evaluate systems such as the one proposed by Touryan and his colleagues from within Western culture. It looks as if the human operator is on top of the decision hierarchy, exercising something akin to free will, while really she is tethered, with an onto-psychological loyalty, to the system, with all its technological limitations.

Does this mean that AI weapons systems are invariably providing us with erroneous results? Not quite. They can get it right and identify exactly those combatants or military assets that the humans programming, ordering, fielding, and operating them want to see identified. But they can get it dramatically wrong as well. Misbehavior of AI systems that emerges early and that is glaring enough to be perceived by human operators as a violation of legal rules will immediately be treated as error and eliminated. What about deviations from human intention that are subtler and less likely to be noticed? These are the dangerous cases, and the problem is that we do not know how many of them there are. As long as the particular combination of human and machine cognition is perceived as being more "objective" than a purely human form of cognition, and as long as it includes a learning function, it will be much harder for humans to stand their judgmental ground against the AI system. Intelligent applications that are able to learn, used over longer periods of time, will develop a practice of their own. This practice is normative. Extrapolating further, it could morph into what international lawyers term "state practice" and might give rise to customary international law.39 We would also need to consider what happens when single AI applications mold the distinction between friend and foe as part of a wider AI network. Such "ambient computing" in warfare might amplify any bias inherent in the single application.

But, after all, this is about science. And science is about repeatability and testability. Why not subject any system to a long-term test and see how it performs? The answer is related to the insufficiency of available testing time and space. Let us first consider an early precursor to today's military AI: the computer-based anti-ballistic missile systems developed by the United States from the 1950s onward. The point of these systems was that their functionality in an actual combat situation could not be established: "Since we have no spare planets on which to fight trial nuclear wars, operational testing of a global ABM system is impossible," computer scientists Greg Nelson and David Redell argued in a 1984 debate on the feasibility of STARS, a U.S.
space-based defense system.40 There was no sufficiently extensive space, featuring all the characteristics of our globe, for realistic testing. In AI warfare, which is a slow-motion analog of the high-speed chess of nuclear warfare, it is time rather than space we lack. In order to have an appropriate testing ground, it is not so much a parallel globe we would require as a parallel history. As we lack one, we cannot be fully sure that we understand how the system works until we use it in real conflict. Parts of the world, parts of its population, and a particular period in history are put into the wager.41

For a more radical illustration of the problem, let us for a moment consider genomics. As I suggested earlier on, genomics provides for a range of potential military uses, of which some are more distant possibilities than others. Genomics may assist the military in identifying soldiers unfit for certain missions or soldiers particularly fit for others. This is all about selection: a technologically enabled reading of the human being and her abilities. Then there is the much more remote possibility of the genomic intervention in or engineering of soldiers. The very idea has met with weighty political counterarguments (for instance, that it would lead to "genetic discrimination" and the creation of a warrior class,42 that it amounts to the creep of enhancement technologies into the civilian sector,43 and that it gives rise to philosophical concerns about the risk of genetic determinism and genetic reductionism).44 Both selection and intervention would take place over a time frame that is much more extensive than in the case of robotics or neurotechnological targeting. This becomes particularly clear in the field of intervention. As human beings, genomically enhanced soldiers are protected by a layer of rights, including human rights, while machines are not. Any modification of their genetic setup takes on a life of its own—indeed, a life that is as protected as that of any human being. Any error, any undesired trait produced by genomic enhancement, will benefit from that protection. This further extends the historical period we would have to put into the experiment. We might "test" genomically enhanced soldiers, but we might not have the right to learn from the mistakes revealed by such tests. Any merger of humans and AI is marred by the massive unpredictability of outcomes over time. Compared to conventional weaponry, such systems require extraordinary amounts of faith to replace the knowledge we forgo because of the impossibility of proper testing.

I have demonstrated that an act involving an algorithmic weapons system is distinct from an act not involving such a system. This distinction is due to four factors. First, there is the cybernetic assumption that animals, including humans, and machines are the same in terms of control and communication. This assumption provides for an integration of human and machine of a kind and degree without parallel in non-AI systems. Second, in more sophisticated military AI, this integrating system coauthors itself through a learning capability. This brings to bear on the world norms that can no longer be traced
back to a human intention. Third, connectionist AI makes truth emerge. In the concrete application analyzed above, truth emerged in the human subconscious, and it required AI in order to become known. This truth cannot be validated outside or beyond AI. Fourth, its overall functioning over time cannot be subjected to realistic testing, and the human engaging with it will do so on the basis of faith rather than knowledge. In simple terms, the presence of an algorithmic machine consumes space otherwise available for the responsibility of potential human authors.

How does code rule, then? Through excarnation, normativity is drawn from assumptions about the human brain and made to act on the world.

HOW DOES THE LAW RULE?

As we have seen, some take the optimistic view that the current laws of war are sufficient to regulate the use of any type of weapon, including LAWS.45 This position rests on a view of law that sees everything as always already included within its purview. It is a fairly widespread assumption in the legal discipline, one that is not always properly interrogated. In this section, I shall consider whether algorithmic weapons systems actually can be subjected to the existing laws of war.

The subjection of algorithmic weapons to the laws of war might mean, on the most general level, that this body of law can be applied to them. A lawyer would ask whether the law would apply in the particular place where a controversial weapons system was used (applicability ratione loci), to the particular person using it (applicability ratione personae), and to the particular questions of law raised by its use (applicability ratione materiae). Formally, all that is required is that the conflict in which such weapons are used falls within the ambit of the laws of war and that the actors using such weapons are bound by it. But if this is all that is claimed by the optimists, then they are not claiming much. The more interesting question remains whether the law has been "implemented," that is, whether it has made a difference to actual conduct. This brings us to the question of responsibility. Valid law is a kind of call—is it possible to make someone (or, indeed, something) respond to it?

Let me now trace the issues raised by the question of responsibility when it comes to algorithmic weapons. If we think of law in war as a kind of call, it is also a call to use weapons responsibly. As things for human use, weapons are integrated into law's demand for responsibility. The law stipulates hierarchies in which military personnel are always superior to what military language calls materiel and are tasked with performing this superiority by maintaining a sufficient degree of consciousness in relation to that materiel. I shall present this stipulation of
hierarchy and the possibility of a sufficient degree of consciousness in some detail.

Let us start by asking who is responsible if an algorithmic weapons system is used in a way that, say, kills a disproportionately large number of civilians. The form of responsibility set out by criminal law presupposes that there is a human actor who commits a criminal act. We might consider the case of a soldier who uses an algorithmic system and ends up killing more civilians than is proportionate to the expected military advantages. We might consider the case of a commander ordering that such a system be used by a group of soldiers (this is called ordering responsibility). Or we might think of a general failing to control a soldier who has on her own initiative decided to use that system (this is known as command responsibility). While all three scenarios can entail individual responsibility under international criminal law as it applies today, it has also been proposed that in such cases the developer of such a system be held criminally responsible as well.46 This is normally not possible under the law as it stands today. Given the difficulties of lawmaking that we encountered earlier in this chapter, it might not become law in the near future either.

In broad terms, legal responsibility requires, first, the presence of an act—the actus reus, in this case the killing of a disproportionate number of civilians—and, second, a state of mind within the acting person: the mens rea. The disproportionate killing of civilians is criminalized in international law, and the challenge would be to find evidence linking a person or group of persons to that act. Let us assume that the prosecutor has identified a person who has acted in a way that meets the definition of a crime. For a conviction, however, the prosecutor also has to show that the person was in the requisite state of mind when acting. This state is often expressed in terms of a person's purposive, knowing, reckless, or negligent commission of the act. Relating to the examples above, international humanitarian law sets out the requirement that, for the soldier or general to be guilty of a war crime, they must have committed the act willfully.47 A person who issues an order with the awareness of a substantial likelihood that a crime will be committed in executing the order has the required mental state for them to be held responsible for ordering the crime under international criminal law.48

Assigning responsibility to individuals under international criminal law is a relatively recent phenomenon. While enormous sums have been invested in creating tribunals and courts for that purpose, only very few persons have been indicted and convicted. When the treaties that articulate the laws of war were written, the default locus of responsibility was the state, not the individual. This is still the standard mode of liability under international law. For a state to be responsible for what international law calls "internationally wrongful conduct," that conduct needs to be attributable to a state and wrongful
according to the rules of international law. Let us assume that a commander in a regular state army willfully launches an attack killing a disproportionate number of civilians. Here, both criteria are fulfilled. As the commander's army is an organ of the state, the attack would be attributed to the state, and thus the legal requirement of willful commission of the act is fulfilled.49 Note how the human actor's state of mind is a precondition for state responsibility and how any responsibility is limited by human consciousness.

Beyond the criminal responsibility of individuals and the responsibility of states, there is another dimension of responsibility that we should briefly touch upon. Suppose a producer of an algorithmic weapons system sells it to a state knowing that it is programmed in such a way that it cannot be used to kill a disproportionate number of civilians in any attack. What if the system is used and does just that? Even here, questions of the actor's state of mind are material. Is it required that the producer knows that the system malfunctions in certain situations? Or is it sufficient that the producer should have known about it? The wording of the particular contract in the light of contract law and tort law will provide answers in cases like this. While an analysis of military procurement contracting is beyond the scope of this chapter, suffice it to say that placing the responsibility for breaches of the laws of war on producers would obviously dampen the innovative capacity of the arms industry.

The dual requirement that an act be both wrongful under the law and performed by an actor in a particular state of mind goes back a long way in legal history. Now, let us assume for a moment that we wanted a speedier assessment of legal responsibility, and we thought the state of mind analysis to be far too complicated. This was exactly the situation lawyers and lawmakers encountered in the process of industrialization. Methods of production became much more complex and risky, and the role played by single humans in them became increasingly opaque. This made legal responsibility hard to allocate. The response was to subject certain industrial activities to strict liability. When an operator is strictly liable for, say, the operation of a nuclear power plant, he or she is liable for damages caused by that process regardless of whether he or she has the intention of causing damage or is culpable in some way. All that is needed is a nexus between the process and the damage. The operator's state of mind is no longer decisive.

What if international lawmakers imposed strict liability on the users of algorithmic weapons systems? This would make the use of these systems very risky for militaries, and it would discourage their acquisition. In effect, it would be close to an outright prohibition of these systems. As we saw in the introduction to this chapter, a dominant group of states is averse to regulatory intervention in the LAWS sector. Therefore, the application of strict liability seems unlikely in the near future, and traditional responsibility rules continue to apply, including a mens rea requirement. This offers a degree of
liberty from legal responsibility for producers of such systems and for the militaries that are, or might be, using them.

In the preceding section on the rule of code, I gave a number of reasons why an act involving an algorithmic weapons system is distinct from an act not involving such a system. First, there is an integration of human and machine of a kind and degree that is obviously absent in non-AI systems. Second, this integrating system coauthors itself through its learning capacity, which means that it comes to possess a normativity that can no longer be traced back to any intention originating in a human designer. Third, its overall functioning over time cannot be subjected to realistic testing, and the human engaging with it will do so on the basis of faith rather than on the basis of knowledge. In simple terms, the presence of an algorithmic machine consumes space otherwise available for the responsibility of potential human authors. These three interrelated factors militate against criminal accountability and therewith legal responsibility in a number of ways.

To begin with, it will be harder to demonstrate individual intention in cases in which a human interacts with a machine. This will make it more difficult to attribute both individual responsibility under international criminal law and state responsibility under the laws of war. First, it will be hard for any single human or collective of humans to fully understand the way this assemblage works and embrace this understanding with human intention. Second, as we are having a hard time defining the border between human and machine, we must allow for a learning machine to augment any intention its developer or user might have had when acting. Drawing on both reasons, any human defendant in their right mind would, in a courtroom reconstruction, inculpate the machine. It will be hard, if not impossible, for a judge to separate the evidence concerning the mens rea of the human–machine assemblage into what is attributable to the human and what to the machine. The benefit of the doubt enters at this point. To the extent that the judge cannot separate these two factors, the human involved in the act will go free.

Human–machine assemblages also give rise, albeit to a lesser degree, to questions about the criminal act. Human authorship might be downgraded in various ways in cases in which an algorithmic weapons system coauthors its own acts. An algorithmic weapons system thus participating in a criminalized act transforms any possible human perpetrator into no more than an accomplice. The human contribution might then be judged as less important and as meriting a less severe sentence. This will make it more difficult to establish individual responsibility under international criminal law.

Where does this leave us? Law tends to individualize, and the rule of law assumes a self-contained human being. Law rules through a process of incarnation. More specifically, it incarnates itself in an individual human who is supposed to enact it in the world. To the degree that the human and
the machine bleed into each other, the law loses its creator and addressee. While the law continues to be valid, it is impossible to apply it to the human–machine assemblages that advanced algorithmic weapons systems are and will be. Abstractly, the law remains in force; concretely, the addressee of this force of law no longer exists.

What more could a government with a technologically developed military wish for? This government could argue that algorithmic weapons systems come within the ambit of existing law and that no new regulation is called for. It could also emphasize that it will not use such systems in a fully automatic mode and will always keep humans in the decision-making loop.50 Integrating humans and algorithmic systems might sound like the responsible thing to do from a commonsensical point of view. What it really does is reduce the legal responsibility of this government and its agents because it creates a gray zone in which the question of who is responsible for an action becomes more difficult to answer.

WHY DO CODE AND LAW RULE IN SUCH DIFFERENT WAYS?

We have now come to a point at which the rule of the algorithmic code and the rule of law appear to be incompatible. Law and AI appear to belong to different normative orders (see Figure 4.2). But is it just that the law is old-fashioned in its assumptions? The current resistance of states apart, can we at all envisage a law that would effectively address the question of human–machine assemblages in its rules? Even if AI remains tethered to cybernetic normativity and law to a particular human form of normativity, even if humans are the most privileged mental agent in the latter and just another kind of mental agent in the former, is there a way to unite the two in a common order? I do not think so. There is a fundamental difference in the way code and law relate to the human. It is about the sequence of steps taken to implement norms.

To uncover this difference, I would like to employ the opposition of incarnate and excarnate law: two opposing concepts of the rule of law introduced by the German cultural theorists Aleida and Jan Assmann.51 Drawing on their work, we may distinguish incarnate from excarnate law. In certain realms of antiquity, the ruler was the law, and no codification could circumscribe his normative sovereignty. This principle would be expressed with the concept of nomos empsychos or lex animata. The king embodies the law—the law is literally incarnate—and it is through him that it "enters into force." Codification, such as the Codex Hammurabi, would then only be a reflection, a written shadow of that rule of law.


Figure 4.2 Monotheism is to law what cybernetics is to AI

With the advent of monotheism, law took on a new form. The performative literality of the Torah turns this relation on its head. The law is given by the divine to humans in written form: it is excarnate, and it is in force because it is written. This is canonical law, writes Assmann; nothing may be added or subtracted, and the written law must constantly be studied and internalized in its totality.52 The individual, says Assmann, has to "re-incarnate" the language of the Bible so as to be able to enact it in his or her daily life.53 The combination of written law and individual study is central to this process. Law's rule moves from written scripture through the studying mind and into bodily compliance, from the excarnate to the incarnate:

(1) God → (2) excarnate codification → (3) human study → (4) incarnation: lawful human conduct in the world

Human study and the incarnation of the studied law in the life of its students form a circular and iterative process. We are supposed to learn not only from the law but also from our daily life, and we are supposed to treat the problems we encounter there and the mistakes we make as questions in our next round of study. Through study, monotheistic code was (and still is) internalized in a conscious and individual effort, organized collectively through religious communities such as the church. In a similar manner, Roman law was codified on Emperor Justinian's order in a number of books, tellingly called the digests, on which no commentaries were to be made. The digests were to be the ultimate textual interface in the process of reincarnating the law through human study.


With the combination of printing technology and Protestantism in early modernity, the process of reincarnating the excarnate law proliferated among Christians, with less and less of a role for teachers and priests. With cheaper Bibles and increased rates of literacy, the letter of the law became accessible and the authority to incarnate it without an intermediary spread. In the secular domain, legal codifications of the modern period such as the Code Napoleon or the Bürgerliches Gesetzbuch testify to this aspiration of egalitarian access to printed law.

The laws of war may serve as a contemporary example. From the early eighteenth century onward, there was a movement to codify these laws, first in academic works and military manuals addressing a nation's armed forces and, somewhat later, also in international law treaties. This means that we do have a body of norms that may count as a code in the monotheistic model: excarnate, abstract, and of general reach. In treaties on the laws of war, states obliged themselves to teach international humanitarian law to their armed forces.54 This obligation to provide for the study of the law applies in peacetime as well as during armed conflict. This conforms well with the monotheistic model according to which the written, excarnate law needs to be studied to be incarnated in the body of the soldier and the practices emerging from her. Specific institutions such as the International Committee of the Red Cross and the San Remo Institute, as well as specialized courses at military academies, emerged around this need for the reincarnation of the law in war—equivalent to the yeshivot, madrasas, or theological schools and seminaries in Christianity. International law experts who specialize in these laws assume the role of teachers and provide opinions for the updating of codifications, manuals, and legislation. In all these monotheistically structured processes, code and human are clearly separated.

This changes with the advent of artificial intelligence. Side by side with humans, an artificial agent capable of studying norms in the form of algorithms and other code emerges, an agent that acts in the world and learns from its own mistakes. Consider an AI application with a learning function (such as the one proposed by Touryan et al. analyzed in the section "How Does Code Rule?"). Let us first isolate what we may tentatively call its nonhuman elements. We would get the following process:

(1) Nature → (2) code → (3) study of training data → (4) conduct: executes learned function on any data
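
Expressed as a minimal sketch, this process might look as follows. The toolkit and the toy data are illustrative assumptions, not drawn from the chapter's sources, which name no particular implementation.

```python
# Illustrative only: a toy rendering of steps (2)-(4) above.
from sklearn.neural_network import MLPClassifier

# (2) Code: the untrained network, fully written and known by humans.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)

# (3) Study of training data: examples that humans have already classified.
X_train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y_train = [0, 0, 1, 1]   # human-assigned labels

model.fit(X_train, y_train)

# The dispositions generated during training are weight matrices that no
# human wrote and that no human can readily read:
print([w.shape for w in model.coefs_])

# (4) Conduct: the learned function is executed on any data assigned to it.
print(model.predict([[0.85, 0.75], [0.15, 0.25]]))
```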

In connectionist constructs, nature assumes the place formerly taken by God, and it is nature that emerges in and through cybernetic processes as AI.55 Again, code is central to its mediating role. This time, though, it is the code of the cybernetic paradigm. There is not a great deal of similarity between this process and steps (2), (3), and (4) of the monotheistic process I adumbrated above. For step (2), there is a functional analogy between the code of AI and the codification of religious law, all other differences notwithstanding. Both types of code comprise commands, and both may be rendered in different languages. In step (3), however, it is not the law that is studied but a set of training data. What is so special about these data is that they are classified: at some earlier point in time, humans have sifted through them, attaching different labels, such as "this is a cat," "this is a car," or "this is an enemy combatant." This gives training data a normative dimension, just as code possesses a normative dimension. In step (4), code and study result in conduct. Crucially, this conduct is based on a hybrid normativity resulting from the interaction of humans and machines.

How does this process change when we consider humans and AI acting in conjunction when developing, operating, and using such an AI application? The number of steps increases:

(1) Nature → (2) human codes and assigns classified training data to the machine → (3) machine executes code and learns from training data → (4) machine presents learned function → (5) human decides whether to use the learned function in the outside world → (6) machine executes learned function on any data assigned to it
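
The same toy sketch can be extended to cover the added human steps. Again, every name, threshold, and data point below is a hypothetical assumption, not a description of any fielded system.

```python
# Illustrative only: a toy rendering of steps (2)-(6) above.
from sklearn.neural_network import MLPClassifier

# (2) A human codes the machine and assigns classified training data to it.
X_train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y_train = [0, 0, 1, 1]                      # human-assigned labels
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)

# (3) The machine executes the code and learns from the training data.
model.fit(X_train, y_train)

# (4)-(5) The machine presents the learned function; the human can judge it
# only by its outward behavior on held-out examples, not by reading its weights.
X_val, y_val = [[0.2, 0.2], [0.9, 0.9]], [0, 1]
human_approves = model.score(X_val, y_val) == 1.0

# (6) If approved, the machine executes the learned function on whatever
# data is subsequently assigned to it.
if human_approves:
    field_data = [[0.7, 0.95], [0.05, 0.3]]
    print(model.predict(field_data))
```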

How does this process compare to the process brought about by monotheism and adopted within secular law? While some of the elements of the monotheistic process remain recognizable, their function has changed. The overarching purpose of incarnating the excarnate law is obviously gone. The element of human study no longer assumes the pivotal position it holds in the monotheistic model. True, we might imagine study to play a role at two points. First, training data are data on which a pool of humans has performed a simple classification task. (As I mentioned, this might be of the type "this is a cat" or "this is a car.") While a person studying the law is conscious of the central role of that study in the process of living lawfully, there is no comparable consciousness in the case of the classifying human. On the contrary, this is a collective, anonymous, and decontextualized classification, cut off from the concrete use of the data thus classified. Put otherwise, the world is unconsciously preconfigured by the classifying human.

The second point is weightier. Suppose that the AI application is rolled out and used in the outside world. Surely, there would be some form of evaluation of how it functions. Would that not be a kind of study? Indeed, because we have a conscious human being trying to assess the actual conduct of the system as a whole against fundamental norms, including the laws of war, would that not be a study that most closely resembles the role of study in the monotheistic process?


But what would be the ultimate standard of such an evaluation? Here, the difference between monotheistic normativity and AI normativity is stark. In AI, it would be nature having become manifest through code—a nature that emerges by fusing human and artificial intelligence. This is what takes the place of law under the monotheistic form. Also, there is no way to separate human and artificial intelligence in an ex post facto evaluation. Human intelligence can no longer be isolated. If it is, it loses its ability to cognitively penetrate the functioning of the system. This is different from monotheistic normativity, according to which the law is written for humans, and the single human is able to incarnate it in her conduct.

Steuer sees the real danger in the social—and that includes the legal—"becoming locked into the technological to the point of indistinction."56 Here, my earlier point, in the section "How Does Code Rule?," about the impossibility of the full-scale testing of AI-based weapons systems attains new relevance. In that section, I argued that testing is impossible unless we put a period of history into the wager. Even if we were prepared to do so, there is no normative position from which a human could evaluate the system, or there is no way that the excarnate laws of war could be incarnated when a weapons system that merges human and artificial intelligence is used. When human judgment based on human law has become impossible, it is the judgment of a nonhuman intelligence that remains. Where there is a judgment that is beyond appeal, there worldly law has come to an end.57

CONCLUSION

This answers the question that drove this text. It is not possible to subject algorithmic forms of warfare to the law, be it the law of war or any other form of law. This is so because subjection to the law presupposes a monotheisti- cally structured process for its incarnation. As I have shown, there cannot be such a process in the case of weapons systems that fuse human and artificial intelligence. If we want to know how such a system works in practice, we need to test it on the scale relevant for its future use. As I have shown, this is impossible. For that reason, we cannot know whether it will operate in conformity with the law.

NOTES

1. Merel A. C. Ekelhof offers a mapping of the semantic issues arising in this process. Merel A. C. Ekelhof, “Complications of a Common Language: Why It Is So

Hard to Talk about Autonomous Weapons,” Journal of Conflict and Security Law 22, no. 2 (2017): 1–21. 2. Meeting of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, Report of the 2014 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), CCW/ MSP/2014/3, June 11, 2014, para. 34. 3. Fifth Review Conference of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, Report of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), CCW/CONF.V/2, June 10, 2016, para. 13. 4. Meeting of High Contracting Parties, Report of 2014 Informal Meeting, para. 34. 5. Seymour Deitchman et al., Air-Supported Anti-Infiltration Barrier (Alexandria, VA: Institute for Defense Analyses, Jason Division, 1966), 53. As early as 1966, a group of defense academics affiliated to the U.S. military proposed the large-scale use of electronic detection devices and computers directing aerial bombing to disrupt movements along the Ho Chi Minh Trail. 6. Fifth Review Conference, Report of 2016 Informal Meeting, para. 15. 7. A similar game can be played with the norm “Weapons systems may not be employed in a fully computerised manner” (Article 4 [4] of the Proposal for a Charter of Digital Fundamental Rights of the European Union, presented by civil society actors in 2016). It is hard to establish exactly what being “fully” computer- ized means. It might be so demanding as to be entirely unrealistic or so relaxed that the far-reaching dependence of many contemporary weapons systems on computing power would mean that they should already be prohibited today. 8. Christof Heyns, “Autonomous Weapons in Armed Conflict and the Right to a Dignified Life: An African Perspective,” South African Journal on Human Rights 33, no. 1 (2017): 48. 9. Ibid., 67. 10. Some delegations thought “that the current IHL rules are sufficient to regu- late the use of any type of weapon, including LAWS, other delegations questioned whether this would be the case.” Fifth Review Conference, Report of 2016 Informal Meeting, para. 16. 11. Mireille Hildebrandt’s research is a helpful point of departure for understand- ing the relationship between law and technology and their normative power. She develops a generic concept of normativity that she uses to account for the impact of both technologies and law on human interaction. Mireille Hildebrandt, “Legal and Technological Normativity: More (and Less) Than Twin Sisters,” Techné 12, no. 3 (2008): 169–83; “Law as Information in the Era of Data-Driven Agency,” Modern Law Review 79, no. 1 (2016): 1–30. 12. Geographers experienced a similar phenomenon when the space at the center of their research began to be increasingly affected by information and communica- tion technology. Rob Kitchin and Martin Dodge describe the emergence of a “code/

space," where a space depends on software being constantly made anew in reiterative and transformative practices. Rob Kitchin and Martin Dodge, Code/Space: Software and Everyday Life (Cambridge: MIT Press, 2011), 11. We should think of code as capable of doing to humans what it does to space (according to Kitchin and Dodge), resulting in a form of code/human. 13. Geoff Dyer, "US to Deploy Robot Combat Strategists," Financial Times, April 27, 2016, https://www.ft.com/content/29b93562-0c5f-11e6-b0f1-61f222853ff3; Richard Waters, "Musk's Brain-Hacking Ambitions Face Scientific Headaches," Financial Times, March 30, 2017, https://www.ft.com/content/64e70fac-155e-11e7-b0c1-37e417ee6c76. Robert Work, former U.S. deputy secretary of defense, suggests that the United States "will use artificial intelligence in the sense that it makes human decisions better. . . . Human-machine collaboration will give humans better information upon which to help make decisions." Quoted in Dyer, "US to Deploy Robot Combat Strategists." Also, DARPA has invested $60 million for research into an implantable chip connecting the human brain and a computer. Waters, "Musk's Brain-Hacking Ambitions." 14. Neuralink website, accessed May 9, 2019, https://www.neuralink.com/. 15. Norbert Wiener, Cybernetics, or Control and Communication in the Animal and the Machine (Oxford: Oxford University Press, 1948), 19. 16. Jessica Riskin, The Restless Clock: A History of the Centuries-Long Argument over What Makes Living Things Tick (Chicago, IL: University of Chicago Press, 2016), 296. 17. Arthur L. Samuel, "Some Studies in Machine Learning Using the Game of Checkers," IBM Journal of Research and Development 3, no. 3 (1959): 210. 18. Andreas Theodorou, Robert H. Wortham, and Joanna J. Bryson, "Designing and Implementing Transparency for Real Time Inspection of Autonomous Robots," Connection Science 29, no. 3 (2017): 230. 19. Liljefors, this volume, 132. 20. Robert H. Wortham and Andreas Theodorou, "Robot Transparency, Trust and Utility," Connection Science 29, no. 3 (2017): 242. 21. Ibid., 246. 22. Michael W. Boyce, Jessie Y. C. Chen, Anthony R. Selkowitz, and Shan G. Lakhmani, Agent Transparency for an Autonomous Squad Member (U.S. Army Research Laboratory, 2015), ii. 23. Can computers generate mathematical proof? There is an ongoing debate about this question within the discipline of mathematics, a problem similar to that of AI applications that remain opaque to expert users. 24. David Gunning, "Explainable Artificial Intelligence (XAI)," DARPA website, accessed May 9, 2019, https://www.darpa.mil/program/explainable-artificial-intelligence. 25. Mark Stefik quoted in Richard Waters, "Intelligent Machines Are Asked to Explain How Their Minds Work," Financial Times, July 20, 2017, https://www.ft.com/content/92e3f296-646c-11e7-8526-7b38dcaef614. 26. Liljefors, this volume, 146. 27. Jay F. Rosenberg, "Connectionism and Cognition," in Mind Design II: Philosophy, Psychology, Artificial Intelligence, ed. John Haugeland (Cambridge: MIT Press, 1990), 293.

28. John Koza, The Genetic Programming Paradigm: Genetically Breeding Populations of Computer Programs to Solve Problems (Cambridge: The MIT Press, 1992), 1. 29. Peter Schmutter, “Object-Oriented Ontogenic Programming,” abstract, 2002, available at https://www.researchgate.net/publication/28351433_Object-Oriented_ Ontogenetic_Programming. 30. James V. Hansen, Paul Benjamin Lowry, Rayman D. Meservy, and Daniel M. McDonald, “Genetic Programming for Prevention of Cyberterrorism through Dynamic and Evolving Intrusion Detection,” Decision Support Systems 43, no. 4 (2007): 1362–74. 31. Gene Lesinski, Steven M. Corns, and Cihan H. Dagli, “A Fuzzy Genetic Algorithm Approach to Generate and Assess Meta-Architectures for Non-Line of Site Fires Battlefield Capability,” IEEE Congress on Evolutionary Computation (2006): 2395–401. 32. Jon Touryan, Anthony J. Ries, Paul Weber, and Laurie Gibson, “Integration of Automated Neural Processing into an Army-Relevant Multitasking Simulation Environment,” Lecture Notes in Computer Science 8027 (2013): 774–82. Two of the four authors are affiliated to the U.S. Army Research Laboratory. The other two state their affiliation with SAIC, a technical, engineering, and enterprise IT services busi- ness that derives approximately 75 percent of its revenue from the U.S. Department of Defense. 33. Ibid., 775. 34. Ibid., 780. 35. Ibid., 776, 778. In the comparative experiments discussed by Touryan et al., the target was simply the image of “a soldier with a gun” (ibid.). This goes to show that the system is quite far from deployment: in a counterinsurgency environment, uniformed enemy soldiers carrying guns are rare. 36. Of course, the assumption that the unconscious provides access to an inner and true self is questionable. Yet it still has considerable normative appeal. The con- siderable popularity of neurotechnological applications suggests as much. 37. This form of authenticity still works as a social norm, although its premises are questionable. As Theodor W. Adorno pointed out, it is flawed to think of the indi- vidual as fully transparent to herself and able to choose herself. Theodor W. Adorno, The Jargon of Authenticity (Evanston, IL: Northwestern University Press, 1973), 70. 38. Deborah Kelemen and Evelyn Rosset, “The Human Function Compunction: Teleological Explanation in Adults,” Cognition 111, no. 1 (2009): 138–43. Kelemen and Rosset found that study participants in accelerated conditions judged significantly greater numbers of unwarranted teleological explanations as correct than did partici- pants who considered possible explanations under less time pressure. 39. A judgmental element called opinio juris is required before a sufficiently uniform and widespread state practice is turned into customary international law. It is part of the nature of bias that biased human judgment takes itself to be legitimate. Therefore, an opinio juris requirement will not act as a bar to creeping changes con- cerning who is regarded as targetable. 40. Rebecca Slayton, Arguments That Count: Physics, Computing and Missile Defense, 1949–2012 (Cambridge: MIT Press, 2013), 184.

41. Testing demands not only adequate spatial and temporal conditions. Cybernetic models are based on probability and large numbers. The smaller the number of objects to be modeled, the less reliable they are. The behavior of manned jet fighters in combat can be modeled for use in an unmanned fighter jet, as the number of objects—manned flights in this case—is high. This is not the case if we wish to model the difference the digitalization of a whole military campaign might make. Here, objects are very few in number. I am grateful to Daniel Steuer for pointing out this limitation to me. 42. Kenneth Ford and Clark Glymour, "The Enhanced Warfighter," Bulletin of the Atomic Scientists 70 (2014): 46. 43. Nicholas G. Evans and Jonathan D. Moreno, "Yesterday's War; Tomorrow's Technology: Peer Commentary on 'Ethical, Legal, Social and Policy Issues in the Use of Genomic Technologies by the US Military,' " Journal of Law and the Biosciences 2, no. 1 (2015): 79–84. 44. Jessica L. Roberts, "Good Soldiers Are Made, Not Born: The Dangers of Medicalizing Ability in the Military Use of Genetics," Journal of Law & the Biosciences 2, no. 1 (2015): 94. 45. Owen Bowcott, "UK Opposes International Ban on Developing 'Killer Robots,' " Guardian, April 13, 2015, https://www.theguardian.com/politics/2015/apr/13/uk-opposes-international-ban-on-developing-killer-robots. This is what some delegations to the 2016 expert meeting pointed out, in line with a tradition of thought that believes the law to be more flexible in its relations to new technologies than is commonly assumed. The UK Foreign Office is on record stating that "at present, we do not see the need for a prohibition on the use of Laws, as International Humanitarian Law already provides sufficient regulation for this area" (ibid.). 46. Stephen White, "Brave New World: Neurowarfare and the Limits of International Humanitarian Law," Cornell International Law Journal 41, no. 1 (2008): 209. 47. Article 85.3.b of Protocol Additional to the Geneva Conventions of August 12, 1949, and relating to the Protection of Victims of International Armed Conflicts, June 8, 1977, 1125 UNTS 3 (hereafter API). 48. Alexander Zahar, "Ordering," in The Oxford Companion to International Criminal Justice, ed. Antonio Cassese (Oxford: Oxford University Press, 2009), 447; this draws on the ICTY Appeals Chamber case of Blaskic. 49. Article 85.3.b. API. 50. Robert Work, former U.S. deputy secretary of defense, states that "our vision of our battle network is where the human will always be the one who makes the final decision on lethal action, with the possible exception of some defensive capabilities." Dyer, "US to Deploy Robot Combat Strategists." 51. Jan Assmann, Monotheismus und die Sprache der Gewalt (Vienna: Verlag Picus, 2006). In the following, I draw on a text by Jan Assmann presenting both his own and Aleida Assmann's contribution to what may be read as a joint argument. 52. Ibid., 48. 53. Ibid. 54. See, for example, Article 26 of the 1906 Geneva Convention and Article 27 of the 1929 Geneva Conventions, in A47 FC, A48 SC, A127 TC and A144 FC, A 83 AP1, A19 APII.

55. Of course, humans are also part of nature, which is why discussions of “nature” in the context of considering the nonhuman elements of AI give rise to issues of delimitation. The resulting definitions thus have only a provisional character. 56. Steuer, this volume, 36. 57. Sohn-Rethel saw as inherent to capitalism the development of an ever less reflective science and the growth of automatization, leading to “automatism incar- nate.” Translated and quoted by Steuer, this volume, 31.

Chapter 5
Law's Ends: On Algorithmic Warfare and Humanitarian Violence
Sara Kendall1

The laws of war have always answered two questions: When may one wage war? What is permissible in war?

And international law has always given two completely different answers to these questions, depending on who the enemy is.2

How international law might relate to new technologies and regulate their practices had been a pressing question long before the use of armed drones challenged conventional conceptions of warfare. In traditional accounts of armed conflict, the confrontation between enemies takes place on a terrestrial battlefield where the prospect of casualties is common to all parties. Technological developments produce asymmetry between parties, whether through new forms of ammunition, aerial bombardment by plane at the turn of the twentieth century, or contemporary drone warfare, and the effects of these asymmetries in a postcolonial frame have been widely documented.3 Emerging algorithmic and machine-learning technologies present further challenges, not only to the political dream of their regulation by law but also to the juridical form itself and its humanist presumptions. Law's temporal horizon, which adjudicates past events while aspiring to regulate the future, presumes a human relationship to time that these technologies bypass through parsing it in intervals that are not cognizable to human perception.4 When humans are unable to observe the phenomenon to be judged, human law lacks the optics to apprehend what lies beyond its reach. Writing in 1963, Hannah Arendt observed the anthropocentric way in which technological developments in space exploration made it unlikely "that man will encounter anything in the world around him that is not man-made and hence is not, in the last analysis, he himself in a different guise."5 By contrast, the prospect of algorithmic warfare suggests a limit to the anthropocentrism
of human law, evoking a runaway creation that cannot be contained by the order that produced it. Against this backdrop, the title of Gregor Noll’s contribution poses a pro- vocative question: does algorithmic warfare suggest the end of law? Here war by algorithm is also understood materially, manifesting in the development of lethal autonomous weapons systems (LAWS), where machines, rather than a human “in the loop,” would be responsible for targeting decisions. Noll con- tends that focusing on machines diverts our attention from the broader frame of digitalized forms of warfare. We should not overemphasize the threat posed by the material form of LAWS, he argues, but instead critically consider the conditions through which LAWS emerged: “the thinking that shapes them,”6 including algorithmic forms and the phenomenon of code. Noll answers his structuring question of whether it is possible to subject algorithmic warfare to law in the negative, arguing that artificial intelligence (AI) cannot be brought within law’s normative order. This chapter begins from Noll’s philosophical claims about the underlying thinking that shapes AI, such as the self-learning algorithms that comprise it, as well as their consequences for closing the space of human judgment presumed by law. Building upon these concerns, I turn to the substance of the law of armed conflict or international humanitarian law, situating it histori- cally to illustrate that even if it could grasp the phenomenon, this law would replicate and perpetuate the asymmetries that have accompanied its histori- cal development. The humanitarian dimension of this body of law is applied biopolitically, securing particular populations to the detriment of others, as seen in practice through the allegedly “distinctive” and “proportionate” use of drones.7 Building upon critical accounts of international humanitarian law’s origins and practices, I address the distinct temporality of these emerging weapons systems. The logic that shapes algorithmic forms aims to condense time into intervals beyond the capacity of human response, and the objective of LAWS to supplement human cognition threatens to exceed the laws of their own creators. As a means of condensing time in response to perceived threats, algo- rithmic warfare reveals a preemptive rationality. I conclude with a possible alternative: to recast preemption as an ethico-political exercise of human judgment. In their autonomous lethality, LAWS would foreclose human judg- ment on the battlefield. As with the U.S. military’s “Project Maven,” consid- ered here, resisting this foreclosure calls for a collective political response to interrogate and unsettle the processes that contribute to the emergence of autonomous weapons. At the time of writing, war by algorithm appears as an imagined future, but one whose prospect becomes increasingly likely through incremental technological developments in intelligence gathering and analy- sis that may not treat autonomy as their objective. Enhancing warfare through

algorithms and machine learning in the name of humanitarian ends—such as more precise targeting, leading to more proportionate numbers of casualties— may in fact hasten the advent of a violence that law is unable to contain.

LAW’S ENDS

Considering “the end of law” as Noll does invites two questions: what do we mean by “end,” and what do we mean by “law”? We might think of law’s end as a kind of closure or termination, when law is incapable of diminish- ing or regulating forms of violence. But law’s end might also refer to its telos or objective. Here law’s telos may in fact be the production of a world where its enforcement is no longer necessary. In the political dream of per- fect compliance, law’s subjects could police themselves. Does it then matter whether its subjects are humans or machines or indistinguishable within a shared cybernetic relation? What would legal subjectivity look like beyond the human? When subjects inaugurate their own laws, are they more likely to comply with them? If autonomy signifies the unity between the law-giving and the law-abiding subject, then at an extreme of this logic, LAWS may no longer appear as a threat but rather as a perfect state of self-regulation, as auto nomos.8 Questions pile upon questions as we contemplate an imagined future that can only be addressed speculatively and from the limited horizon of the present. The first understanding of law’s end foregrounds the limits or failure of law, whereas the second suggests its ambitions. The scholarly literature on LAWS reveals a deep anxiety concerning law’s capacity to respond to these weapons: they are harbingers of a dystopian future that law must adapt to if it is to remain relevant; more specifically, they must be brought under inter- national humanitarian law; they should be banned or controlled by treaties; accountability mechanisms for LAWS must be established; they are ethically unacceptable; they must always be governed by “meaningful human con- trol.” Shared principles emerge across these different accounts. For example, one recent publication argues for adopting preventive security governance frameworks through international law,9 another for developing ethical guide- lines to ensure the presence of “meaningful human control.”10 Autonomous weapons must not operate as “LAWS unto themselves” but must instead be subsumed under a law not of their own making.11 Such approaches illustrate the paradox that Noll points out: “An autonomous weapons system subjected to the heteronomos of the law would no longer be an autonomous weapons system at all.”12 The ancient Greek heteros refers to the other of two, and here a difference is drawn: the nomos of autonomous weapons systems is not the nomos of law, and in this sense law’s end is its failure to subsume algorithmic

warfare under its own categories. Put another way, law offers no Grundnorm that governs beyond human cognition, as law and AI “appear to belong to different normative orders,” if AI can indeed be brought under any normative order.13 We seem to have arrived at law’s limits. The second understanding of “law’s end,” as its telos, is more open-ended and multiple. To remain with the examples above, where law appears as something of a deus ex machina brought in to resolve a dystopian narra- tive drifting beyond human mastery, its end is to constrain or contain or to regulate violence. A particularly provocative illustration is found in the short film Slaughterbots, widely disseminated on social media, which ends with a call for a ban treaty.14 This commonly shared presumption concerning law’s objective or telos undergirds the law of armed conflict, often tellingly referred to as international humanitarian law. The dystopian narrative of LAWS is futural and speculative, but in this imagined future law’s end would be to bring LAWS under its authority or, put another way, within its (humanitarian) jurisdiction. Noll’s reading of law’s end in this second sense, as objective or telos, builds upon the distinction between the incarnate and excarnate. In the mono- theistic normative frame supporting the legal order, he contends, the objective of law is to “incarnate” the external, a-corporeal, written command, code, or statute—“excarnate” law—through study and compliance. Incarnating law in this way takes place through human consciousness, as has been the case from the emergence of monotheism through secular codified law, includ- ing the laws of armed conflict. In this sense, the end of law as its ambition or objective—incarnation—also reveals its limits: law cannot be extended beyond human consciousness. Law’s two ends described above—as either ambition here or as limit above—seem to meet within this reading, where its imbrication with human consciousness through study and compliance reveals the outer borders of its jurisdiction. We arrive at the end of a law unable to achieve its end-as-telos. The meaning of law at stake in Noll’s account excludes what cannot be subsumed under this structure of incarnation. Law is understood as guiding (human) behavior; for example, in the context of armed conflict, it is “a kind of call . . . to use weapons responsibly.”15 Responsibility is tied to human cog- nition (as with state of mind, knowledge, and intent), which is complicated by the shared agency of algorithmic human–machine assemblages. If law is a call, it operates rhetorically as a mode of address, directed to particular subjects who must be capable of responding to it and, in turn, of being held responsible. This mode of address is severed by the logic of code, where excarnate commands are directed outward “and made to act on the world.”16 Yet even as law’s call is embodied by the human subject, there is no rule for the application of rules: what is required is the even more thoroughly human capacity for judgment and

the response to this call by way of interpretation. The space of human cogni- tion is arguably a space of judgment, where humans may respond to law’s call of proportionality by affixing ratios of civilian to combatant deaths and deem- ing them proportionate.17 There is a lingering question of whether the law at stake here—the law of armed conflict or international humanitarian law—can be properly humanitarian in the first place, protecting the bare human irrespec- tive of politics, history, and membership within a particular population. Noll illustrates how LAWS cannot be brought under law, but even if they could be, what law is this, and what are its ends?

EMPLACING LAW

Although law in general can be described as a mode of address requiring human cognition and uptake, it is also a product of culture, linked with partic- ular regions of production and application that have largely settled in relation to modern state forms.18 Monotheistic normativity locates law in a Western frame, from the Torah to Roman and canon law and through the modern period of legal codification.19 It does not appear to account for precolonial legal orders or other normative cosmologies, as it assumes a Judeo-Christian subjectivity that privileges individual consciousness in relation to the legal command.20 The figure of the sovereign sits behind the Western legal imagi- nary as a constituent and ordering power much like a monotheistic deity. This model of divine authority, positing the excarnate law that must be enfleshed through the subject, also serves as an analogue for secular sovereign author- ity, the structure underpinning contemporary legal systems around the world through a history of complex encounters between empires—an “interimperial legal politics”21—as well as through the export and imposition of European legal ideologies, practices, and institutions across extra-European territories. Locating monotheistic normativity in this way suggests that law’s call is a product of historical formations that shape how, by whom, and under what material conditions it is received. In international law, the subject respond- ing to law’s call has traditionally been the figure of the state—a composite of discrete and cognizing human subjects, yet with collective attribution of compliance or noncompliance to the state itself.22 Meanwhile, the theologi- cal analogue of the sovereign is unsettled by secular legal orders, which may entail more of a “partial, contested or shared sovereignty,”23 and much inter- national legal theory has addressed the tensions that attend the creation of a legal order derived from unruly and differentially sovereign states.24 Critical scholarship in international law has noted how the dual fictions of sovereign equality and territorial integrity sustain the field’s colonial inheritance in present dynamics among states.25

A contemporary instance of how international law is taken up by states in problematic and potentially neo-imperial ways may be illustrated through the critical lens of “contingent sovereignty,” a diagnosis of “the idea that in certain key circumstances—in particular, when states harbour terrorists or seek to acquire weapons of mass destruction—norms of sovereignty do not apply.”26 Here sovereignty is bound up with effective territorial control, and the international order is presented not as a constellation of sovereign equals but rather as ranked by the relative capacity of states to handle threats within their own borders. This ideological framing of threat becomes political with the question of who determines this capacity, whether an international insti- tution such as the United Nations, a set of strong states, or even an isolated hegemon. The “unable or unwilling” theory advanced by the United States in various policy documents and diplomatic circles is one such instance of “con- tingent sovereignty” used to justify defensive military interventions.27 For example, a leaked 2011 U.S. Department of Justice white paper addressing the legality of a proposed targeted assassination of a U.S.–Yemeni national asserted that

a lethal operation in a foreign nation would be consistent with international legal principles of sovereignty and neutrality if it were conducted, for example, with the consent of the host nation's government or after a determination that the host nation is unable or unwilling to suppress the threat posed by the individual targeted.28

This passive grammatical construction leaves open the prospect that the United States may empower itself to make this determination, possibly in violation of the UN Charter.29 As a claim to military intervention based on self-defense, the "unable or unwilling" theory is precisely the kind of logic that might be used to unleash LAWS upon a territory that is deemed "ungoverned" by a weak sovereign authority.30 Here the pliant contours of the "everywhere war"31 seemingly operate out of alignment with the vision of Westphalian sovereignty undergirding the international legal system. As Elden claims, "The complete or partial absence of sovereign power has been rescripted as a global danger, justifying intervention."32 Absent sovereign equality and territorial integrity, the present political geography of the international legal order shares parallels with the hierarchical thinking of the "standard of civilization" logic used to justify territorial incursions throughout the colonial period.33 Is it the end of (international) law to undo and equalize these imbalances, or is it perhaps too much a product of them to adequately provide redress? Even if LAWS could be brought under law, whose ends would this serve? If we grant that the structuring dynamics of law's emergence continue to inform the present, then the history of the body of modern law governing
armed conflict is consequential, as it is most often invoked in the scholarly literature as a possible framework for governing LAWS. This specific sub- field of international law emerged during the late colonial period as a means of restricting the violence of armed conflict between European powers.34 What is now referred to as international humanitarian law or the law of armed conflict underwent an initial process of codification through treaties in the mid- to late nineteenth century. During roughly the same period, many of these powers were also engaged in finding potential treaty-based solutions among themselves for preempting resource disputes outside Europe in what were or would become colonial territories. The period of treaty making that sought to restrict warfare between European powers also overlapped with the use of international legal forms to secure colonial possessions. For example, the General Act of the 1884–1885 Conference of Berlin declares that African territories “belonging to” a signatory shall remain neutral in the event that the signatory is involved in an armed conflict.35 The relevant treaty article advancing a “declaration of neutrality” formed part of a larger objective of parceling out the African continent to European powers while preserving intra-European harmony and trading relations. Through the emerging law of armed conflict, as well as with international agreements around colonial possession, international law was produced in the interests of containing violence within and between certain entities while permitting its enactment elsewhere. This imbrication with colonial interests is consequential for the develop- ment of international humanitarian law in the late nineteenth and early twen- tieth centuries. Historical efforts to regulate emerging methods and weapons of armed conflict in the interwar period harbored presumptions about which populations deserved protection, and in this context “civilians” primarily designated populations within intra-European conflicts to the exclusion of colonized noncombatants.36 Aspects of this thinking have continued into the present, where individuals receive differential treatment based on categories such as population and territory. For example, scholars have noted how contemporary drone warfare has reinscribed colonial logics in the “verti- cal battlespace” above territory whose sovereignty is deemed conditional.37 Anthropologist Hugh Gusterson contends that drones “can be used only against countries that lack the technological sophistication to shoot down the slow-moving planes and whose internal affairs, conforming to Western stereotypes of ‘failed states,’ provide a pretext for incursion that is as persua- sive to liberal interventionists today as the white man’s burden was to their Victorian ancestors.”38 Just as with the development of aerial bombardment in the early twentieth century, first enacted by an Italian aviator outside Tripoli, the emergence of armed drones around the turn of the millennium illustrates the technological asymmetry that will accompany the development of LAWS

as well. Contemporary asymmetries develop in a context where conceptions of territory appear less attached to traditional conceptions of sovereignty, whether through notions of "contingent sovereignty" or the "responsibility to protect." The challenge of thinking through LAWS is the challenge of speculative reasoning more broadly, but as a field that responds to the new by way of analogy, law would approach LAWS by considering relations of likeness in bringing them under its jurisdiction.39 The development of LAWS is meant to increase targeting precision and to mitigate the risk to a state's own population, including its military personnel, which makes it analogous in certain respects to the use of armed drones. Recent scholarship notes how "unmanned or human-replacing weapons systems first took the form of armed drones and other remote-controlled devices," enabling human absence from the battlefield.40 As with armed drones, however, the development of AI-based weapons systems would deepen the asymmetry of modern warfare, as some states and their attendant populations are able to mitigate risk more readily than others through further technological development. Within states, it may be that the risk burden is shifted from the military to civilians, as Grégoire Chamayou points out in relation to armed drones: "The paradox is that hyperprotection of military personnel tends to compromise the traditional social division of danger in which soldiers are at risk and civilians are protected. By maximizing the protection of military lives and making the inviolability of its ' zone' the mark of its power, a state that uses drones tends to divert reprisals toward its own population."41 At stake in practice is not only whether LAWS can be subsumed under law, a philosophical matter entailing what law requires as a cognitive response, but also the extent to which relevant law could be applicable and made to apply as a matter of (geo)politics. Noll's argument stands with regard to law and the inhuman, yet against the backdrop of this uneven history and corresponding geographies of power, the human subject who incarnates the law appears as a privileged bearer of enforceable protections. If the law at stake is the law of armed conflict, as much scholarly debate around LAWS presumes, then the most important addressees of this law are strong states and their military personnel.42 The resulting hierarchical framing would seem to place military over civilians, as Chamayou notes; between civilians, the populations of sovereign states are prioritized over those whose sovereignty is "contingent" or otherwise compromised. It is inherent to this body of law that it inscribes these distinctions, as the law governing armed conflict notoriously enables a degree of violence even as it attempts to constrain it. As with humanitarianism more broadly, where beneficiaries are classified and managed according to particular governing logics,43 international humanitarian law categorizes its subjects in ways that
produce attendant hierarchies of life. The central principle of proportionality explicitly justifies the loss of civilian life as balanced against military neces- sity. This has led some commentators to observe how the law governing armed conflict in fact produces an “economy of violence” in which (state) violence is managed according to “an economy of calculations and justified as the least possible means.”44 The development of LAWS not only reflects an effort to improve upon fallible human systems, as its proponents claim, but also to minimize risk to certain actors, particularly citizens of powerful states or members of their militaries. As Sven Lindqvist darkly observes, “The laws of war protect enemies of the same race, class, and culture. The laws of war leave the foreign and the alien without protection.”45 While scholars of international humanitarian law might contest Lindqvist’s claim that discrimination is inscribed into the laws themselves, their selec- tive and discriminatory enforcement is widely noted. As with the “unable or unwilling” theory advanced by the United States, among other highly militarized states such as Canada, Australia, and Turkey, exceptions to the international legal framework have been asserted through the same legal ter- minology.46 Within this logic, the map of the world appears divided between states that are able to exert control over their territories and others that strug- gle, often for reasons tied to the residues of colonial governance structures and continuing economic exploitation. The experiment of LAWS will likely play out to the benefit of the former upon the territory of the latter, much as some populations are made to suffer the collective punishment of armed drone activity in their territory.47

PREEMPTIVE TEMPORALITY

The technological developments informing the emergence of new weapons systems for armed conflict are not only employed to minimize risk to particular populations, as I described above. They also illustrate a particular relationship to time, one that philosopher and communications theorist Brian Massumi characterizes as an "operative logic" or "tendency" of preemption.48 Preemption emerges prominently in the United States with the administration of George W. Bush and the so-called war on terror, but Massumi contends that it is not restricted to this historical moment or location. As with Noll's attention to algorithmic forms and code as the background thinking that shapes the turn to LAWS, Massumi is attuned to preemption as a temporal feature of our contemporary landscape. In nonmilitary applications such as high-frequency trading, algorithms are employed to hasten response time and "to get to the front of the electronic queue" in submitting, cancelling, and modifying purchasing orders.49 In military settings they also enable faster
data analysis, but an analysis oriented toward threat assessment, which brings them into a relationship with this preemptive tendency. Characterized by a concern for threats and security, preemption produces a surplus value of threat tied to an ominous sense of indeterminacy: “Being in the thick of war has been watered down and drawn out into an endless waiting, both sides poised for action.”50 The experience of temporality is of increasingly condensed intervals, accompanied by a will to preemptively modulate “action potential” and to draw out the risk-mitigating capacity of laying claim to smaller units of time. The political dream at stake is to “own” time in the sense of exerting increasing mastery over ever-smaller units of it. Massumi writes that in “network-centric” contemporary warfare,

the “real time” of war is now the formative infra-instant of suspended percep- tion. What are normally taken to be cognitive functions must telescope into that non-conscious interval. What would otherwise be cognition must zoom into the “blink” between consciously registered perceptions—and in the same moment zoom instantly out into a new form of awareness, a new collective consciousness.51

Such thinking illustrates the presumptive need to augment human capacity on the battlefield, whether through algorithmic enhancement of human cognition by machine intelligence or through neurotechnology’s combination of algo- rithms with human biological/neural capacities. This raises the question of the role for human judgment in relation to the nonconscious interval, the “blink” between the human capacity to perceive and act. If delegated to the machine, what arises is not comprehension and judgment but rather what Arendt called “brain power,” as distinct from the workings of a mind or intellect. “Elec- tronic brains share with all other machines the capacity to do man’s work better and faster than man,” she noted, yet carrying out their assigned tasks does not constitute the exercise of judgment.52 Writing over half a century ago, Arendt warned of the risk of losing sight of humanist considerations in the frenzied technological drive to secure an Archimedean point beyond the human, yet the human seems inescapable, “less likely ever to meet anything but himself and man-made things the more ardently he wishes to eliminate all anthropocentric considerations from his encounter with the non-human world around him.”53 It would seem that what is distinct here, in Noll’s diagnosis of the thinking that undergirds the prospect of algorithmic warfare, is the pros- pect of breaking free from the human through the singularity. While I noted at the outset that LAWS at this stage are speculative and futural, incremental steps have been taken in their development. Both AI and neurotechnological dimensions are apparent in a recent program of the U.S. Defense Department, initially known as the Algorithmic Warfare

Cross-Functional Team and informally as “Project Maven,” which was launched in April of 2017 with the objective of accelerating the department’s integration of big data, AI, and machine learning to produce “actionable intelligence.” Maven is the inaugural project of this “algorithmic warfare” initiative in the U.S. military.54 While this program is focused on intelligence rather than weapons systems, characterized by a human-in-the-loop rather than a human-out-of-the-loop form of LAWS, the underlying algorithmic thinking is the same. The use of drones for combat also evolved out of intel- ligence gathering, and critics of the integration of AI into military operations would have cause for concern about Project Maven paving the way—perhaps unintentionally—for future LAWS. The Algorithmic Warfare Cross-Functional Team emerged in the Office of the Under Secretary of Defense for Intelligence and was later brought under a new “Joint Artificial Intelligence Center” in the Defense Department.55 The project forms part of the “third offset” or “3OS” strategy to protect U.S. mili- tary advantage against rivals such as China and Russia, a strategy developed in 2014 to draw upon new technological capabilities in developing “col- laborative human-machine battle networks that synchronize simultaneous operations in space, air, sea, undersea, ground, and cyber domains.”56 What Massumi points out as a desire to maximize “action potential” in ever-smaller units of time is evident here: the concern with bringing operations into a simultaneous harmony among different parties to the assemblage helps the military to “own time” more forcefully and, with it, to gain advantage over its military competitors. The memorandum establishing Project Maven in 2017 emphasizes the need to “move much faster” in employing technological developments, with its aim “to turn the enormous volume of data available to DoD into actionable intelligence and insights at speed.”57 Deputy Secretary of Defense Robert Work describes relevant activities as ninety-day “sprints:” after the project team provides computer vision algorithms “for object detection, classifica- tion, and alerts for [full-motion video processing, exploitation and dissemina- tion],” he notes, “Further sprints will incorporate more advanced computer vision technology.”58 Among other things, Project Maven trains AI to recog- nize potential targets in drone footage by focusing on “computer vision” or the aspect of machine learning that autonomously extracts objects of interest from moving or still imagery using neural methods that are inspired by biol- ogy. Public statements of military personnel involved in the project distance it from autonomous weapons or autonomous surveillance systems, claiming instead that they are attempting to “free up time” so that humans can focus on other tasks: “we don’t want them to have to stare and count anymore.”59 The Department of Defense tells the narrative of Project Maven’s emergence as a story of augmentation: of supplementing the labor of an

overwhelmed, temporally lagging workforce with specialized entities that will help to speed up data processing. Speaking in July of 2017, the chief of the Algorithmic Warfare Cross-Functional Team claimed that AI would be used to “complement the human operator;”60 elsewhere machines are presented as “teammates” paired with humans to “capitalize on the unique capabilities that each brings to bear.” These teammates would work “sym- biotically” toward a shared end: namely, “to increase the ability of weapons systems to detect objects.”61 Figure 5.1, an icon appearing in a presentation by a Project Maven participant, oscillates between the benign and the absurd.62 Intent aside, this depiction of harmless machines employed “to help” appearing in a Defense Department presentation on Project Maven raises the question of who stands to benefit and who may suffer from this cybernetic experiment. That it unfolds incrementally rather than through the direct development of LAWS—on the grounds of assisting overworked employ- ees and with the objective of creating greater precision, a humanitarian end in line with the laws of armed conflict—does not diminish the pressing need to reflect upon the development of these practices through a machine- independent evaluation. As of December 2017, Project Maven’s machine augmentation of the slow human intelligence analyst was reportedly being used to support intelligence

Figure 5.1 The Project Maven seal. Image source: Tom Simonite, “Pentagon will expand AI project prompting protests at Google,” Wired, May 29, 2018 (available at https:// www.wired.com/story/googles-contentious-pentagon-project-is-likely-to-expand/)

operations in Africa and the Middle East.63 Such spaces of contemporary armed conflict are laden with histories of colonial intervention and techno- logical experimentation in warfare; here the smiling robots appear far more sinister. Bringing location and temporality together, the project seeks to pro- cess information more quickly than human consciousness in order to avoid delayed responses to changing circumstances on hostile and high-risk terri- tory abroad, where human inhabitants appear as the source of risk to remote populations on whose behalf the intelligence is being gathered. There is a lingering question of what constituency this project serves: in a statement shortly after its founding, the chief of Project Maven stated that the team was exploring “[how] best to engage industry [to] advantage the taxpayer and the warfighter, who wants the best algorithms that exist to augment and comple- ment the work he does.”64 Within this vision, the machine augments the human, and private enter- prise figures as a resource for the military. In 2015, the Defense Department established a Defense Innovation Unit in Silicon Valley, California, “to partner with private industry to rapidly source private industry AI solutions to military problem sets.”65 The initiative draws private-sector expertise into military development, as has long been the practice in the United States, but with apparently greater urgency. Robert Work’s memorandum establishing Project Maven makes no mention of private-sector assistance apart from an oblique reference to the need to “field technology” for augmenting existing operations. Yet according to military academics, forming partnerships with private-sector actors is regarded as “key to obtaining the technology required to implement the 3OS. Many of the advancements in AI and other emerging technologies are a result of significant investment by private industry for commercial applications.”66 By March 2018, the skilled “partner” referenced in various press releases was revealed to be Google.67 The disclosure prompted widespread protests among Google employees. Some employees resigned, and thousands of others signed a petition demand- ing termination of the Project Maven contract.68 In response, the corpora- tion not only decided against renewing their contract; it also disseminated “principles for AI” that state the company would not develop intelligence for weapons or surveillance. In contrast to the military’s urgent desire to hasten its conquest of ever-smaller units of processing time to preempt threats, the resistance is located in a different form of preemption: namely, preventing their complicity in producing an untenable future. The arc of this temporal horizon appears longer and more generalized, extending beyond the specifics of comparative military advantage gained by “owning” more of the “blink” between perception and response and looking instead to the risks that algo- rithmic autonomy might bring.69 Extending Massumi’s argument illustrates how the preemptive tendency produces the fear that leads to the prospect of

developing LAWS to combat future threats. But another preemptive response is possible: namely, an ethico-political preemption of the threat LAWS pose to the primacy of human judgment. What this response reveals is both a kind of military vulnerability and the power of (human, political) judgment. The military–private hybrid appears as a dystopian assemblage of for-profit warfare technology development, but it also seems to open a space for contestation through the power of laboring humans. Here resistance is not read as insubordination to be punished, as in the military, but rather as talent to be lost in a privileged sector of the econ- omy. Other contractors have and will engage with what Google abandoned, and the extent of the corporation’s withdrawal from military projects remains unclear.70 But the petition’s language of accountability beyond law—of morality and ethics, responsibility, and trust—sets terms for political resis- tance. To the internal corporate slogan adopted by the petition signatories— “don’t be evil”—the military would respond that its development of AI tech- nologies is in fact the lesser evil.71 But as we know from critical accounts of international humanitarian law, the logic of the lesser evil is embedded within this law, as it is within the principle of proportionality.72 In this sense, the military only builds upon a structure already present within the law itself, with its attendant forms of humanitarian sacrifice. When it comes to the question of whether to use international law to ban LAWS, the United States adopts a delayed approach to legal temporality: it wishes to proceed with “deliberation and patience” and to highlight how it is important “not to make hasty judgments about the value or likely effects of emerging or future technologies . . . our views of new technologies may change over time as we find new uses and ways to benefit from advances in technology.”73 It is too soon to judge, and yet it is not soon enough to develop the technologies that may later become unmoored from the power to judge and constrain them. Initiatives such as Project Maven are presented as work- ing in the pursuit of humanitarian ends, yet this is what Talal Asad might call a “humanitarianism that uses violence to subdue violence.”74 The law that we might seek to subsume LAWS under is complicit as well. The logic of preemption could be transformed into an ethical call, as a form of political resistance in the present. Legal solutions in the form of regulatory or ban treaties may come too late to integrate well into the already unfolding narrative. Turning the preemptive logic of the military strike on its head, this ethical preemption would seek to undo the hastening of present efforts to adapt algorithmic thinking for military ends. The political urgency is even more pressing as Project Maven continues to unfold, with further contracts awarded to a start-up firm whose founder, a former virtual-reality headset developer, described the future battlefield as populated by “super- hero” soldiers who “have the power of perfect omniscience over their area

of operations, where they know where every enemy is, every friend is, every asset is.”75 As Noll notes, the plurality of actors involved in this assemblage of military production makes it challenging to parse responsibility—both in a dystopian future where automated weapons make targeting decisions and in the present development of AI for military use. The relationships within the military–corporate assemblage will continue to push toward the singularity in incremental steps, whether intentionally or not. The exercise of human judg- ment through a politics of refusal may push back more forcefully than a law steeped in humanitarian violence.

NOTES

1. I thank Gregor Noll for the opportunity to respond to his contribution and Daniel Steuer for his engagement with the draft text. I am grateful to Hyo Yoon Kang for her careful reading and suggestions. 2. Sven Lindqvist, A History of Bombing (New York: New Press, 2000), 2. 3. Lindqvist's text offers an extended account of these asymmetries; see also Yves Winter, "The Asymmetric War Discourse and Its Moral Economies: A Critique," International Theory 3, no. 3 (2011): 488–514; Campbell Munro, "Mapping the Vertical Battlespace: Toward a Legal Cartography of Aerial Sovereignty," London Review of International Law 2 (2014): 233–61; Mark Neocleous, "Air Power as Police Power," Environment and Planning D: Society and Space 31 (2013): 578–93. 4. Brian Massumi describes this as the "blink" of contracted time between consciously registered (human) perceptions in Ontopower: War, Powers, and the State of Perception (Durham: Duke University Press, 2015). Donald MacKenzie's empirical work on high-frequency trading algorithms illustrates how this condensed temporality manifests materially; see his "How Algorithms Interact: Goffman's 'Interaction Order' in Automated Trading," Theory, Culture & Society 36, no. 2 (2019): 39–59 (I thank Hyo Yoon Kang for this reference). 5. Hannah Arendt, "The Conquest of Space and the Stature of Man," The New Atlantis: A Journal of Technology & Society (June 2007 [originally published 1963]): 43–55. 6. Noll, this volume, 81. 7. The principles of distinction—distinguishing between civilians and combatants—and of proportionality are fundamental to this body of law. Nasser Hussain observed how communities subjected to drone-based surveillance are made to bear risk on behalf of the security of remote populations elsewhere in "The Sound of Terror: Phenomenology of a Drone Strike," Boston Review, October 16, 2013, http://bostonreview.net/world/hussain-drone-phenomenology. 8. Proponents of autonomous weapons systems contend that they have the capacity for "better-than-human performance" and may perform "more ethically" than humans. See, for example, Ronald Arkin, "The Case for Ethical Autonomy in Unmanned Systems," Journal of Military Ethics 9 (2010): 332–41.


9. Denise Garcia, “Lethal Artificial Intelligence and Change: The Future of International Peace and Security,” International Studies Review 20, no. 2 (June 2018): 334–41. 10. Filippo Santoni di Sio and Jeroen van den Hoven, “Meaningful Human Con- trol over Autonomous Systems: A Philosophical Account,” Frontiers in Robotics and AI, 5 (February 2018): 1–14. 11. Gwendelyn Bills, “LAWS unto Themselves: Controlling the Development and Use of Lethal Autonomous Weapons Systems,” George Washington Law Review 83, no. 1 (2014): 176–208. 12. Noll, this volume, 77. 13. Ibid., 94. 14. This widely viewed film, which was posted online in November 2017, directs its viewers to a website, autonomousweapons.org, asking its readers to demand that their leaders support an international treaty banning autonomous weapons. See “Slaughterbots,” YouTube video, 7:47, posted by “Stop Autonomous Weapons,” November 12, 2017, https://www.youtube.com/watch?v=9CO6M2HsoIA. 15. Noll, this volume, 91. 16. Ibid. 17. Eyal Weizman describes the “necro-economy” at work in calculating propor- tionality in an exercise among IDF military lawyers: “Lacking any other criteria for measurement, death ratio is one of the gruesome ways in which proportionality is cal- culated and managed in practice. . . . [Each lawyer in the team of experts on law and military ethics] wrote down a number of civilian deaths they’d accept as legitimate under the principle of proportionality. The numbers were then counted and collated, and an average was calculated. It was 3.14—very approximately the mathematical constant π;” see Eyal Weizman, The Least of All Possible Evils: Humanitarian Vio- lence from Arendt to Gaza (London: Verso, 2011), 13. The exercise was described ear- lier by Yoram Feldman and Uri Blau in “Consent and Advise,” Haaretz, January 29, 2009, https://www.haaretz.com/1.5069101 (I thank Daniel Steuer for this reference). 18. Lawrence Rosen elaborates upon ways in which law is located within specific cultural settings in Law as Culture: An Invitation (Princeton, NJ: Princeton University Press, 2006). 19. Noll, this volume, 95. 20. The export of Western legal ideology through imperialism and colonialism, and particularly through a Roman law frame, has been widely documented; on the technical aspects of legal transplantation, see Jean-Louis Halpérin, “The Concept of Law: A Western Transplant?” Theoretical Inquiries in Law 10, no. 2 (2009): 333–54. 21. Lauren Benton, A Search for Sovereignty: Law and Geography in European Empires, 1400–1900 (Cambridge: Cambridge University Press, 2010), 6. 22. Within international law, the subfield of international criminal law is some- what exceptional in its capacity to attribute responsibility to individuals, as Noll addresses in discussing how law rules; see Noll, this volume, 92–93. The broader international legal field is concerned with state wrongs rather than individual (or state) crimes.


23. Benton, Search for Sovereignty, 7. Stuart Elden notes the "overlapping" sovereignty among some European states in the nineteenth century, where "many of the borders were still porous and ill defined;" see Stuart Elden, The Birth of Territory (Chicago, IL: University of Chicago Press, 2013), 324. 24. Antony Anghie, "Rethinking Sovereignty in International Law," Annual Review of Law and Social Science 5, no. 1 (2009): 291–310. 25. Antony Anghie's observations in this regard have influenced a large body of literature that builds upon his argument concerning the "dynamic of difference" between entities in international legal history; see generally Antony Anghie, Imperialism, Sovereignty, and the Making of International Law (Cambridge: Cambridge University Press, 2005). 26. Stuart Elden, "Contingent Sovereignty, Territorial Integrity, and the Sanctity of Borders," SAIS Review 26 (2006): 14. 27. I elaborate upon this argument in "Cartographies of the Present: 'Contingent Sovereignty' and Territorial Integrity," Netherlands Yearbook of International Law 47 (2017): 83–105. 28. U.S. Department of Justice, "Lawfulness of a Lethal Operation Directed against a U.S. Citizen Who Is a Senior Operational Leader of Al-Qa'ida or an Associated Force" (draft November 8, 2011), available at https://www.law.upenn.edu/live/files/1903-doj-white-paper (emphasis added). I explore this case study of the assassination of Anwar al-Awlaki in "Immanent Enemies, Imminent Crimes: Targeted Killing as Humanitarian Sacrifice," in Criminals and Enemies, ed. Austin Sarat, Lawrence Douglas, and Martha Umphrey (Amherst: University of Massachusetts Press, 2019), 130–54. 29. Although the operation in Yemeni territory was carried out with the state's permission, the then U.S. president Barack Obama made clear that the United States would have acted even without Yemen's agreement. Kendall, "Cartographies of the Present." 30. Ashley Deeks argues that a "state that has very limited military and police forces and no control over broad swaths of its territory almost certainly is 'unable' to suppress a large and sophisticated set of nonstate actors acting in that ungoverned area." See her " 'Unwilling or Unable': Toward a Normative Framework for Extraterritorial Self-Defense," Virginia Journal of International Law 52 (2012): 505. 31. Derek Gregory, "The Everywhere War," Geographical Journal 177, no. 3 (2011): 238–50. 32. Stuart Elden, Terror and Territory: The Spatial Extent of Sovereignty (Minneapolis: University of Minnesota Press, 2009), 69. 33. See Kendall, "Cartographies of the Present." Mark Neocleous argues that the "war on terror" has produced the return of "civilization" as a category of international power; see Mark Neocleous, "The Police of Civilization: The War on Terror as Civilizing Offensive," International Political Sociology 5 (2011): 144–59. Ntina Tzouvala draws upon the late-nineteenth-century writing of James Lorimer in a reading of the "unwilling or unable" doctrine as a contemporary application of civilizational criteria; see Ntina Tzouvala, "TWAIL and the 'Unwilling or Unable' Doctrine: Continuities and Ruptures," AJIL Unbound 109 (2015): 266–70.


34. For example, a standard textbook account moves from Hugo Grotius's 1625 text On the Law of War and Peace, to the "great European Jurist" Georg Friedrich von Martens, to the Battle of Solferino and Henri Dunant's activism leading to the signing of the 1864 Geneva Convention by the European powers; see Emily Crawford and Alison Pert, International Humanitarian Law (Cambridge: Cambridge University Press, 2015). 35. Article 11 of the General Act of the Conference of Berlin, February 26, 1885, in Barbara Harlow and Mia Carter, eds., Archives of Empire: Volume II, The Scramble for Africa (Durham: Duke University Press, 2003), 33. See also Matthew Craven, "Between Law and History: The Berlin Conference of 1884–1885 and the Logic of Free Trade," London Review of International Law 3, no. 1 (2015): 31–59. 36. Christiane Wilke examines the racialized and gendered assumptions at work in the development of the 1923 Draft Rules on Aerial Warfare, noting how during the 1920s, despite the fact that the new technologies of aerial bombardment were primarily used in colonial territories, "discussions about regulations of aerial bombardment by UK and U.S. international lawyers and international relations scholars focused almost exclusively on examples of aerial bombardment within Europe." See Christiane Wilke, "How International Law Learned to Love the Bomb: Civilians and the Regulation of Aerial Warfare in the 1920s," Australian Feminist Law Journal 44, no. 1 (2018): 36. 37. On "aerial sovereignty," see Munro, "Mapping the Vertical Battlespace." On "conditional equality," see Tanja Aalberts, "Rethinking the Principle of (Sovereign) Equality as a Standard of Civilization," Millennium: Journal of International Studies 42 (2014): 767–89. Marcus Gunneflo observes that "targeted killing is deployed in an American homeland which is the planet;" see Marcus Gunneflo, Targeted Killing (Cambridge: Cambridge University Press, 2016), 82. 38. Hugh Gusterson, Drone: Remote Control Warfare (Cambridge, MA: MIT Press, 2016), 148. 39. Michael Horowitz makes a similar point in arguing that, in the context of questions of jus in bello, "it makes the most sense to think about autonomous weapons in comparison with existing weapons in realistic scenarios;" see Michael Horowitz, "The Ethics and Morality of Autonomous Warfare: Assessing the Debate over Autonomous Weapons," Daedalus: The Journal of the American Academy of Arts and Sciences 4 (2016): 29. 40. Nehal Bhuta et al., Autonomous Weapons Systems: Law, Ethics, Policy (Cambridge: Cambridge University Press, 2016), 4. 41. Grégoire Chamayou, Drone Theory (London: Penguin, 2015), 77. 42. Another possibility would be contract or tort law, as Noll notes in his chapter, though this appears unlikely and in any case would be uneven across domestic jurisdictions. 43. See generally Didier Fassin, Humanitarian Reason: A Moral History of the Present (Berkeley: University of California Press, 2012). 44. Weizman, Least of All Possible Evils, 3. 45. Lindqvist, History of Bombing, 2. 46. See Kendall, "Cartographies of the Present." The United States now refers to "unwilling or unable" as a "legal standard;" see The White House, "Report on the


Legal and Policy Frameworks Guiding the United States’ Use of Military Force and Related National Security Operations” (December 2016), 10, available at https:// www.lawfareblog.com/white-house-releases-report-legal-and-policy-frameworks- american-uses-military-force. 47. Nasser Hussain observed that “because drones are able to hover at or above 30 thousand feet, they are mostly invisible to the people below them. But they can be heard. Many people from the tribal areas of Pakistan (FATA) describe the sound as a low-grade, perpetual buzzing, a signal that a strike could occur at any time. The locals call the drones machar, mosquitos. Because the drone can surveil the area for hours at a time, and because each round of surveillance may or may not result in a strike, the fear and anxiety among civilians is diffuse and chronic.” Hussain, “Sound of Terror.” 48. See Massumi, Ontopower. Massumi describes ontopower as “a power of becoming whose force is maximally abstract . . . a power of emergence” (ibid., 223) or a force of life that manifests temporally as a “force-to-own time” (ibid., 73). 49. MacKenzie, “How Algorithms Interact,” 53. 50. Massumi, Ontopower, 60. 51. Ibid., 97. 52. Arendt, “Conquest of Space,” 46. 53. Ibid., 52. 54. “Project Maven is the first activity of an ‘Algorithmic Warfare’ initiative in the US military designed to harness the potential of AI and translate it into usable military capabilities.” Michael Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power,” Texas National Security Review 1, no. 3 (May 2018): 42. 55. The JAIC “will have oversight over almost all service and defense agency AI efforts.” Sydney Freedberg, Jr., “Joint Artificial Intelligence Center Created under DoD CIO,” Breaking Defense, June 29, 2018, https://breakingdefense.com/2018/06/ joint-artificial-intelligence-center-created-under-dod-cio/. 56. Remarks by Deputy Secretary of Defense Robert Work at the Center for New American Security Defense Forum, as quoted in Lawrence Freedman, The Future of War: A History (London: Allen Lane, 2017), 244. 57. U.S. Deputy Secretary of Defense, “Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven),” Memorandum (April 26, 2017), available at https://www.govexec.com/media/gbc/docs/pdfs_edit/establishment_of_the_awcft_ project_maven.pdf (emphasis added). 58. Ibid. 59. Paul McLeary, “Pentagon’s Big AI Program, Maven, Already Hunts Data in Middle East, Africa,” Breaking Defense, May 1, 2018, https://breakingdefense.com/ 2018/05/pentagons-big-ai-program-maven-already-hunts-data-in-middle-east-africa/. 60. Marine Corps Colonel Drew Cukor, July 2017, as quoted by Kelsey Atherton, “Targeting the Future of the DoD’s Controversial Project Maven Ini- tiative,” C4ISRNET, July 27, 2018, http://c4isrnet.com/it-networks/2018/07/27/ targeting-the-future-of-the-dods-controversial-project-maven-initiative/. 61. Cheryl Pellerin, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” US Department of Defense News, July 17, 2017,


https://dod.defense.gov/News/Article/Article/1254719/project-maven-to-deploy- computer-algorithms-to-war-zone-by-years-end/. 62. In Simonite, “Pentagon will Expand AI Project,” Wired, May 29, 2018. The image appears in “Disruption in UAS: the Algorithmic Cross-Functional Team (Project Maven),” presentation by Lieutenant General Jack Shanahan, OUSDI Director for Defense Intelligence (Warfighter Support), March 20, 2018, also available at http://airpower.airforce.gov.au/APDC/media/Events-Media/ RAAF%20AP%20CONF%202018/1130-1200-Shanahan-Disruption-in-UAS-The- AWCFT.pdf. 63. McLeary, “Pentagon’s Big AI Program.” 64. Pellerin, “Project Maven to Deploy Computer Algorithms.” 65. Clarke Alan and Daniel Knudson III, “Examination of Cognitive Load in the Human-Machine Teaming Context” (Master’s thesis, Naval Postgraduate School, June 2018), 8. 66. Ibid. 67. Atherton, “Targeting the future.” 68. Mallory Locklear, “Ex-Pentagon Official behind Project Maven ‘Alarmed’ by Google Withdrawal,” Engadget, June 26, 2018, https://www.engadget .com/2018/06/26/pentagon-official-project-maven-alarmed-google-withdrawal/. 69. The petition’s signatories noted that a senior Google official had sought to reassure them that the technology would not be used for weapons or drones but added that “the technology is being built for the military, and once it’s delivered it could easily be used to assist in these tasks.” Letter to Sundar Pichai, available at https:// static01.nyt.com/files/2018/technology/googleletter.pdf. 70. Lee Fang, “Google Hedges on Promise to End Controversial Involvement in Military Drone Contract,” The Intercept, March 1, 2019, https://theintercept .com/2019/03/01/google-project-maven-contract/; Lee Fang, “Defense Tech Startup Founded by Trump’s Most Prominent Silicon Valley Supporters Wins Secre- tive Military AI Contract,” The Intercept, March 19, 2019, https://theintercept .com/2019/03/09/anduril-industries-project-maven-palmer-luckey/. 71. “ ‘They say, look this data could potentially, down the line, at some point, cause harm to human life,’ said Work. ‘I said, yes but it might save 500 Americans or 500 allies or 500 innocent civilians.’ ” Locklear, “Ex-Pentagon Official behind Project Maven ‘Alarmed.’ ” 72. Weizman writes that the principle of proportionality is “the clearest mani- festation of the lesser evil principle;” “IHL does not seek to end wars but rather to ‘regulate’ and ‘shape’ the way militaries wage them. . . . Western militaries tend to believe that by moderating the violence they perpetrate, they might be able to govern populations more efficiently.” Weizman, Least of All Possible Evils, 11. 73. U.S. Opening Statement at the Group of Government Experts (GGE) on Lethal Autonomous Weapons Systems, Geneva, April 9, 2018, available at https:// geneva.usmission.gov/2018/04/09/ccw-u-s-opening-statement-at-the-group-of- governmental-experts-meeting-on-lethal-autonomous-weapons-systems/.


74. Talal Asad, “Reflections on Violence, Law and Humanitarianism,” Critical Inquiry 42, no. 2 (2015): 390–427. 75. The founder of the firm, Anduril Industries, is also a financial supporter of the current U.S. president, Donald Trump, and the Republican Party. Fang, “Defense Tech Startup” (I am grateful to Hyo Yoon Kang for this reference).

Chapter 6

Omnivoyance and Blindness

Max Liljefors

We struggle here with imagery. — Murdoch

To say "a world of war is emerging" is paradoxical. That something emerges means it comes forth from and makes itself visible against a ground, like a line that transforms into a figure against the ground of the paper. That a world—a totality of all things—consists of some single material, on the other hand, means that the differentiation between figure and ground is in a way undone. This may make sense only about images. Art historians sometimes say that, after Manet, the task of painting was no longer to make pictures of things that looked like they were made of something other than paint. It is also said about certain old masters, like Titian and Rembrandt, that in their late works everything seems to be made of paint. Such paintings are worlds of paint as much as mimetic representations of men, horses, palaces, and so on. A few brushstrokes provide the cues required for these imagined objects to emerge from the ground, but they nevertheless admit, or declare, that they are made of paint in the final analysis. In fact, the paint emerges into view through the depicted things, interrupting their emergence, and it inflects it back toward the surface. The represented world is thus led back into the paint matter, but in the same process, the paint takes on something of a "world" quality. This makes the beholder's gaze move back and forth between these two universes of the picture—motif and paint, image and reality. This is a circuitous way to begin a text about war, but as an art historian, my route to the topic of this book must go via the detour of images. If war is transforming, as my colleagues in this volume argue, into a perpetual state of simmering, low-intensity conflict between multiple parties—states, militias,


sects, corporations, private contractors, partisans, and so forth—often of vague legal status and in shifting alliances, then “War,” with a capital “W,” is blending into the background: it is becoming the ground. That is how war can disappear into a world “of war,” where everybody is fighting unending wars. From where I write, much of this revolves around vision and its oppo- site, blindness, both figuratively and literally. The Westphalian distinctions between state and nonstate actors, military and civilian targets, and legitimate and illegitimate acts of violence are getting difficult to observe. The spatial delimitation of war, its “contour,” is becoming ever more diffuse because of remote-controlled weapons, global terror networks, and cyber warfare. The “front” of war, so easily imagined in the form of a line, is increasingly difficult to define and predict. There is no location that cannot at any time turn into a front. War can thus no longer be explained as a particular set of conditions or be mapped within certain coordinates. Instead, we have fleeting levels of threat, conflict, and security in between what we used to call “war” and “peace.” Imaging technologies like those implemented in drone surveillance, bio- metrics, and visual artificial intelligence are at the center of this transforma- tion of war. I shall refer to such technologies here simply as “technovision.” Technovision increasingly penetrates both military and civilian contexts, con- tributing to the blurring of the line between those contexts. That also makes technovision more difficult to characterize. What appears to be its “nature” depends partly on the context, and it is likely, therefore, that the more ubiq- uitous technovisual systems become, the more amorphous they will seem. My argument about technovision is, in a nutshell, that the obverse of vision is blindness. This is a simple enough logic: to look in a certain direction means not seeing other sides; to survey something broadly means abandoning detailed scrutiny; and to behold a painting on a wall means that you do not see the wall behind it. Regardless of how self-evident this may seem, I think this argument deserves to be made in relation to technovision, because tech- novision is often shrouded by a rhetoric of superiority, of seeing “more” and “better.” This is largely true, I would say, of advocates as well as of critics of technovision. In contrast, I shall argue that to claim to see “more” or “bet- ter” is not simply to be less blind but also to be more blind in certain ways. The logic here is that of the “other side of the coin”—one side of the coin is exactly as big as the other. Hence, a visual capacity of a certain degree is necessarily accompanied by a blindness of the same degree. I shall describe here three levels of blindness that accompany technovision’s claims to visual superiority. Each of them is related to claims to a particular type of superi- ority, and all of them have to do in different ways with the problematics of distinguishing figure from ground and image from medium. I shall put forth my argument partly through images, because every image always tells two


stories. One is about its subject matter or "content," and the other is about the nature of vision or, phrased differently, about itself as image. Images always demonstrate some negotiation around vision, which constitutes their "horizon" (always figuratively, sometimes literally), which is the basis of and the limit to any gaze they represent.

FIRST BLINDNESS: TO SEE MORE IS ALSO TO SEE LESS

In his book The Vision Machine, Paul Virilio writes that, since the French Revolution, totalitarian regimes in the West have relied on surveillance, the “elucidation of details,” as their primary method of governing.1 Virilio gives the name omnivoyance to this principle of power. In a later text, he uses the same concept to describe military satellites orbiting the earth.2 The word “omnivoyance” links vision to fantasies of absolute power. In the Middle Ages, the term was used for full-face portraits of Christ that seem to look at each beholder of the picture regardless of where she stands, an optical illusion that, to medieval Christianity, symbolized the omniscience of God. The idea was that God not only sees and knows everything but that His gaze falls indi- vidually and differently on every person. Nikolaos Mesarites (ca. 1163/4— after 1216), in his “Description of the Church of the Holy Apostles at Constantinople” (1198–1203), says about a Christ Pantokrator mosaic in the south galleries upstairs in what is now the Hagia Sofia that the Pantokrator’s eyes are “wholly directed toward all at once and at the same time toward each individually.” To those who have “a clean understanding,” Mesarites writes, the Pantokrator’s gaze is kind and benevolent, but to “evildoers,” it is “wrath- ful, terrifying, stern and filled with hardness.”3 An enlightening illustration of the idea of omnivoyance is found in Hiero- nymus Bosch’s (or a follower’s) panel Seven Deadly Sins and Four Last Things (1505–1510), which once hung in Philip II’s bedroom in the royal monastery Escorial, outside Madrid, and is now in the Prado (figure 6.1).4 Here it is not only the physical figure of Christ, located in the middle of the painting, who beholds the beholder, but also the eye formed by the entire roundel itself, which stares at the world. In the pupil of this eye, Christ rises from the tomb and looks at the sinners who indulge in the cardinal vices that are represented in the surrounding scenes. Cave cave deus videt, “Beware, beware, God sees,” reads the Latin motto under Him. Joseph Leo Koerner has remarked that the painting portrays the sins like reflections in the periphery of God’s eye. Thereby it represents “a world picture where the Weltanschau- ung is God’s,”5 from whose transcendent viewpoint the world appears askew, distorted by sin. The Seven Deadly Sins thus presents omnivoyance as an attribute of absolute power emanating from a punctiform center beyond the


Figure 6.1 Hieronymus Bosch (or follower) The Seven Deadly Sins and the Four Last Things, 1505–1510 Oil on poplar panel 47 × 55 in. (119.5 cm × 139.5 cm) Museo del Prado, Madrid (Detail)

world of human affairs. The painting may also spur us to reflect on what it means to exist and dwell in the presence of the omnivoyant gaze. How do those exposed experience life under the omnivoyant eye? From their point of view, is not its uncompromising gaze the very force that decenters their world, pushes it out of joint? Is not their world distorted or, rather, pushed into becoming a picture, slanted and flattened out, by the pressure of the gaze? In this way, Bosch’s painting reveals the logic of omnivoyant power— the lifeworld can no longer remain itself under the scrutiny of an invisible yet all-pervading surveillance.


Reports from areas that are monitored by military drone surveillance confirm this logic. In such areas, people tend to become anxious and cautious about everyday activities, such as going to the market, attending weddings and other social events, talking on the phone, and so forth, because even the most mundane behavior might cause the technovisual intelligence to define them as a target.6 The omnivoyant gaze thus shapes the world it examines, and, therefore, it will always observe a distorted world. This is in itself a form of blindness. Virilio uses the term omnivoyance in relation to technovisual systems because their powers can indeed seem superhuman. The term also implies that technovision is, like the gods, in a certain way inscrutable, a point I shall comment on below. Should we accept this rhetoric? Is there not a risk that the notion of omnivoyance will fortify the "mythology of faultlessness" that already dominates the discourse? The military–industrial complex is all too willing to demonstrate the superiority of technovision, demonstrations that reverberate with the cultural imagination of a technological sublime. Let me quote just one example here, a 2004 test by the U.S. Air Force UAV Battlelab, in which a Pointer drone equipped with facial recognition software was able to detect and correctly identify a person who was sitting inside a car hidden under leafy trees from over three kilometers away.7 It is obviously impossible for a human observer to achieve such a feat. Such examples, given in abundance, add up to a rhetoric of "evidence," which is further bolstered by expressions like "full spectrum dominance" and "visual primacy," coined as epithets of military technovision, as well as by the very names given to technovisual systems. These names include Argus, in Greek mythology a hundred-eyed giant who never sleeps, and the Gorgon, a monster whose gaze has the power to petrify whoever looks into her eyes. When those names are applied to technovisual systems—ARGUS (Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System) and the Gorgon Stare are both drone-carried camera systems developed by DARPA—and when they are supplemented with impressive technical facts (combined, these systems are said to cover an area of 100 square kilometers, stitching together feeds from 368 camera chips into a single, real-time composite video image with 65 trackable targets, resulting in an unprecedented 1.8 billion pixel image density8), then the vocabularies of the divine and of the machine merge into a rhetoric of infallibility and invincibility. What this rhetoric fails to articulate is the blindness, of various sorts, that makes up the counterpart of any claim to omnivoyance. The "first" level of blindness is technical in nature and the logical result of a constant growth in the size of data sets. Any increase of data input beyond a certain point will strain the system's capacity to analyze data. In 2009 alone, U.S. drones over Iraq and Afghanistan are reported to have gathered a full twenty-four years of video footage, a figure that was predicted to increase thirtyfold during the


following two years.9 Despite the fact that the aforementioned ARGUS imaging system's video feed has only half the frame rate of consumer video (12 frames per second instead of 24, 25, or 30), it is reported to produce 6,000 terabytes of data per day.10 Such enormous amounts of data are difficult to transfer wirelessly between aircraft and ground. They also overwhelm the capacity of any single human analyst. A technical way to mitigate this data deluge is to lower the information density, for instance by decreasing the frame rate or pixel resolution. Thus, an increase of data at one end of the system may be countered by a decrease of information at the other end. Another solution is to split the task of analysis among several analysts—which will generate problems of coordination, communication, and distribution of responsibility. All these kinds of negotiation about the capacities and limitations of technovision are forms of systemic friction. Friction, which is an unavoidable part of all systems that include machines or humans, constitutes the first level of blindness. Philosopher of science Don Ihde has described friction in technovisual systems with a formula: the "magnification-reduction structure of technology use."11 Ihde postulates that any technology that magnifies an observer's perceptual range will reduce the quality of perception in one way or another. His thesis formulates a general principle about all imaging technologies—to see "more" through them is also to see "less." Ihde provides a simple and striking image of the magnification-reduction structure: a person wants to pick an apple from a tree, but it is too high up for him, so he uses a stick to knock the apple down. In this illustration, the magnification aspect is expressed by the fact that the stick extends the person's reach. The reduction aspect is expressed by the fact that the stick does not allow the person to feel the apple's skin and assess its ripeness. Ihde's image tells us something fundamental about how technologies mediate experience, and it can also serve as an illustration of what I refer to as the first level of blindness of technovision. But it cannot do justice to the "deeper" levels of blindness I shall discuss here, as will become apparent below when we consider the growing automation of analysis in technovisual systems. Before moving on to that topic, however, I shall attend briefly to the question of which images are suitable as illustrations or metaphors of technovision.
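To make the scale of this first, technical level of blindness concrete, the ARGUS figures cited above can be combined in a rough back-of-envelope calculation. The sketch below (in Python, purely illustrative) assumes an uncompressed feed of roughly three bytes per pixel and continuous operation over a full day (assumptions of mine, not figures given in the sources cited) and arrives at a volume on the order of the reported 6,000 terabytes per day:

```python
# Rough, illustrative estimate of the daily data volume of a wide-area sensor
# of the kind described above. The pixel count and frame rate are taken from
# the text; bytes per pixel and round-the-clock operation are assumptions.

PIXELS_PER_FRAME = 1.8e9       # "1.8 billion pixel image density"
FRAMES_PER_SECOND = 12         # half the frame rate of consumer video
BYTES_PER_PIXEL = 3            # assumed uncompressed color depth
SECONDS_PER_DAY = 24 * 60 * 60

bytes_per_day = PIXELS_PER_FRAME * FRAMES_PER_SECOND * BYTES_PER_PIXEL * SECONDS_PER_DAY
print(f"approx. {bytes_per_day / 1e12:,.0f} terabytes per day")  # roughly 5,600 TB
```

Whatever the exact figures, halving the frame rate or the pixel resolution halves that number, which is precisely the negotiation between magnification and reduction, between more data gathered and less information retained, that constitutes the friction described above.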

SHOULD WE ANTHROPOMORPHIZE?

Attempts in philosophy to describe the relation between visual technology and human perception usually go in one of two ways. Certain philosophers warn against the anthropomorphization of technology. In his book Into the Universe of Technical Images, Vilém Flusser argues that all imaging


apparatuses are ultimately blind, although we may erroneously imagine them to possess a human-like faculty of sight:

Apparatuses . . . should not be anthropomorphized, however convincingly they may simulate human thought functions. . . . What we find difficult to see (e.g., a magnetic field, unless we use iron filings) is, from [the apparatus’] standpoint, just another possible function. It transforms the effects of photons on molecules of silver nitrate into photographs in just the same way: blindly.12

Thus, according to Flusser, if we imagine that a machine “sees” as we do, we are committing the mistake of anthropomorphizing or humanizing tech- novision. Paul Virilio, in his book The Vision Machine, also stresses the discrepancy between human vision and machine vision. The latter, he writes, is “sightless.” He goes on to say: “The production of sightless vision is itself merely the reproduction of an intense blindness that will become the latest and last form of industrialisation: the industrialisation of the non-gaze.”13 Both Flusser and Virilio assert that machine vision lacks something essen- tial in human vision, and they articulate this lack by describing technovision in terms of blindness—the machine is “blind” and “sightless” and represents a “non-gaze.” When machines interpret reality for humans, Virilio says, there occurs a “splitting of viewpoint . . . between the animate (the living subject) and the inanimate (the object, the seeing machine).”14 Flusser and Virilio believe that technovision cannot be understood on the model of human sight. They insist on a categorical divide between the human and the machine, rejecting models that—falsely in their view—make the inner workings of technovisual apparatuses appear as essentially the same as those in human perception. In short, they portray technovision as impenetrable to human observers. In contrast to that view, in recent years the idea that there is a symbiosis between humans and technology has gained momentum. The proponents of this idea make a bold, epochal claim: that this symbiosis has ushered in the era of the “posthuman.” A detailed account of this discourse lies outside the scope of this essay.15 Suffice it to say that the notion of the posthuman seems to assume a closure of the human–machine divide that results in hybrid “life forms” such as the cyborg. If we confront the idea of the posthuman with the question of the prehuman, however, it becomes clear that there is a clash of time frames here. If machine vision has pushed the human sensorium into the posthuman era, which kinds of perceptual universes preceded (and, of course, in many cases still coexist with) the human? To answer that question we would need to investigate the evolution of mammal life over hundreds of mil- lions of years. (For instance, neuro-philosopher Jaap Panksepp hypothesizes an archetypical form of neuronal “self-schema” in the mammal brain, which


he aptly terms “SELF,” an abbreviation of Simple Ego-type Life Form.16) In contrast, theories that postulate the emergence of a “posthuman condition” from technologies that were invented in recent decades or centuries operate, obviously, within a much narrower timescale. My take on this asymmetry is as follows. I see no need to detach the notion of the posthuman from the notion of the human. In fact, I am inclined to regard the idea of conscious- ness “beyond the human” as the most human of all thoughts. The entire his- tory of religion testifies to that. As for warnings against anthropomorphizing technology, I agree we should not unreflectively attribute human qualities to machines. But I also think that the very notion of “human vision” in this context is already a product of a far-reaching and very common anthropo- morphization of the human being herself. Without this instance of anthro- pomorphism, human vision appears no less “mechanistic” than the “vision” of imaging technologies—it is just a system of sensors and electrochemical data processing. At the heart of this anthropomorphization of human vision lies a particular fantasy that has a firm grip on our imagination, namely the fantasy of the gaze, the idea that something immaterial—an invisible vehicle of sight—emanates from the eye as a token of the observer’s agency and intent, traversing the threshold between interior subjectivity and the external world. Without that fantasy, it is difficult even to begin to speak about social and relational aspects of vision, which is a topic I shall return to below. To sum up, instead of proclaiming an epochal divide between the human and the posthuman, I suggest that we think about sentience as a dark field, extend- ing in all directions, onto which we project the searchlight of “humanness” as a movable circle of light—a kind of gaze, as it were—the circumference of which we can expand or shrink. Whatever falls outside that circle will be marked as our Other, and the dark terrain also encompasses ourselves. I shall move on from here with what might be called an “illustrated myth” in an attempt to put anthropomorphism to work reflectively. Myths can con- dense convoluted issues into sharp yet multilayered images, and I shall draw here on a mythical motif of the evolution of the human. The image is this: an ape climbed down from the trees and stood up on two legs, and hence it became human. Please note that I am not debating the general veracity of this as an account of an evolutionary process. But the motif of a primate rising up from the ground into an upright stance is mythical nonetheless, because it condenses into a single image a complex development that involves, among other things, alterations to the curvature of the spine, the lengthening of the thigh bone, the strengthening of the knee, and so on—a physical evolution that stretches over millions of years that no single primate underwent. The image is powerful because it is so simple and at the same time multidimen- sional. The struggle against gravity is inscribed in it, as is the quest for wider horizons. Darwin probably sensed this when he wrote in The Descent of Man


that man’s shift from a quadrupedal to a bipedal stance freed his hands from the task of locomotion so that they could be used to throw stones and spears.17 Freud believed that bipedalism reshuffled the hierarchy of the sense faculties so that sight predominated over smell and touch.18 With the upright position, then, followed the themes under discussion here—aggression over distance and the dominion of sight. How far might we take the mythic significance of this image? It has been hypothesized that the upright stance allowed for the back of the skull and, subsequently, the brain to grow. But the larger head made human birth painful—nature’s price for intelligence—and the evolu- tionary response was premature birth. Hence, homo sapiens, the “knowing human,” comes into this world unfinished, searching for what she feels is missing from her. Again, this is neither science nor the critique of science; it is myth feeding off science to paint a portrait of the human as a creature of vision prone to doubt and aggression. Let us imagine that the human ascent continues. Evolution does not halt; it does not stop at bipedalism. Instead, it continues to push the primate toward a higher point of observation at which yet wider, unseen horizons await her. In this next, extrapolated step in human evolution, the human will become more “knowing” but also more unfinished and formless. What can serve as a model, a picture, for this emergent life form (which we might be tempted to call the “posthuman”)? The answer is almost self-evident: it is the drone pilot. This is not only because the drone pilot, through advanced forms of technovi- sion and remote control, has access to an elevated, flexible vantage point and can unleash violence across global distances but also because she embod- ies a particular kind of uncertainty or formlessness that is a hallmark of the “emerging world of war” under discussion here. The drone pilot sits before a configuration of computer screens and launches strikes that materialize on the other side of the globe. She is removed from the dangers that soldiers face in combat. Nonetheless, studies indicate that post-traumatic stress symptoms beset her as much as they do regular soldiers. This raises the ontological question: Is the drone pilot there, in the theater of war? The ambiguity of that question is reflected in the difficulties that surround the acknowledgment of the drone pilot’s services. In 2013, the U.S. Defense Department introduced a Distinguished Warfare Medal for drone pilots that was ranked equal to certain battlefield honors. Military veterans protested, and the medal—dubbed the “Nintendo Award” by critics—was subsequently revoked. Later, the Pentagon opted to award drone pilots already existing medals but with a small “R” added to them, for “remote.”19 I read this hesitancy about recognizing the drone pilot’s services as a sign of a more general difficulty in distinguishing her, in clearly seeing her. She appears blurry and amorphous because she has not yet fully evolved. She has not completely arrived here, or, to phrase it dif- ferently, her world, where her form of life will find its shape, is still emerging.


DUAL HORIZONS

In a photograph from 1993 by Mexican artist Gabriel Orozco, titled Island within an Island, a line of concrete obstacles encloses a desolate parking lot against the skyline of lower Manhattan under a cloudy sky (figure 6.2). For a beholder today, the photo might seem eerily to forebode the al-Qaeda terrorist attack eight years later—the two hijacked airliners could materialize at any moment from the clouds, it seems, on a collision course with the World Trade Center, symbol of capitalism, towering over the adjacent buildings in the middle of the picture. That would, of course, be an interpretation of Orozco's photo with hindsight. Back in 1993, beholders would rather have associated the picture with the attack that took place on February 26 of that year, when a bomb in the World Trade Center garage killed six people and injured a thousand. In Orozco's photograph, a small heap of detritus that mirrors the mighty skyline has been assembled against a concrete barrier—broken boards mimic the skyscrapers; a dirty puddle resembles the surrounding Hudson River;

Figure 6.2 Gabriel Orozco Island within an Island (Isla en la Isla), 1993 Silver dye bleach print (Cibachrome) 16 × 20 in. (40.64 × 50.8 cm) Courtesy of the artist and Marian Goodman Gallery


and the rough surface of the barrier is the cloudy sky. This visual echoing between foreground and background resounds across the deserted expanse that separates the remote financial center from the worthless garbage up front. The empty area not only represents the socioeconomic divide between the rich and the poor but also facilitates "the image in the image" indicated in the photograph's title. It makes visible, inside the photo itself, the simultaneous alterity and sameness that is the heart of all images. Over there lies the privileged financial enclave and over here, the raked-up pile of discarded rubble—in between, the fenced-off bareness that apostrophizes enclosedness and separation and, nevertheless, makes the two opposites reflect each other. Imagine now a primate, a hominid, crouching before the scraped-together rubble, examining it by smell and touch. Something causes the creature to cast its gaze upwards. (Perhaps the hand of an invisible deity silently reorchestrates the synapses firing in its brain.) Its body follows, until the creature is standing up, balancing on two legs only—the human being has emerged. What does she see? When her gaze rises above the concrete barrier, an empty plain extends before her, and beyond that, the distant skyscrapers stand like colossal mountains. A wider horizon embraces her. Perhaps faintly aware that she has metamorphosed, she drinks in the view. New impulses rush through her, a will to conquer, to accumulate riches, to hurl stones. But also: Must not the wider horizon and the shifting back and forth of her gaze between the near and the far instill in her mind a doubt about the reality of what she beholds? Is the detritus in front of her simply a pile of muck, or does it harbor a reflection of that larger world that has unfolded—a premonition, perhaps, that the city will lie in ruins? And if she were to cover the expanse, would the distant world, now a mere image, a promise, prove to be as solid as the detritus by her feet? And if so, would its remote allure hold true? From this point, the creature's world will be a hall of mirrors. Two horizons confuse the measurement of distance and cast relationality into doubt. The far horizon divides the sky from the ground and allows the gaze to take flight. The near horizon is made of concrete, and it blocks the line of sight. The separation of the two is a manifestation of the dual function of every horizon, which is to open up and to delimit a field of vision. Echoes and resemblances project across that field, weaving a world inside it. Thus, distance gives birth to mirroring and subsequently to the children of mirrors—desire and theory.

SECOND BLINDNESS: LOSS OF REFLEXIVITY

I shall now return to Ihde’s magnification-reduction formula. As I mentioned earlier, Ihde’s pedagogical example of a man using a stick to knock down an apple from a tree does not do justice to the deeper levels of blindness that


come with claims to omnivoyance. The reason for this is that Ihde’s stick- wielding man sees the apple from the outset. He has already identified the familiar object when he reaches for it, and therefore he will be aware that his perception of the apple is “reduced” when it is mediated through the stick, compared to how it would be if he had held the apple in his hand (which we must assume he has done previously with other apples many times). In that sense, the stick—his technovisual system—is transparent to him. That is often not the case with advanced forms of technovision, in which visual artificial intelligence plays an increasingly important role. In response to the massive growth of data acquisition, technovisual systems are being designed to perform the task of analyzing visual data themselves. An example of such a system is a DARPA project called the “Mind’s Eye,” which is described as being capable not only of identifying objects but also of inter- preting the meanings of actions and events. Mind’s Eye is portrayed as learn- ing about reality in the same way a human being acquires a language:

Machine vision . . . has made continual progress in recognizing a wide range of objects and their properties, what might be thought of as the nouns in the description of a scene. The focus of Mind’s Eye is to add the perceptual and cognitive underpinnings for recognizing and reasoning about the verbs in those scenes, enabling a more complete narrative of action in the visual experience.20

The meaning of the above quote is not entirely easy to grasp. What does the metaphor of learning a language connote here? First, the comparison anthropomorphizes the technovisual system—we are encouraged to think about Mind’s Eye as an intelligent child who gradually familiarizes herself with the world of human conduct and thereby learns to behave properly in it. Second, the comparison gives the impression that Mind’s Eye learns an ideal language that stands in a symmetric and transparent one-to-one rela- tion with reality—one word standing for this object, another word for that action, and so forth. The system’s command of the meaning of human reality is presented as a matter of it being able to correctly name things and deeds. I am skeptical about the appropriateness of that image. To learn a language is not just to memorize a list of vocables but also (among other things) to internalize sociocultural frameworks for interpreting and expressing experi- ences—including the experience of language not being transparent vis-à-vis reality, a reality that language is a part of but with respect to which also seems oddly external. It is more adequate to say, I think, that visual artificial intelligence systems like Mind’s Eye construct a language (rather than learn a preexisting one), which the human operator of the system must attempt to master. The operator is then in the position of a translator between the system of visual artificial intelligence and the realm of human language. To the latter


belong not only proper names but all the nested linguistic complexities of taking decisions, settling accountabilities, and making political and ethical assessments. What are the conditions of translation here? On the one hand, it is true that in the final analysis every algorithm stems from a human mind—technovision is built and calibrated by human intellect. From this point of view, an autonomous, operator-independent technovisual system is difficult to imagine outside of science fiction. On the other hand, as visual artificial intelligence becomes more adept and increasingly complex, it is reasonable to expect a divide to grow between the machine and the human mind. The divide might manifest itself as outright malfunctioning, but it might also take the form of a gradual, less perceptible drift of system behavior away from the operator's intentions. As Gregor Noll points out in this volume, such a drift will be particularly difficult to notice if "the particular combination of human and machine cognition is perceived as being more 'objective' than a purely human form of cognition, and includes a learning function."21 That is to say, it is when technovision is thought of as a guarantor of objectivity—for instance, by being portrayed as mastering a language that is transparent vis-à-vis reality—that technovision will in fact be most opaque, most difficult to see through. Perhaps a good name for this impenetrability of technovision is "blinding clarity." By that I mean an impression of transparency, the nature of which is itself opaque. One can assume that there can be many conscious and unconscious reasons for actors to cultivate such an impression in themselves and in others—to protect investments, to conceal collateral damage, wishful thinking, and so forth. Blinding clarity then means that critical assessment is replaced by faith in the assumed objectivity of the system. This constitutes the second level of blindness—a blindness to the opacity of the machine that helps us to see. I have argued above that that blindness is not represented in Ihde's image of a man who knocks down an apple with a stick, because the man has already identified his "target"—the apple—before reaching for it. For visual artificial intelligence, by contrast, identifying the target is the primary task. I shall now propose an alternative image that better illustrates the dilemma we are faced with at the second level of blindness. We find it in Denis Diderot's early tract "Letter on the Blind for the Use of Those Who See," from 1749. In this philosophical investigation of vision, carried out through a study of blindness, Diderot asks a congenitally blind man about how he conceives of the functioning eye. The man—who has never experienced sight and who uses a stick to examine his surroundings—replies that the eye must be "an organ on which the air has the effect this stick has on my hand."22 When the philosopher asks him if he would not have liked to have had functioning eyes, the man answers, "I would just as soon have long arms. It seems to me my


hands would tell me more of what goes on in the moon than your eyes or your telescopes."23 Diderot concludes that the blind man conceives of vision on the basis of experiencing the world by touch—sight must seem to him as "a kind of touch which extends to distant objects."24 The Enlightenment philosopher is impressed by the man's ability to reason about a sensory faculty he has never possessed, but ultimately, Diderot asserts, the nature of sight is beyond his grasp—for instance, the man struggles to understand how a mirror image is not tactile. The congenitally blind man's dilemma is not that different, it seems to me, from that of an operator of a technovisual system. Both must attempt to infer the logic of a system of perception that is inaccessible to them on the basis of another system that is accessible to them. Must not the understanding of the operator in that situation, like that of the congenitally blind, be constrained by her own perceptual horizon? It seems to me that the human operator as a translator of the "language" of technovision is forced into a cul-de-sac: either she must anthropomorphize technovision or, alternatively, she must abandon human sight as a frame of reference altogether (if that is at all possible for her). Neither, I think, qualifies as "translation" in the common and etymological sense of the word—as "carrying across" a content of meaning over a language barrier. Is technovision, then, in the final analysis, a black box, impossible to penetrate? The question is a version of a millennia-old philosophical inquiry into vision. Since antiquity, philosophers have understood vision as a particular form of touch. The atomists Leucippus (fifth century BCE), Democritus (c. 460–370 BCE), and Epicurus (341–270 BCE) developed theories about vision according to which things are visible because they continuously secrete their outermost layer of atoms. These atom-skins, named eidola in Greek25 and simulacra in Latin,26 were believed to sail through the air into the eye of the beholder. Later materialist scholars built on the atomists' theory. For instance, the French physician Claude Quillet (1602–1661) pondered how the life-sized simulacra could enter through the small pupil—might the body, he speculated, in fact see with the pores of the skin rather than only through the eyes?27 Contemporary philosophers have borrowed the materialist argument to accentuate a certain intimacy in vision, especially in the viewing of pictures. Roland Barthes likens the traces of light inscribed in a photograph to an "umbilical cord" that connects, through time, the photographed object to the beholder's eye.28 Jean-Luc Nancy says that every image "is à fleur, or is a flower," because an image is something you very lightly skim against (effleure).29 In reference to , Nancy—here writing about the human sensorium as a whole—states that "touch [le toucher] is nothing other than the touch of sense altogether and of all the senses. It is their sensuality as such."30 Nancy's thought is, in turn, referenced by science and technology


scholar Sacha Loeve, who describes how a technology like scanning-force microscopy registers atoms through haptic rather than optic detection.31 These thinkers subsume vision under the broader category of sensing, and they define sensing as touch, because some form of physical connection between object and sensor must occur in every instance of perception. On the one hand, this paradigm allows technovision to be likened to human sight, since they both seem merely to belong to different sections of a continuum of sensing—for instance, radar detectors can register electromagnetic radiation at frequencies in the megahertz-to-gigahertz range while the human eye can perceive it in the 400–790 terahertz range, which is that of visible light. The very attribute of being visible is thus relativized, because an object may be detectable on one wavelength but not on another.32 On the other hand, the philosophers referenced above subsume vision under touch in the course of diverging trains of reasoning. The atomists searched for an objective account of vision based on a materialist perspective. Barthes and Nancy, in contrast, employ a haptic vocabulary to capture an element of subjective intimacy in looking. They write from a phenomenological position, according to which the embodiedness of perception is the ground of our existential being-in-the-world. We can conclude from this that the divide between human vision and machine vision is not something out "there" in the world. It is, as philosophers say, observer-dependent and exists only if we regard it on a certain philosophical "wavelength." What is it, then, more precisely, that drives a wedge in the continuum of sensing, separating human from machine perception? It is, I believe, first and foremost the imagination of the gaze, this phantasmagorical object that constitutes the skeleton of so much of our thinking about vision—not only in theory (it is prevalent in Barthes and Nancy) but also in our day-to-day interactions with and through sight. Indeed, I hesitate to refer to it as a "concept" because I doubt it is rooted in culture or thought; think, for instance, about how animals can seem to search out and react to our gaze. It goes deeper than reason. We may very well know that nothing charges forth from the eye in the act of looking, and that the eye is just a dark cavity that we turn toward the world and that the world fills with light; we may know this, but is it at all possible to live as if that were true? Is it possible, individually and collectively, to do without that imaginary bridge between inner and outer life? I see two trajectories here, depending on where you stand. On the one hand, an instrument like an eye scanner can map the minute crypts, furrows, and striations of the iris, but the gaze is undetectable to it because it belongs to a different ontology. If the human translator of technovision wishes truly to adopt the perspective of the machine, she must abandon the imagination of the gaze and come to regard her own faculty of sight as gazeless, as simply one more kind of electromagnetic detector. Then she can turn into a fully integrated component of the technovisual apparatus. She will have left behind


the anthropomorphization of the human that is expressed through the fantasy of the gaze. On the other hand, in war, technovision serves to identify human targets that are then “touched” very intensely by different kinds of lethal missile—from the point of view of those targets, a reinforced and fully incarnated version of the fantasy of the gaze is resurrected, as if it were seeking revenge over any doubts about its limitless power.

OLD WARS

The human primate stands upright now, swaying slightly, still unaccustomed to her new perspective. Before her, there are two horizons—that of knowledge and that of dreams. All the coordinates are in place for the next phase of her evolution. Something continues to push her upward. Now, her feet leave the earth. She starts to ascend through the air, looking downward on the receding ground below her. In January 1946, the German photographer Walter Hahn took the picture displayed here as figure 6.3 from the observation tower of the City Hall in

Figure 6.3 Walter Hahn, Dresden, view from the city hall tower on the ruined city center, 1945. Negative, 5 × 7 in. (13 × 18 cm). Courtesy of SLUB Dresden/Deutsche Fotothek.


Dresden, showing the city in ruins after the British and American bombings a year earlier, in February 1945. Before the bombing raid, Dresden was Germany’s seventh-largest city, with large industrial areas in the suburbs. Those areas were mostly spared, however, while the bombs and the ensuing firestorm consumed the densely populated city center, killing approximately 25,000. Imagine the airborne primate pausing in her ascent on the observation platform. The large city, once on the distant horizon, now lies at her feet as a giant pile of rubble. Its dreams have vanished from it, and its remains are held down by their own weight—the very condition that evolution is pushing the human creature beyond. The divide between horizons is no longer horizontal, no longer between the near and the far. Technologies of warfare have closed that distance, and widespread devastation can now be launched with a single strike from above. In that process, the divide has become vertical, between those above, who look down on those below, who take cover. Their respective horizons are diverging from each other rapidly. To the right in Hahn’s photograph, a draped female figure, frozen, beholds the field of destruction. She represents Güte, the virtue goodness, sculpted in sandstone by Dresden artist August Schreitmüller. Her compassionate expression looks empty and inadequate, as if the suffering that she is witnessing below has left her petrified and incapable of helping. Imagine now that the primate’s mind merges with hers, for a moment, and that they look out together through her eyes of stone—what do they see? Perhaps her eyes well up with tears so that the contours of the city blur like those in Gerhard Richter’s aerial perspective painting of Paris from 1968 (figure 6.4). Richter has remarked that the series of cityscapes to which this painting belongs reminds him of photographs of bombed-out Dresden.33 The city dissolves through the pastose brushstrokes into a world of paint, where every stroke represents something—a facade, a window, a roof—but where the motif nonetheless is flattened out in the pastose paint. It is like a strange chemical reaction that produces an image both unnaturally sharp and eerily blurry and forces the beholder’s perception to micro-oscillate between motif and paint, image and matter. These oscillations, in turn, trigger an incessant shifting between two drastically different viewing positions—one the aerial outlook over the city, the other the view of the paint on the surface. When the motif of the city emerges, the beholder is thrust upward. When it falls back into paint, she is pulled down close to the painting. The city motif is never allowed to manifest itself entirely, and the painting stubbornly remains a “paint-thing.” But the paint stuff is never allowed to rest because the image keeps stirring it from within. The restless character of this effect becomes more apparent when contrasted with the play of resemblances in Orozco’s photograph, Island within an Island (figure 6.2). As discussed earlier, in Orozco’s picture there is a play of resemblance between the distant skyline and


the pile of rubble in the foreground, which is made possible by the fact that both are present simultaneously in the beholder’s field of vision. That in itself instills in the photograph a certain calmness, in spite of the tension within the play of resemblance between identity and difference. Richter’s painting is more disruptive in that regard. The shifts between image and matter are more absolute in the sense that one aspect dispels the other completely—the emergence of the image dematerializes the paint, and the return of the paint erases the image, momentarily, across the entire pictorial field.

Figure 6.4 Gerhard Richter, Townscape Paris, 1968. Oil on canvas, 78.7 × 78.7 in. (200 × 200 cm). Froehlich Collection, Stuttgart. © Gerhard Richter 2019 (0045).


In hindsight, art historians have seen a connection between abstraction in post–World War II painting and the individual and collective traumas that war inflicted. In particular, the complete or partial abandonment of figuration for an accentuation of the materiality of the paint, in the work of Jean Fautrier, Jackson Pollock, and others, is perceived, from this perspective, as a token of a general breakdown of meaning—of the credibility of ideals and truths, of faith in society and in the human being—in the wake of genocide, the atomic bomb, and mass destruction. This is especially true of the Nazi genocide of European Jewry. American artist Robert Morris has remarked about the Holocaust: “It is from this charred source that all post-Enlightenment appeals to Truth and Reason become covered with ashes.”34 The term “ashes” is, of course, a direct reference to the crematoria in the death camps and to the meaning of the Greek holókaustos—a sacrifice completely consumed by fire. But the word also has a more general significance in this context. Ash is the result of violent decomposition and can be seen to represent a state in which distinctions have been erased, the endpoint of a fierce form of entropy. In this sense ash signifies a common denominator between the crematoria of the camps and the bombed-out cities, namely the collapse of networks of relationships between interconnected yet discrete entities—bodies,

Figure 6.5 Aerial view of the destruction after the atomic bomb over Hiroshima, August 1945. U.S. Navy, National Museum of Naval Aviation, Commander Francis N. Gilreath Collection.


buildings—into an inert state of indistinct, homogeneous matter. Such is the world after the old wars, the world Güte looks down at, petrified. Things are left “stranded,”35 washed up on the shore of materiality, where meaning can no longer catch on to them—but where the specter of them having once been “read” and understood still lingers. For a minute, the human primate rests at the Rathaus platform, taking in the scene of destroyed Dresden below. But evolution keeps pushing her upward, and soon her ascent continues. Now her feet lose touch with the platform. As she rises through the atmosphere, still wider fields of devastation—Auschwitz-Birkenau, the Gulag, Hiroshima—enter her horizon, but they diminish as she soars upward, now faster. The wind whistles in her ears, and she shuts her eyes.

THIRD BLINDNESS: LOSS OF PERSPECTIVE

The first level of blindness I discussed was captured in Ihde’s magnification-reduction formula, which states that technologies that extend our perceptual reach also reduce the quality of perception in some way or another. The second level of blindness consists in the difficulty of translating the “vision” of visual artificial intelligence into the terms of human perception—and the difficulty of seeing this difficulty. The third level of blindness is broader, because it is not a blindness only to the observed object (like the first blindness) or to the technovisual apparatus (the second blindness). Rather, it can be described as a blindness toward one’s perspective—“perspective” here meaning a fundamental delimitation of one’s visual field. “Blindness” is meant, of course, metaphorically. There are many forms of real blindness in the world, with various neurological, ophthalmological, and psychological causes. What I call the third level of blindness most closely resembles the form of real blindness known as the Anton–Babinski syndrome or visual anosognosia. A person with this rare diagnosis has lost the neurological basis of vision completely, often as a result of brain trauma. Because the parts of the brain that process visual stimuli no longer function, she has also lost all memories of ever having been able to see. Even the memory of vision is gone. Paradoxically, precisely because anosognosia is such a “complete” form of blindness, the afflicted person is sometimes unaware of the loss she has suffered and remains convinced she can still see and that her perceptual apparatus is intact. For this reason, neurologist Oliver Sacks describes the Anton–Babinski syndrome as a “blindness to the blindness.”36 This third level of blindness is related to the claims and aspirations about technovision being able to see all there is to see. A telling example of such an ambition is the so-called Global Information Grid, or GIG, which is being


planned and developed by the U.S. Department of Defense (DoD). The GIG is described as a planet-wide information infrastructure consisting of four layers at different altitudes: a terrestrial layer of ground-based operators, a tactical layer of low-altitude aircraft, a layer of high-altitude aircraft, and an outer, satellite-based layer with global reach. The DoD envisions the GIG as forming an “infosphere,” an all-encompassing information environment that will seamlessly integrate all forms of intelligence on all technical platforms, in war and in peacetime, thus guaranteeing uninterrupted “information superiority, decision superiority, and full-spectrum dominance.”37 The internal structure of the GIG is described by the DoD as net-centric. It thereby departs from the classical surveillance structure of Jeremy Bentham’s circular panopticon building, famously analyzed by Michel Foucault as the paradigm for modern disciplinary institutions.38 In a prison of panoptical design, what the DoD calls “information superiority” rests exclusively with the guard in the centrally positioned watchtower from which the surrounding prison cells are monitored—a structure corresponding to the composition of Bosch’s Seven Deadly Sins, discussed above. The cells are lined up along the circumference like so many Vitruvian visual pyramids, each of them offering a perspectival view for the guard’s gaze. Omnivoyance in the panopticon is thus incarnated in the figure of the guard in its center. The distributed, net-centric structure of the GIG, on the other hand, has no such central observer position. The system is intended to be accessible from any location, but no single user will have access to the system in its entirety. Therefore, omnivoyant power is not incarnated in the single figure of a central observer but is instead excarnated, made fleshless and abstract. For that reason, it can be all the more pervasive. Its might may manifest itself anywhere, at any time, in highly palpable ways, but its overall organization is invisible and intangible. In the panopticon, the asymmetric distribution of visual power is transparent to the guard and the prisoner alike, whereas in the GIG, the distribution of power is impenetrable from all sides. This also means that we again find that the anthropomorphic fantasy of the gaze is insufficient to represent the internal logic of technovision. The very idea of the gaze implies a beholder located at a specific point of observation from which her line of sight extends in a particular direction. Intrinsic to this idea is also that a perspective accompanies the orientation of the gaze, that is, a particular delimitation of its field of vision. Therefore, we can say that net-centric macrosystems of surveillance such as the GIG lack a perspective. They have abandoned the visual-spatial paradigm of the central perspective, the basis of the panopticon and the dominant model of vision in the West since the Renaissance. This is what philosopher Byung-Chul Han is referring to when he writes that digital networks constitute an “entirely new, aperspectival panopticon [that] no longer conducts surveillance from a central point.”39 This


first “loss of perspective,” when accompanied by claims of omnivoyance, implies in turn a loss of insight at another level—an erasure of the awareness that one’s field of vision is, after all, necessarily limited. In sum, to say that technovisual systems lack a perspective—from the Latin perspicere, “to see through”—is to say that those systems are impenetrable to the humans who are caught up in them, whether they belong among “the surveillants” or “the surveilled.” These very terms must now be bracketed, because when part of an impenetrable, omnivoyant system, a surveillant can never be certain she is not also being surveilled herself. We can therefore expect that, within an aperspectival surveillance structure, the psychic dimension of the panopticon—that the inmates internalize the gaze of the unseen guard and start to surveil their own behavior (for Bentham, the main advantage of the panopticon)—will diffuse into a general paranoiac frame of mind. On the basis of the above argument, we are justified in equating “aperspectival” with ahuman. I intend this term to be understood neither as a direct opposite to “human” (which I would instead term “non-” or “anti-” human) nor as a continuation of the human in a new form (the “posthuman”). The ahuman sits in between those two meanings. The prefix “a-” is here an agnostic sign, signaling that the relation between human perception and technovision is inherently unmappable, that the distance between them cannot be definitely established. The ahuman, understood in this sense, brings us again to the original association between the concept of omnivoyance and the divine. More than anything else, it is the image of God that offers itself as a substitute for the human figure and its failure to serve as a model for the conceptualization of technovision. It should be clear that I do not speak here of any particular religious confession. “God,” in this context, stands for an idea that aims to reconcile radical unknowability with faith. By investing faith in the unknowable power of the divine, the believer can transform her not-knowing into an experience of transcendent clarity. That leap appears to me as a model of the dynamic that drives the discourse of omnivoyance that surrounds technovision. The following story might clarify this further. In Chaim Potok’s novel The Gift of Asher Lev, the protagonist, the painter Asher Lev, is given an explanation by his father of the verse in Genesis that reads: “And God saw everything that He had made, and behold, it was very good.”40 According to this explanation, which Asher Lev’s father had been given in turn by his own father, God sees the world as perfect and complete because He never blinks—in contrast to man, who can only see “between the blinks of his eyes” and therefore must perceive the world “in pieces, in fragments.”41 His father’s point is that an artist must try to learn to see like God, to see during the blinks and to record what he calls the “betweennesses in the world.” In the present context, however, the story can be interpreted differently: it is


precisely because God observes the world uninterruptedly, without blinking, that His vision is marred by an even greater blindness than that which besets the blinking human with her fragmented vision. God’s vision lacks limitation; His omnivoyance is without “perspective,” and therefore He is blind to one thing only, but to that thing He is radically blind—namely to the human outlook and its limitedness. A similar characterization of God’s view as simultaneously all-seeing and blind to the human perspective is given, from another angle, by the philosopher John Searle in his book The Construction of Social Reality. Searle argues that, to a transcendent, infinitely elevated observer who looks down at the world, everything in the world must appear as a matter of “natural fact,” as expressing the world’s intrinsic, natural properties. For this reason, Searle pictures God as being unable to distinguish between things that humans classify as natural objects, like mountains, molecules, and so forth, and those artifacts and institutions that are created by humans. God could not even see a simple screwdriver (Searle’s example) as humans see it, that is, as a tool with a preordained function (or “status function” in Searle’s terminology). It would simply be an object that is handled in such and such a way by humans. He might discern its physical structure down to the subatomic level, but He would be blind to the status function of “screwdriver” that humans have ascribed to it.42 (The word “screwdriver” would, of course, also appear to Him as intrinsic to the world, that is, as a sound or mark that humans occasionally make or write.) In short, Searle’s God is a radical super-positivist who registers everything but misses everything at the same time—He sees all things in impeccable clarity but never sees them as bearers of observer-dependent meaning. Like that of sufferers of Anton–Babinski syndrome, His blindness is absolute, and therefore His perceptual field must appear complete to Him. To Him, everything seems exposed, without any blind spots or lacunas where secrets might hide.

GLOBAL VIEWS

The human primate soars upward, but suddenly there is only silence. No wind, no gravity, no pull. She opens her eyes. In front of her, planet earth floats luminous and still in black space. The creature who once dwelled on the ground, sniffing and touching the soil, now encompasses the entire globe with her gaze. In this moment, when she can see everything, her ascent has stopped. This is also the point at which her evolutionary myth must come to an end, not only because she has reached an omnivoyant vantage point but also because here, beyond the earth’s gravitational pull, there can be no “up” or “down”—these concepts have lost their meaning. While the primate fought gravity, she seemed to rise upward, but outside the planet’s gravitational pull,


that perception is revealed as “subjective”—from her new observation point, there never was an actual upward trajectory. The narrative about her ascent breaks down. From here, what does she see? Let us compare two pictures. Figure 6.6 shows the first picture ever published of the whole earth as seen from space. It was captured with a color scanner on board NASA’s Applications Technology Satellite 3 (ATS-3) in November 1967, and it was printed the following year in the photo book Exploring Space with a Camera.43 Figure 6.7 shows NASA’s now iconic “Blue Marble” photograph of the earth, taken five years later, in December 1972, with an ordinary camera from Apollo 17, the last manned expedition to the moon. What can the differences between these two photographic variants of the same motif tell us? I would say that they speak of an aesthetic ideal behind which there is at work a desire for a particular kind of “view”—a totalizing, harmonizing gaze that can hold within its compass “everything,” in a unified world image beyond friction and doubt. The differences may seem small, but they are univocal. In the 1967 picture, the seas are almost as dark as the surrounding black space, making the planet look somewhat ravaged or torn. In the Blue Marble image, the motif has been aesthetically refined so that the color hues within the contours of the planet blend more harmoniously and the contrasts are subdued, while the contrast between the planet as a whole and the surrounding space is accentuated. As a result, the earth stands out more strongly as a singular, more coherent form against the emptiness beyond it. At the same time, the clarity and detail are greater. We should not regard this aesthetic difference only as the consequence of the superior image quality of Apollo 17’s camera over that of the ATS-3’s on-board scanner. It also carries meaning; it is a reflection of an ideal of perfected vision and a desire for a “total view.” In the Blue Marble picture, the earth is portrayed as if beheld by a gaze that does not waver, does not doubt, indeed, does not blink. It rests on its object firmly, with a commanding calmness that holds it in place. Let us also revisit the question of the horizon, this token in the image of the presence and position of its beholder. In Orozco’s Manhattan photograph, the height of the line that divides the ground from the sky signals that the camera was positioned above the ground at ordinary eye level. In Hahn’s Dresden picture, taken from the higher vantage point of the City Hall’s observation platform, the horizon is higher up—in fact, outside the upper edge of the picture. The question is, is there a horizon in the earth image at all? The answer is somewhat tricky: the line between ground and sky is now circular and coincides with the very contours of the planet. It is no longer positioned at any specific height in the picture field, and hence it gives no indication of the position of the beholder in that regard, other than that she is “very distant.” The horizon has thereby lost perhaps its most important role


as a pictorial device, namely, to be a marker of the spatial relation between the beholder and the depicted scene and of the limitation to the beholder’s field of vision—in short, a marker of a perspective. Instead, the horizon here seems to encompass everything there is to see; only nothingness, empty black space, exists beyond it. From a purely technical viewpoint, of course, it is not true that observer distance does not affect the visual field. For instance, in space pictures taken closer to earth, a continent in the middle will appear larger than in pictures taken from further away. We also know, obviously, that the pictures only show us one hemisphere and that the planet’s other side is hidden from us. But that is precisely why the motif of the earth from space is so suitable for illustrating what I call the third blindness—its visual rhetoric asserts to the beholder, reassuringly, in spite of any rational objection,

Figure 6.6 NASA, first color photograph of the whole earth, shot from the ATS-3 satellite, November 10, 1967. Wikimedia Commons.


Figure 6.7 NASA, the “Blue Marble” photograph of the earth, shot from the Apollo 17 spacecraft, December 7, 1972. Wikimedia Commons.

“You see all; nothing is left out.” That is why the Blue Marble image looks so reassuringly absolute and complete, as if all doubt—the prerequisite for interpretation—has vanished from it. The myth of the human ascent ends in this image. The primate has reached the point at which she can visually encompass the world, which now rests in calm, intoxicating clarity before her. The meanings of proximity and distance have dissolved, because she is neither near nor far from anything.

METAPHORS

We struggle here with metaphors, especially those of distance. I shall conclude by bringing together two thinkers, Hannah Arendt and Byung-Chul


Han, who in different ways have reasoned about blindness—or, at least, about matters that are affined with what I call the third level of blindness—through metaphors of distance. In the texts to which I shall refer, neither of them writes specifically about war, but they diagnose—with accuracy, I think—broader tendencies in our time that bear on my argument. In her 1963 essay “The Conquest of Space and the Stature of Man,” Hannah Arendt reflects on the widening divide between the descriptions of reality that advanced technoscience produces and the world that the human mind and senses inhabit. Scientific instruments today, Arendt writes, collect data that do not represent phenomena, appearances, because they are derived from a world that does not “appear.” It is a world that makes itself known only through the effects it has on measuring instruments. Data turn up, she says, quoting Max Planck, “like ‘mysterious messenger[s] from the real world.’ ”44 These messengers make it known that there exists a physical yet invisible world that is more real than that of human experience. Science is driven, Arendt continues, by a desire to conquer an ideal point of observation from which it can behold this hidden, real world—a position she refers to as the “Archimedean point.” She likens it to a position “outside the earth from which it would be possible to move, to unhinge, as it were, the planet itself.”45 Arendt wrote these words during the space race between the Soviet Union and the United States but several years before the release of the first earth photograph and the first manned expedition to the moon. Her phrase “outside the earth” does not, however, refer to an actual position in cosmic space or to any real, spatial position at all. The “point” is metaphorical—it stands for the measuring of reality through instruments, which applies, Arendt remarks, to “the infinitely small no less than the infinitely large.”46 What does the scientist see from the Archimedean point? Arendt describes her gaze in terms that resemble Searle’s account of a God’s eye view. The scientist’s telescopes and microscopes allow her to probe the farthest and minutest aspects of reality, but her instruments are blind to the world as it is experienced by humans—perceived by the senses, understood by the mind, and expressed through language. From the Archimedean point, we see reality—including ourselves—outside a human perspective:

[It means] a radical elimination of all anthropomorphic elements and principles, as they arise either from the world given to the five human senses or from the categories inherent in the human mind. . . . If we look down from this point upon what is going on on earth and upon the various activities of men, that is, if we apply the Archimedean point to ourselves, then these activities will indeed appear to ourselves as no more than “overt behavior,” which we can study with the same methods we use to study the behavior of rats.47


To equate human activities with the “behavior of rats” is not to degrade the human, because anthropocentric hierarchies belong to what has been purged from the Archimedean observer’s viewpoint. The critical term is “behavior,” which simply means any movement that is observable from the outside. All that is observed, then, will appear as intrinsic features of reality, as natural facts about the world. It is not that the Archimedean observer is blind to some things but not to others; like Searle’s God, she is entirely blind to a particular dimension of all things, namely the dimension of human-created meaning. She may very well detect human expressions of meaning, but these expressions will appear to her as mere “things,” that is, as overt behavior. Arendt expands on this, citing physicist and Nobel laureate Werner Heisenberg’s 1958 book The Physicist’s Conception of Nature:

Seen from a sufficient distance, the cars in which we travel and which we know we built ourselves will look as though they were, as Heisenberg once put it, “as inescapable a part of ourselves as the snail’s shell is to its occupant.” All our pride in what we can do will disappear into some kind of mutation of the human race; the whole of technology, seen from this point, in fact no longer appears “as the result of a conscious human effort to extend man’s material powers, but rather as a large-scale biological process.” Under these circumstances, speech and everyday language would indeed be no longer a meaningful utterance that transcends behavior even if it only expresses it, and it would much better be replaced by the extreme and in itself meaningless formalism of mathematical signs.48

Arendt’s term for the idealized observer position of science refers to Archimedes’ discovery of the principle of leverage, according to which our power over things increases with our distance from them.49 Influenced by the space race, she likens the Archimedean point to a remote position in cosmic space from which the scientist can look “down” on earth. It should be noted, however, that Arendt acknowledges that if humans actually reach such a point, thereby claiming mastery of the planet, that position would still remain relative to the earth, and the scientific observer would therefore need a new Archimedean point beyond that one, and then another, leading to an infinite regress. Arendt concludes, “The only true Archimedean point would be the absolute void beyond the universe.”50 Her metaphorics of distance is thereby taken to its logical conclusion—the Archimedean point is always posited further away. Byung-Chul Han, writing some fifty years after Arendt, employs an opposite metaphorics of distance when he comments on a contemporary society perfused with digital networks, overloaded with information, and obsessed with producing and consuming data. He ascribes several names to it—the Society of Transparency, the Society of Positivity, the Society of Control,


and other terms, all of which signal an obsession with visibility and exposure. Late capitalism is the motor of this society. For things to slide into the global circuits of information and money, Han writes, they must be purged of all their inherent negativity. “Matters prove transparent when they shed all negativity, when they are smoothed out and leveled, when they do not resist being integrated into smooth streams of capital, communication and information. . . . Actions prove transparent when they are made operational—subordinate to a calculable, steerable, and controllable process.”51 Han sees a form of blindness spreading under the reign of transparency. It consists in the loss of a capacity to linger with negativity, with the secrets and the depths of existence—a capacity Han regards as the foundation of theoretical reflection and aesthetic contemplation. Transparency forces things and beings out into exposure and transforms them into commodities for consumption, measurement, or destruction, Han claims. Their nature, their otherness, does not “vanish in the dark, but through overexposure.”52 In a chilling and somewhat apocalyptic formulation, Han defines the Society of Transparency as “an inferno of the same.”53 Thus, like Arendt, he describes a form of omnivoyance—a desire and a claim to see everything—paired with an absolute and inescapable blindness. But whereas Arendt links the vision-blindness syndrome to an idealized observation point that is utterly remote, Han reverses the metaphor of distance. To him, the Society of Transparency is characterized by a complete collapse of distance, which, however, should not be understood as proximity.

The society of transparency views all distance as negativity to be eliminated. Distance represents an obstacle to the acceleration of the flows of communication and capital. . . . Lack of distance is not proximity. If anything, it destroys it. Proximity is rich in space, where distancelessness annihilates space. . . . Transparency re-moves [ent-fernt] everything into uniform de-distantiation that stands neither near nor far.54

“Neither near nor far”—those words describe a collapse of relational space, the empty interval between beholder and object, and between objects, that simultaneously keeps them connected and holds them apart. It is this in-between space that I have constantly returned to in this essay, as a space for negotiations between matter and meaning, or paint and motif, for doubt and interpretation, a gap where things neither merely are nor completely mean, and where, therefore, an imaginary human gaze can take flight and see. In its collapse, Arendt’s and Han’s opposed metaphors of distance meet. The former’s Archimedean point, as ultimately “the absolute void beyond the universe,” and the latter’s concept of “de-distantiation” are both expressions of a collapse of relationality between the observer and the observed or, to return to an oft-used term in this essay, of the elimination of “perspective.” They meet


on a plane of leveled, flat visibility. Arendt insists that, from the Archimedean point, everything human appears as “overt behavior.” Han asserts that the “compulsion for transparency flattens out the human being itself.”55 They both describe a condition in which the human being is ex-plained—mapped entirely as information and, in that process, emptied of depth, leveled out.56 Another tricky metaphor here is that of the single beholder. Arendt’s Archimedean scientist and, in the present essay, the “human primate” and the “drone pilot” are anthropomorphic idealizations, prisms of a kind, construed so as to bring into focus some larger systemic and abstract issues about technovision. I have expressed skepticism about the adequacy of anthropomorphic metaphors to serve that purpose, and yet the little evolutionary myth I have presented about the “human ascent” from primate to pilot is nothing if not an affirmation of that metaphor. As mentioned earlier, the rationale for using that figure is that the drone pilot seems to embody a particular problematics of distance—is she inside or outside the theater of war? Is her power to kill across large distances unethical? Of course, many forms of weaponry, from the longbow to the machine gun, have initially been deemed immoral because of their range.57 But the range of those earlier weapons systems is nonetheless limited. They are employed over distances that are measurable and therefore, in an elemental sense, relational. In the case of the drone pilot, perhaps it is the sheer intangibility of the electronic-visual circuitry that links her to her target that is the source of her “blurriness.” The crucial point is, then, not that she sits far away from her target but that she is, again, “neither near nor far”—that distance is inconsequential. She has been accused of demonstrating a “PlayStation mentality,” pathologically devoid of empathy, as if killing is like a video game to her.58 And yet she appears to suffer from post-traumatic stress disorders to the same degree as soldiers who have been in war zones.59 Grégoire Chamayou therefore advises us to distinguish between different kinds of distance. The drone pilot, he argues, might be geographically remote from her targets, but she may nonetheless feel psychologically close, because optical telecommunications allow—or force—her to see and empathize with them.60 At this point, however, I shall abandon the issue of what and how the drone pilot sees and return to asking what we see in the image of the drone pilot. Let me approach that question via a final juxtaposition of pictures—two pictures that, of those discussed in this essay, may seem to be the most remote from each other. One is a painting that in itself presents an image of human vision much like that which I have opposed to the notion of omnivoyance. In Caspar David Friedrich’s Wanderer above the Sea of Fog, from around 1817, today at Hamburg Kunsthalle, a wanderer resting on an alpine height looks out over a mountainous landscape draped in mist (figure 6.8). Leo Koerner has remarked that the painting itself seems to radiate from the eye and heart of


this beholder, who, his back turned toward us, remains anonymous and faceless.61 The figure, called the Rückenfigur, is a recurring trope in Friedrich’s art, a beholder in the image who serves as a placeholder for us, the viewers of the painting, mediating our gaze upon the vastness of the depicted landscape. Through the Rückenfigur, the painted world appears to us as seen, indeed as

Figure 6.8 Caspar David Friedrich, Wanderer above the Sea of Fog, ca. 1817. Oil on canvas, 37.3 × 29.4 in. (94.8 × 74.8 cm). Kunsthalle Hamburg, Hamburg.


existing for the very purpose of being beheld. Koerner rightly recognizes here an almost programmatic unfoldment of Kantian aesthetics, in which the sublimity of nature originates not from nature itself but from the viewing subject and is afforded to nature from within that subjective interiority.62 And because the depicted world in this sense emerges only with the beholder’s gaze, the terrain beheld is also her own self—but a self that is never fully given over to her. In this lies the paradox of the Rückenfigur. The figure stands before us, and for us, but its presence remains oddly empty, like a marker of absence. Turned away, the figure mediates our gaze but also blocks our line of sight, a blind spot in the image, hiding from us the very instrument of vision, the eyes. Let us move from Friedrich’s painting to a photograph by Gilles Mingasson of two drone operators sitting before a bank of monitors—which has quickly become the standard way of depicting that occupational group (figure 6.9). Could a shift be more drastic than from the mountainous landscape to the confined space of the control station, crammed with instruments and with a wall of screens towering over the seated operators? And yet, with her back turned toward us, and facing an image of the world unfolding in the shape of maps, drone footage, and other data constantly updating on the monitors

Figure 6.9 Gilles Mingasson, A drone pilot and a drone sensor operator practice on a simulator at Holloman Air Force Base in New Mexico, 2012. Courtesy of Gilles Mingasson.


in front of her, the drone pilot constitutes a Rückenfigur of sorts, a negative screen for the projection of new models of sight in the era of technovision. Both these idealized observers take in a world that is composed of fragments. Friedrich’s wanderer can only infer the landscape’s topography from the few peaks and slopes that emerge from the mist, disconnected from each other. At the control station, the drone pilot can gain situational awareness only by stitching together the various forms of information that are scattered across a constellation of separate screens. To neither of them is a unified world simply given. If that constitutes their common denominator, let me also point to one principal difference between their situations. In Friedrich’s painting, the wanderer’s gaze pierces through the rising mist in a way that is aligned and continuous with the landscape’s horizontal flight toward the horizon. A distance thus opens up for the gaze to traverse, which sets in motion a dynamics that can be described in abstract terms as a series of “phases” leading into one another (figure 6.10).

Figure 6.10 The “phases” of a perspectival gaze. Max Liljefors.

The flight of the drone pilot’s gaze, on the other hand, is cut off by the vertical surface of the screens, onto which a continuous flow of data from the technovisual system is projected. The perpendicular, technovisual gaze thus eliminates the horizon from the visual field and replaces it with a wall of screens. What the photograph shows, then, is the collapse of relational distance that Han refers to as “de-distantiation,” the condition of standing neither near nor far. Human and machine vision are here portrayed as discontinuous. We can formulate this discontinuity in an alternative way. In Friedrich’s painting, Koerner notes, the “painted world turns inward on the beholder.”63 The landscape gazes back into the depths of the one who beholds it, depths that stand in a homologous relation to the landscape’s own vastness. In the photograph, by contrast, the flat screens stare back at the drone pilot, and in that process ex-plain her, flatten her out, into one more channel for the circulation of data. As Han formulates it, she is turned into “a functional element within a system.”64 But as we fix our gaze on the drone operators in the photograph, we must ask ourselves: Have they really become an integral part of the technovisual system? Is their scope of vision so entirely taken over by the apparatus that no distance remains between the system and its operators?


Or is the discontinuity of the human and the machine gaze itself a generator of such distance? If the latter, we might hope for the growth of a critical skepticism toward the myth of a human ascent to absolute, omnivoyant power, even if the operator is hooked up to technovisual systems that enlace the globe. But if the former is true, must we not imagine the metaphoric figure of the drone pilot to still dwell inside that myth, unaware that she has never left the ground?

NOTES

1. Paul Virilio, The Vision Machine, trans. Julie Rose (Bloomington: Indiana University Press, 1994), 32. 2. Paul Virilio, Desert Screen: War at the Speed of Light, trans. Michael Degener (London: Bloomsbury Academic, 2005), 46. 3. Nikolaos Mesarites, “Description of the Church of the Holy Apostles at Constantinople,” trans. Glanville Downey, Transactions of the American Philosophical Society 47, no. 6 (1957): 870. 4. Joseph Leo Koerner, “Hieronymus Bosch’s World Picture,” in Picturing Science, Producing Art, ed. Caroline A. Jones and Peter Galison (Abingdon-on-Thames: Routledge, 1998), 317. 5. Ibid. 6. International Human Rights and Conflict Resolution Clinic (Stanford Law School) and Global Justice Clinic (NYU School of Law), Living under Drones: Death, Injury and Trauma to Civilians from US Drone Practices in Pakistan, September 25, 2012, available at https://law.stanford.edu/publications/living-under-drones-death-injury-and-trauma-to-civilians-from-us-drone-practices-in-pakistan/; Amin Parsa, “Knowing and Seeing the Combatant: War, Counterinsurgency and Targeting in International Law” (PhD diss., Lund University, 2017), 200–10. 7. Jefferson Morris, “UAV Battlelab Experiments with Feature Recognition Software,” Aerospace Daily & Defense Report, April 13, 2004, http://aviationweek.com/awin/uav-battlelab-experiments-feature-recognition-software. 8. Steven Trimble, “Sierra Nevada Fields ARGUS-IS Upgrade to Gorgon Stare Pod,” FlightGlobal, July 2, 2014, www.flightglobal.com/news/articles/sierra-nevada-fields-argus-is-upgrade-to-gorgon-stare-400978/. 9. Parsa, “Knowing and Seeing,” 20. 10. David Lippincott, “UAV Data Imaging Solutions Push Limits of Embedded Technologies,” Journal of Military Electronics & Computing (April 2016): 18–21. 11. Don Ihde, Expanding Hermeneutics: Visualism in Science (Evanston, IL: Northwestern University Press, 1998), 46–74. 12. Vilém Flusser, Into the Universe of Technical Images, trans. Nancy Ann Roth (Minneapolis: University of Minnesota Press, 2011), 16. 13. Virilio, Vision Machine, 73. 14. Ibid., 59–60.


15. See, for example, Rosi Braidotti, The Posthuman (Cambridge: Polity Press, 2013); Andy Miah, “Posthumanism: A Critical History,” in Medical Enhancements & Posthumanity, ed. Bert Gordijn and Ruth Chadwick (Berlin: Springer, 2009), 71–94; N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago, IL: University of Chicago Press, 1999). 16. Jaap Panksepp, “The Periconscious Substrates of Consciousness: Affective States and the Evolutionary Origins of the Self,” Journal of Consciousness Studies 5, nos. 5–6 (1998): 566–82; René Rosfort, “Ambivalent Embodiment,” in The Atomized Body: The Cultural Lives of Genes, Stem Cells and Neurons, ed. Max Liljefors, Susanne Lundin, and Andrea Wiszmeg (Lund: Nordic Academic Press, 2012), 83–111. 17. Charles Darwin, The Descent of Man, and Selection in Relation to Sex (New York: D. Appleton, 1871), 135–42. 18. Sigmund Freud, Civilization and Its Discontents, trans. James Strachey (New York: W. W. Norton, 1962), 46–47n1. 19. Yeganeh Torbati, “Pentagon Creates Award for US Drone Pilots, Cyber Warriors,” Reuters, January 7, 2016, http://www.reuters.com/article/us-usa-military-awards-idUSKBN0UL2MN20160107. 20. H. L. H. de Penning, et al., “A Neural-Symbolic Agent with a Mind’s Eye,” Neuro-Symbolic Learning and Reasoning Technical Report WS-12-11, Association for the Advancement of Artificial Intelligence, 2012, 9. 21. Noll, this volume, 88. 22. Denis Diderot, “Letter on the Blind for the Use of Those Who See,” in Diderot’s Early Philosophical Works, trans. Margaret Jourdain (London: Open Court, 1916), 73. 23. Ibid., 77. 24. Ibid., 72. 25. Epicurus, The Extant Remains, trans. Cyril Bailey (Oxford: Clarendon Press, 1926), 18–55. 26. Lucretius, De rerum natura, trans. William Ellery Leonard (Boston, MA: E. P. Dutton, 1916), 4.110–142. 27. Steven Connor, The Book of Skin (London: Reaktion, 2003), 111. 28. Roland Barthes, Camera Lucida: Reflections on Photography, trans. Richard Howard (New York: Hill & Wang, 1982), 110. 29. Jean-Luc Nancy, The Ground of the Image, trans. Jeff Fort (New York: Fordham University Press, 2005), 3–4. 30. Jean-Luc Nancy, The Muses, trans. Peggy Kamuf (Palo Alto, CA: Stanford University Press, 1996), 17. 31. Sacha Loeve, “Sensible Atoms: A Techno-Aesthetic Approach to Representation,” Nanoethics 5, no. 2 (2011): 212n17. 32. We might recall Soviet admiral Sergei Gorshkov, in 1973, prophesying after having witnessed the Israeli military outmaneuvering Syrian missiles with jamming technology during the Arab–Israeli War: “The next war will be won by the side that best exploits the electromagnetic spectrum.” When the U.S. Army released its first field manual for Cyber Electromagnetic Activities in 2014, commentators noted that


the Pentagon agreed at last with Gorshkov’s forty-year-old prediction. U.S. Army Headquarters, Field Manual 3–38: Cyber Electromagnetic Activities, February 12, 2014, available at www.fas.org/irp/doddir/army/fm3-38.pdf; Homeland Security News Wire, “US Army Releases First Field Manual for War in the Electromagnetic Spectrum,” March 6, 2014, www.homelandsecuritynewswire.com/dr20140306-u-s-army-releases-first-field-manual-for-war-in-the-electromagnetic-spectrum. 33. Gerhard Richter, Gerhard Richter: Text. Writings, Interviews and Letters 1961–2007 (London: Thames & Hudson, 2009), 262. 34. Robert Morris, “Three Folds in the Fabric and Four Autobiographical Asides as Allegories (or Interruptions),” Art in America 77 (1989): 150. 35. Eric L. Santner, Stranded Objects: Mourning, Memory, and Film in Postwar Germany (Ithaca, NY: Cornell University Press, 1990). 36. Oliver Sacks, The Man Who Mistook His Wife for a Hat (London: Picador, 1986), 39. 37. See Manabrata Guha, Reimagining War in the 21st Century: From Clausewitz to Network-Centric Warfare (Abingdon-on-Thames: Routledge, 2010), 178n59. 38. Michel Foucault, Discipline and Punish: The Birth of the Prison, trans. Alan Sheridan (New York: Pantheon, 1977), 195–228. 39. Byung-Chul Han, The Transparency Society, trans. Erik Butler (Palo Alto, CA: Stanford University Press, 2015), 45. 40. Gen. 1:31 (English Standard Version). 41. Chaim Potok, The Gift of Asher Lev (London: Penguin, 1990), 100. 42. John R. Searle, The Construction of Social Reality (New York: Free Press, 1995), 12; Max Liljefors, “In between the Human and the Animal: Subjectivity and Authority in Ann-Sofi Sidén’s Queen of Mud Project,” Journal of Art History 79, no. 4 (2010): 185–99; Max Liljefors, “Neuronal Fantasies: Reading Neuroscience with Schreber,” in The Atomized Body: The Cultural Lives of Genes, Stem Cells and Neurons, ed. Max Liljefors, Susanne Lundin, and Andrea Wiszmeg (Lund: Nordic Academic Press, 2012), 143–70. 43. Edgar M. Cortright, Exploring Space with a Camera (Washington, DC: Office of Technology Utilization, National Aeronautics and Space Administration, 1968). 44. Hannah Arendt, “The Conquest of Space and the Stature of Man,” in Hannah Arendt, Between Past and Future: Eight Exercises in Political Thought (London: Penguin, 2006), 261. 45. Ibid., 272. 46. Ibid., 261. 47. Ibid., 260, 273–74. 48. Ibid., 274. 49. Hannah Arendt, “The Archimedean Point,” lecture at the College of Engineers, University of Michigan, 1968, in The Hannah Arendt Papers at the Library of Congress (Series: Speeches and Writings File, 1923–1975, n.d.), 2. 50. Arendt, “Conquest of Space,” 272. 51. Han, Transparency Society, 1. 52. Ibid., 11.


53. Ibid., 2. 54. Ibid., 13–14. 55. Ibid., 2–3. 56. David Summers, “Real Metaphor: Towards a Redefinition of the ‘Conceptual’ Image,” in Visual Theory, ed. Norman Bryson, Michael Ann Holly, and Keith Moxey (Cambridge: Polity Press, 1990), 254. 57. Rosa Brooks, “Drones and Cognitive Dissonance,” in Drone Wars: Transforming Conflict, Law, and Policy, ed. Peter L. Bergen and Daniel Rothenberg (Cambridge: Cambridge University Press, 2014), 233. 58. Chris Cole, Mary Dobbing, and Amy Hailwood, Convenient Killing: Armed Drones and the “Playstation” Mentality (Oxford: Fellowship of Reconciliation, 2010). 59. Joseph L. Campo, “Distance in War: The Experience of MQ1 and MQ9 Aircrew,” Air & Space Power Journal, 2015, available at http://www.au.af.mil/au/afri/aspj/apjinternational/apj-s/2015/2015-3/2015_3_03_campo_s_eng.pdf. 60. Grégoire Chamayou, A Theory of the Drone, trans. Janet Lloyd (London: New Press, 2015), 114–24. 61. Joseph Leo Koerner, Caspar David Friedrich and the Subject of Landscape (London: Reaktion, 2009), 213. 62. Ibid., 212. 63. Ibid., 213. 64. Han, Transparency Society, 2–3.

Chapter 7
Of the Pointless View: From the Ecotechnology to the Echotheology of Omnivoyant War
Allen Feldman

OMNIVOYANCE: THE LAW OF LAWS

Max Liljefors’s “Omnivoyance and Blindness” maps the “emergence of a world of war,” through an erudite navigation of art-historical, theological, and techno-polemological fields of inquiry, representation, and practice. He excavates emergent planetary spheres of warfare that have been filtered through the occlusions, insensibility, and impunity of the apparatuses and ideologeme of omnivoyance. The prefix “omni-” refers to “in all ways or places,” or “of all things,” and the French voyant, “seeing,” is from the Latin videre, “to see,” and indirectly refers to the derangement of the senses of the omnivoyeur. The Latin etymon of omnus (all) is omnīnō—the act of making, to produce in abundance, and to possess. Omniparens—the bringing forth of forms—links to emergere, the etymon of emergence and emergency, which in the seventeenth century described a process of coming forth, issuing from concealment, obscurity, and having buoyancy. Emergence and emergency index an anamorphic form arising from murky origins as a parodic simulation of world creation. In early modern optics, emergence described a heavenly body after occultation or an eclipse. The apparitional arc from emergence to emergency, from concealment to anamorphic creation/destruction, marks the military entanglement with omnivoyance. Rather than revisiting Liljefors’s powerful intervention, I will follow selected lines of flight suggested by his multiplex analysis. I consequently engage omnivoyant war through four interlocking schemas: (1) a phenomenology of omnivoyant algorithmic topology; (2) the desubjectivation of vision and violence; (3) the optical apophatics of Nicholas de Cusa; and (4) the pastoral governmentality of Michel Foucault. Under the military rubric of “the operational preparation of a battle-space,” a new organology of war



has been precipitated by omnivoyant technologies and ideologies exemplified by autonomous weapon systems, meta-dataveillance, facial recognition programs, n-gram modeling, distributed querying across cloud databases, and algorithmic “disposition (targeting) matrices” extracted from human intelligence (HUMINT) and signal intelligence (SIGINT). This algorithmic “eye” advances the specter of a computable and thus totalizable world. Computational ordinance assembles “a world” and presents itself as a nonconstructed and ubiquitous global becoming in converging with the world target without lacunae through “real-time” archivization. Omnivoyant warfare has become a seminal mediology of globalization through its vectoring of terrestrial topologies as actionable sites of data and body capture. Targeting bodies through the signature of “pattern of life” metadata has evolved into warring on forms of life, a shift from the (presumed political behavior) to the inferential ontology of essentializing metadata. However, omnivoyant globalization is a scopic drive that falls short of world totalization, though not self-totalization, through its didactic presentation as a regime of truth that claims to secure ultimates—life, death, and security itself. Omnivoyance becomes its own recursive historical object and telos as opposed to the world it claims to encapsulate and reorder as its supportive scaffolding. In effect, omnivoyant warfare is the tactical mastering of the differentia of the world through the latter’s optical compression to and vectoring by commensurable topological profiles. “Topology is concerned with how bodies, discourses, and spaces are to be organized and related, and with the political connectivity arising from these arrangements.”1 Topologization is the demarcation of epistemic and actionable sites and nonactionable terrains. It both surrogates and enframes the earth through a superimposed disciplinary grid of datascapes. Omnivoyance invests in the constitution and archivization of topologies as vehicles in miniature of world monstration. As an engine of globalization, topologization is concerned with “maximization,” as the “constitution of fantasms to subsume particulars.”2 Here I follow the philosopher Reiner Schürmann, who associates epistemic claims to maximality, such as those made by omnivoyant warfare, with hyperbolic reason, wherein it “(becomes) necessary to train reason in finitude, a task to be taken up ever anew.”3 However, the conundrum of submitting military omnivoyance to an analytic of finitude is that it has instituted itself ex nihilo as “the law of laws” that governs the phenomenality and destruction of risk, threat, and political deviance. Omnivoyance presents as an ontologizing political theology that accords itself an immunizing trans-ascendance and phronesis through technological determinism, scopic power, and arcana mathematica Imperii (the mathematical secrets of power) as the raison d’état.4 A media archeology of omnivoyance suggests its genesis


and consequences are irreducible to its technological instantiation, thereby pointing to finite analytic and historical sites beyond its totalizing claims and closures. However, an aporia remains: how are we to master the thinking of omnivoyance as the law of laws in all its technical and ideological forms when it appears to have already mastered us?

The law of laws is fantasmic because it does not yield to any prehension and evades comprehension; we cannot possess it. To assign it a place would already be to make it appear before the law, and we all know that all legal appearances are made before it, not the other way around. It does not appear. Rather, it makes things appear—it makes phenomena out of beings.5

The computational conversion of beings into phenomena infuses the drone's omnivoyant dwell time over "the world" that implants itself as if to make the densities of the world disappear within itself. As Jean-Luc Nancy writes of the political economy of globalization, "Everything takes place as if the world affected and permeated itself with a death drive that soon would have nothing else to destroy than the world itself."6 Omnivoyant war targets an immundus. "The adjective mundus . . . means proper, clean, elegant . . . by opposition with immundus: immonde, dirty, impure, foul, abject."7 Omnivoyance, in its drive to sterilize, is a political theodicy of datascapes, such as the "terrorist" immundus that is both constituted and targeted by omnivoyant technology. It is a simulacrum of world creation through the nonsynonymous substitution of creation by a cartographic will to topologize. This is the globalizing drive to govern the sense of the world and all avenues of access to it. Computational machines reassemble material and temporal discontinuities into recursive, self-referential constructs through pattern recognition and anomaly detection. Paradoxically, the topological grid that mirrors omnivoyant power hegemonizes space through deterritorializing datafication. Algorithmic cartography anonymizes what was once site-specific by transforming in situ sectorial codes into detachable and transposable data packets and profiles freed from territorial tractions. As a geopolitics, topologizing omnivoyance aims at making a uni-verse, which etymologically and politically implies "to be turned toward the one." As a drive to globalize, asymmetrical war projects a total geopolitical equivalential symmetry that presupposes infinite cartographic translatability from one locus to another. Omnivoyant "real-time" computation propels a maximalizing im-mediate topographic commensuration in its vectoring of the terrestrial surface into the immundus of battle sites unhindered by interposing delays, filters, and the occlusions of duration that were once called mediation and historicity. The trope of world emergence, in tandem with omnivoyant ecotechnology, tacitly presupposes the coming into view of a terra incognita to be
explored and mapped and not solely as a "target of opportunity" for polemological colonization. A terra incognita of war implicitly resists any omnivoyant truth claims about war as proffered by the engineers and ideologues of omnivoyance. A terra incognita is not yet an enclosure submissible to omnivoyance if the latter implies exhaustive knowability and unhindered scopic/auditory/informatic access. For Nancy such globalizing programs are liable to the inaccessible, and by its own self-definition omnivoyance is condemned to and defined by the impenetrable. With its investment in total access, omnivoyance is also a withdrawal from the excess of sense—from the inaccessibility of what is jettisoned from its topological enclosures, such as collateral damage.

A POINTLESS VIEW

Omnivoyance, in becoming omnus, cannot be theorized as a positional gaze; it does not emanate from a singular point of view. Its gaze would not be analogous to an expanding cone, widening outward from a narrowed aperture associated with the point of view of the perspectival subject. Omnivoyance, irrespective of its targeting function, is a pointless gaze sundered from human seeing as a contractive aperture. Omnivoyant warfare is a desubjectivation of vision and violence that overturns a sensory anthropology, conical vision, and related regimes of knowledge that have been historically postulated from Renaissance perspective to photography and cinema. The "operative image" of the unmanned apparatus instates what Harun Farocki calls "the unconscious visible," an a-subjective visibility generated through "a-signifying machines" that neither originates in nor requires human consciousness, intentionality, or a point of view to generate significance.8 This autonomized visibility is the effect of what Felix Guattari identified as a "molecular machinic unconscious."9 Deleuze describes the machinery of an unconscious structured like a state in the language and logistics of military ballistics and cartography, as a moving itinerary that "follows world-historical trajectories." "The unconscious no longer deals with persons and objects, but with trajectories and becomings; it is no longer an unconscious of commemoration but one of mobilization, an unconscious whose objects take flight rather than remaining buried in the ground."10 Farocki discerned an early version of optical desubjectivation in the smart bomb footage of the 1991 Gulf War. With "operative images" generated by "unmanned" delivery systems, "it must be noted that . . . no people can be seen. The battlefield is uninhabited. When you see an entire roll of such images you cannot help but think that the war will continue on well after humanity has disappeared from the face of machines."11 Farocki notes the cinematic
lack of extras in this scenography—an absence that reciprocally points to the progressive evisceration of the first-person shooter from the dispositif of the algorithmic weapon. This is exemplified by the discourse of drone-war ethicists and designers for whom the elimination of human optical-cognitive error, termed "scenario fulfillment," from the "data loop" of drone operations is the precondition of achieving "humanitarian" warfare through unmanned ordinance.12 The desubjectivation of vision by the omnivoyant apparatus is the political ordering of delimited spheres of human association and interaction by seemingly illimitable bandwidths of nonhuman, "unmanned" vision. To what degree does this lack of anthropological position, now posited as the precondition of "humanitarian war," authorize the ethical neutralization and a-normativity of algorithmic governmentality? In drone omnivoyance, a gaze does not beam out from the expanding purview of the positioned drone operator and terminate in the target. Rather, both "operator" and specified target are reciprocally generated and positioned as nodes of the "location service" delivered by dwell time over the target and "the kill box" (battle space). Here the conical model associated with subject-centered vision is inverted. The kill box is the punctum, the narrowed aperture, the privileged emplacement, and the "filled space" that expands exponentially to capture the remote operator and the long-range target at the outer rim of a net so distended as to be named "split location," as it does not spread over a coextensive spatial-temporal continuum. The appellation "first-person shooter," which ascribes "firstness" or authorial control to the operator, is overturned by the data flow of the kill box, which triggers the operation and the operator. It is from the punctum of the kill box, a cavity without eyes, that metadata is generated and algorithmically compressed from dispersed raw elements the "human" operator has not seen and could not discriminate prior to the discrete assemblage of this flux into ballistical data packets. Here the terrestrial surface and the lifeworlds supported by it are subjected to a living pixelation, a decomposition/recomposition of various discontinuities of time and space that permits the recombinant production of terrorist value forms through prosecutory metadata that legitimate acts of destruction from the air. The assemblage comprising the remediated first-person shooter, kill-box data flows, and the real-time target is flattened by the embedding of these elements in a "kill chain" that exceeds the sum of its constituent parts. This enchainment corresponds to what Cornelia Vismann terms an "autopraxis" (Eigenpraxis) that "implies . . . that operations can also be executed by non-personal agents that do not act in a syntactical-juridical sense. . . . Perpetrators and victims, those who give orders and those who suffer them, no longer coincide with grammatical subjects and objects. The medium creates a relational middle ground here, which does not simply amount to a reversal
of the two positions."13 This is applicable to the operator/kill box interface, in which neither occupies a demarcated subject and object position but each is compressed into a series of profiles—a continuous flow of differentiating attributes. The operator's decision making begins and ends as subordinate intervals and puncta rendered operational by ongoing algorithmic archivization and retrieval. The kill box's datafication of volumetric space conducts the conduct of the operator. The operator as triggered is a "target" and endpoint of the operation as much as the site of assault or data-image capture. The kill box discharges the operator's reactivity and liability to an assembled view lent to the operator through informatic architecture comprising preemptive pattern recognition and anomaly detection routines. The operator is perceptually (in)formed and malformed (sometimes traumatically) by a machinic unconscious—a dense and artificed sensory envelope of filtered molecular data. This highly focalized and specialized spectrum or Umwelt is an islanded milieu that adumbrates the Umgebung (the so-called objective space) of the operation. For Jakob von Uexküll, the Umwelt as sensorial specularity and spatial aesthesis is the species-specific horizon from which the animal receives its triggers of enactment. In Heidegger's desubjectifying reading of the Umwelt/animal relation, the latter is in a state of captivation (Benommenheit) by its surround and in a state of "benumbedness" (benommen) as it is lured by a hypnotic ring (Umring) of "search images" and corresponding stimuli, a net of "disinhibiting" triggers borne and delivered by the speciated and otherwise vectored Umwelt, understood as a sensorial bubble.14 The drone operator is not omnivoyant but rather his or her hand–eye coordination and cognition are contracted and fitted into algorithmic catalysts that disinhibit the operator's action through the perceptual cues of a "search image."15 Perceptual objects are encountered indirectly by way of the search image as "that inborn path which mocks any and all objectivity and yet intervenes in the environment according to a plan."16 The drone operator is interpellated by and subjected to the envelope and bombardment of a data milieu. This milieu effectuates the techno-informatic flattening of operator autonomy, which accounts for drone drivers naming themselves "stick monkeys." This nomenclature indexes the dehumanization and mechanization of operator conduct by metonymizing a child's simian puppet. The stick monkey is positioned as a knower and doer by a system of constraints—epistemic triggers of pattern recognition and anomaly detection. As Gilles Deleuze proposes, "The seeing subject is himself a space within visibility, a function derived from visibility."17 Or as Kalevi Kull proposes, "Semiosis always requires a previous semiosis that produced the translator."18 Here, we witness the eclipse of that form of historical agency that was contingent on the "synthetic activity of the subject."19
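The "epistemic triggers" named here have a mundane computational core. The following sketch is purely illustrative; it describes no actual targeting or surveillance system, and its data, names, and threshold are invented. It shows only the generic shape of an anomaly-detection routine: a baseline "pattern" is learned from past observations, and any new observation whose deviation from that baseline exceeds a threshold fires a flag, indifferent to what the observation means for the one observed.

```python
# Illustrative sketch only: a generic z-score anomaly detector. No claim is
# made about any actual system; the data, names, and threshold are invented.
from statistics import mean, stdev

def fit_baseline(history):
    """Learn a "normal" pattern from past numeric observations."""
    return mean(history), stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag any observation more than `threshold` standard deviations from
    the baseline. The trigger measures only distance from the learned
    pattern, not the sense of what it measures."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical usage: daily counts of some observed activity.
history = [12, 14, 11, 13, 12, 15, 13, 14, 12, 13]
baseline = fit_baseline(history)
for today in (13, 14, 41):
    print(today, is_anomalous(today, baseline))
```

What matters for the argument above is what such a routine cannot register: the trigger operates entirely within the distribution it has already archived.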

THE POLEMOCENE: OF ECOTECHNOLOGICAL WAR

Drone omnivoyance gravitates to the binary of "filled" and "empty" spaces—the latter lie within and beyond the borders of the kill box. These topoi are either populated with pattern-of-life metadata or voided of data value when kill boxes are "opened" and then closed by drone assault. Drone operations manufacture kill-box cartography through data discretization and compression, that is, through inclusive disjuncture. Drone informatics are contingent on an enabling unseeing, elliptical discretization that functions as part of the drone's sensory membrane, giving rise to a data bubble or foreclosure, which has been described in another context as "inclusive exclusivity" and an act of "world condensation."20 Topologizing omnivoyance is contingent on data compression dictated by remote-sensing bandwidth limitations. Compression demarcates the minimal level of data required to generate the maximal globalizing effect, as in drone signature strikes based on elliptical yet enveloping metadata. Compressing algorithms

are often described as "lossies," meaning that information is lost in the process. . . . [T]hey choose what visual information to discard—based on what is least likely to be noticed missing—and then reformat whatever details they retain to provide an imperceptibly altered version of reality. . . . [Introducing] a degree of representation, or of illusory space, into a situation where reality apparently remains intact, they summon memory while also "losing" the information the site would have seemed to contain. (The very word memory begins to resonate differently then, pertaining more to its role as an empty vessel or container than to the information that would fill it.)21

Decreasing the topological memory space of the world laid open by omnivoyance increases the alacrity with which the world can be delivered to this gaze as a cartographic construct. Compression-based discretization of "continuous attributes" or time flows into encapsulated intervals is a process by which a continuum of nondiscursive material is differentially separated and transposed into discrete or modular gramme—recombinant marks—that enable pattern recognition and rule discrimination, that systematize and prognosticate surveilled spaces and behaviors. Schürmann succinctly summarizes the epistemic formation of topological constructs: "Topology . . . will recover those objects of exhibition and make of itself an analytic of ultimates. . . . Under a hegemonic regime, one acts and speaks in the name of a fantasm—an expression we hold to be tautological. Both common nouns and fantasms direct us to de-realize the singular and maximize a thetic reality."22 The installation of maximal affirmations through derealizing data compression is reliant upon, mediated by, and externalizes an "inessential surplus
of actuality" and "inconsistent multiplicities."23 Topographic compression archives topoi by jettisoning lived crevices of the undatafiable—the surplus lifeworlds populated by the spacing, dissonance, and deferment of nonsense, the cacographic, and the otherwise computationally nondescript. It is this mediation of such hegemonic, maximalizing fantasms by unsubsumable immanence that provisions the possible vantage points from which to submit omnivoyance to Schürmann's analytic of finitudes. The flotsam and jetsam, the inessential actualities, rendered unavailable by data compression negatively determine the scopic availability of what passes for world as an algorithmic construct. These are crucial lacunae, for that which bears, conveys, and supports intelligibility, even by its elimination or occlusion, does not belong to the significance it makes possible. That which transmits sense, even as the latter's determination in negation, is rendered insensible in itself. This qualification by and jettisoning of the incomputable opens a geopolitics of the kill box from the standpoint of what remains unencapsulated and unsequenced. Discretization is a censoring system; it is the elliptical assemblage of an archive through the spacing of subtraction. This presupposes a finite number of computational events in any domain of space/time in order to extract a discrete datafied fraction from a presumable continuum of flow and flux marked by filled and empty space, signal, and noise. In mediological discussions of discretization, there is the holistic assumption of a preexisting, integral, spatiotemporal continuum or flow that is in place for decomposition and assemblage into recombinant parts. "Continuum" is here the code for a proto-totality, a convenient fiction that is required to the degree that the discretized combinatory must perform operations autonomous from the presumed temporal source that alter this source through its nonsynonymous substitution. The topologizing datascape is the spatialization of in situ and irreversible temporal flux through discrete and elliptical capsules that enable reverse time axis manipulation—the recursive storage, conversion, manipulation, and transmission of captured temporal information to be emitted as real-time events.24 In drone warfare, recursion is an analeptic intervention that profoundly alters the proleptic meaning of violence and its aftermaths. "The recursion runs back, it 'takes recourse,' as it were, to itself, but at the same time it runs ahead to a predefined result (which, however, could not come about without the running back). Note, however, that this expanded reproduction involves a Janus-type double movement."25 Here beings are discretized and elliptically phenomenalized into data sets in order for these data sets to be acted upon as if they were the things themselves despite being organized by an artificing retentional apparatus that produces time by consuming time. Contrary to Kittlerian claims that the spatialization of time through continuous data storage expands access to the immediate Real of scopic or
auditory sensibility, the post–kill box topoi constituted by omnivoyant assault generate a manufactured landscape striated by the ravages of ballistical damage inflicted upon a once "filled space" or lifeworld. In digital warfare, the incomputable and the cacographic are remediated as "collateral damage"—the algorithmic definition and disavowal of indiscrete violence, that which falls outside the computational continuum. As a data form, the collateral damage installed by algorithmic war encapsulates a topology of deletion and disframing, "the radical off-centredness of a point of view that mutilates the body and expels it beyond the frame to focus instead on dead, empty zones . . . the use of the frame as a cutting-edge, the living pushed out to the periphery beyond the frame . . . the focusing on the bleak or dead sections of the scene."26 Here we witness the Real of the reverse time axis of datafication; the ruins and corpses, as the surplus damage of a signature strike, are irreversible finalities that do not reassemble themselves through recursion. As the Real of algorithmic governance, they resist and exceed storage compression while emitting counter temporalities precluded from amputating datafication, though this debris may attain an attenuated, quantifiable second life in ex post facto battle damage assessments as misidentified liquidated "terrorists." Such reconstructions of exquisite cadavers evoke the origin of mathematica arcana in the etymon of algebra, the Arabic al-jabr meaning bone setting, as the reunion of broken parts of a skeleton.27 The kill box magnetizes and hooks "dwell time," the recursive hovering of the drone over a scannable terrain that awaits its topological vectoring and simultaneous or consequent assault—its sundering from adjacent, undifferentiated, and yet-to-be encircled surfaces of the earth. Sloterdijk terms war executed through atmospheric assault "air quake."28 The air quake inverts the physis of sky and earth; the sky becomes a pressurized bubble whose eruptions, ballistical assaults, and, more recently, data flows descend upon the earth from above and not from below. The concept of the air quake historicizes the desubjectivation of vision by periodizing the predigital dislocation of the first-person shooter and his requirement of a line of sight toward a target by World War I chemical warfare. This subject-centered trajectory is superseded by "atmoterrorism" as ambient violence that indiscriminately diffuses from and through the atmosphere, which it weaponizes by vacuuming up the preconditions of life. The drone kill box, as the generative site of triggering data, instates a war of milieu; its very geometry and design is an act of ecotechnological assault as it edits and rescripts an existing terrestrial surround to generate an abstracted and actionable datascape. The ecocide anticipated by gas warfare now takes an informatic turn that pivots the lived body of the potential target against itself. The liquefaction of habitus is currently executed by recombinant datafication that decomposes human life on earth into the unconscious
visible made up of prosecutorial metadata mannequins. This is "the malignant exploitation of the life habits of the victims . . . in such a way that they become involuntary accomplices in their own destruction."29 Accusatory encapsulation by metadata resembles von Uexküll's infamous tick assault: "The tick, stubborn, sullen and loathsome, huddles there and lives and waits, waits, for that most improbable of chances that will bring blood, in animal form, directly beneath its tree. And only then does it abandon caution and drop, and scratch and bore and bite into that alien flesh."30 As Geoffrey Winthrop-Young comments on the tick attack, "Nothing can move about and kill more freely than that which is outside the subjective world-construction of its victims and enemies."31 Metadata as a machinic unconscious constitutes an informatic ecocide that is beyond the subjective world construction of its targets.
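The discretization and lossy compression invoked throughout this section can also be rendered schematically. The sketch below is likewise illustrative only; its readings, interval width, and summary rule are invented and stand in for no actual sensing system. It shows the elementary operation: a stream of timestamped values is reduced to one summary per fixed interval, and whatever varied within each interval is subtracted from the archive and cannot be recovered from it.

```python
# Illustrative sketch only: discretization of a sampled stream into fixed
# intervals, keeping one summary value per interval. The invented readings
# stand in for any continuously sampled signal; within-interval variation
# is the "lossy" remainder the archive no longer contains.
from collections import defaultdict

def discretize(samples, interval):
    """Bin (timestamp, value) samples into windows of width `interval`
    and keep only each window's average."""
    bins = defaultdict(list)
    for t, v in samples:
        bins[int(t // interval)].append(v)
    return {b: sum(vs) / len(vs) for b, vs in sorted(bins.items())}

# Hypothetical usage: a signal sampled at irregular moments.
stream = [(0.5, 1.0), (1.2, 9.0), (1.9, 1.0), (2.4, 2.0), (3.7, 2.2)]
print(discretize(stream, interval=2.0))
# roughly {0: 3.67, 1: 2.1} -- the spike at t=1.2 has been averaged out of the record
```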

THE REPRESENTATIVE OF REPRESENTATION

Give it shadow enough, give it as much as you know to be partitioned out around you between midnight and midday and midnight.

—Paul Celan, "Speak, You Also," 1955

Perhaps the most profound and uncanny treatment of an omnivoyant power/knowledge apparatus is Nicholas de Cusa's De Visione Dei, written in 1453. De Cusa's text is a neglected resource for the media archaeology of surveillance that connects to political theology. His theology of vision is more than an archaic curiosity when profiled against present-day surveillance technology. De Cusa presents an environing optics of the surveilled that incites the acts of seeing of the surveilled while decentering any autonomous subjective derivation of sight. Omnivoyant surveillance as a pointless view paradoxically constitutes the subject in an ocular net as a seer but not as synthetic author of vision. De Cusa's omnivoyance, as the seeing of seeing, foreshadows Michel Foucault's theorem of generative and performative power as an action upon the action of another that requires the other's agency in order to capture it in the net of a power-knowledge apparatus. Therefore, de Cusa distances omnivoyant surveillance from the idioms of repressive power and mere objectifying reportage; rather, de Cusan surveillance proactively makes subjects and scenography through their complicity in assembling a visual habitus. Subjected to omnivoyance, they become in effect a literal assembly of seeing. De Cusan surveillance, despite its starting point in the binary of an illimitable versus a finite spectator, achieves omnivoyance by eventually evacuating any optics of agent and patient. He thereby anticipates the immersion of
ocularized subjects in a normative (self-)surveillance grid that unfolds as the nonsynonymous substitution-cum-summation of social existence. De Cusa identifies omnivoyance in singulars that express a supreme Exemplifier or general optical equivalent as the law of laws:

In this way I see that this Power is the Face, or Exemplar, of every arboreal species and of each tree. In this [Power] I see this nut tree not as in its own contracted seminal potency but as in the Cause and Maker of that seminal power. And so, I see that this tree is a certain unfolding of the seed's power and that the seed is a certain unfolding of Omnipotent Power.32

For de Cusa, representational sovereignty over the world is holographically exemplified by each thing within it. He anticipates Lacan's concept of the representative of representation.33 For Lacan, before the world is represented it becomes representative or exemplary in unfolding the potential depictability of all things in it. The representative of representation possesses the capacity and authority not only to install representations but also to determine criteria of representability. The representative of representation is the Exemplifier. From this perspective, polemological omnivoyance is the presentation of surveillance, force, and destruction as nominating vehicles of exemplification—friend/enemy, combatant/noncombatant, metadata, "collateral damage," "signature strike," "disposition matrix," and "crowd killings." The omnivoyeur, for de Cusa, in seeing all and in being seen in its seeing, becomes the arch-exemplar of representation. The universalization of typification is the quintessential representation of a global power in which representational enactment as such, independent of any restricted content, becomes an ecotechnology that makes a capacity for representation knowable and modalities of representation preferable. Under a regime of securitization, the political powers of revealability are self-referential idioms of government, traversing the state, the media, and public culture; the exemplifying power of securitization is reiterated in scenic affirmations through the prudential discretization—spacing and timing—of revealed peril. Securitizing scenography determines not only what and how danger may appear but also what will never be allowed to come before the public gaze, including the political formation of that gaze.34 De Cusa's media theology is exemplified by a visual experiment that demonstrates the degree to which both transcendental and subject-centered visuality become problematic within an omnivoyant grid. De Cusa's genesis of desubjectified ocularity proceeds through a dramaturgy that captures the entrapment and submergence of originary subject positions within an optical eschaton where each actor degrades as a subject in occupying and being occupied, or in archiving and being archived, in the other. This praxis coincides
with Vismann’s desubjectifying autopraxis (Eigenpraxis) by stressing the incorporation of the agent of an operative sequence as both component and consequence—an incorporation that has been previously described in refer- ence to the drone operator. This experiment is “an operation that produces nothing outside of the one who operates it . . . but which modifies the one who operates it, to the point that, upon being repeated, it confers on him a new way of behaving, another habitus.”35 De Cusa maps out an echotheol- ogy of visibility by deploying a “painting” (tabellam) that he calls the “Icon of God” (Eiconam Dei) as the trigger of an experimental procedure—an image work that constitutes seeing subjects as poles of its unhuman gaze. He prefaces this experiment, “I will attempt to lead you experientially into the most sacred darkness [vos experimentaliter in sacratissimam obscuritatem manuducere].”36

Hang this icon somewhere, e.g., on the north wall; and you brothers stand around it, at a short distance from it, and observe it. Regardless of the place from which each of you looks at it, each will have the impression that he alone is being looked at by it. To the brother who is situated in the east it will seem that the face is looking toward the east; to the brother in the south, that the face is looking toward the south; to the brother in the west, that it is looking westward. First of all, then, marvel at how it is possible that [the face] behold each and every one of you at once. For the imagination of the brother who is standing in the east does not at all apprehend the icon's gaze that is being directed toward a different region, viz., toward the west or the south. Next, let the brother who was in the east situate himself in the west, and he will experience the [icon's] gaze as fixed on him in the west, just as it previously was in the east. But since he knows that the icon is stationary and unchanged, he will marvel at the changing of the unchangeable gaze. Moreover, if while fixing his sight upon the icon he walks from west to east, he will find that the icon's gaze proceeds continually with him; and if he returns from east to west, the gaze will likewise not desert him. He will marvel at how the icon's gaze is moved immovably. And his imagination will be unable to apprehend that the gaze is also moved in accompaniment with someone else who is coming toward him from the opposite direction. Now, [suppose that] wanting to experience this [phenomenon], he has a fellow-monk, while beholding the icon, cross from east to west at the same time that he himself proceeds from west to east.37

Michel de Certeau concludes that this experiment stages "the progressive constitution of a labyrinth, a maze of gazes and not of objects or things."38 The lack of optical objects reciprocally implies an absence of positing and anchored subjects. The title of de Cusa's treatise captures the reversal and disframing of a subject-centered point of view; De Visione Dei could be translated as "Our Vision of God" and also as "God's Vision of Us." The "anthropological gaze" is a prosthetic and an effect of a saturating dissemination of a
sovereign gaze that infests a scenography of looking. Human vision is a seeing (videre) mediated by being seen (videri) at all times and from everywhere at once as a metadatum of individual ocularity. A global eye relays back to the finite seeing subject the subject's own glance as a component of a governing omni-directionality. Reciprocally, the illimitable apprehension of the omnivoyeur is activated and mediated and therefore channeled by the delimited spectator. The representative of representation, the "great Exemplifier" as a power of unhindered surveillance triggered by a finite spectator, is captured by de Cusa's question: "What other, O Lord, is your seeing . . . than your being seen by me?" The "your being seen" does not refer to a face or body but to the deity's act of seeing commissioned by the plurality of the surveilled. His neologism Possest (the convergence of act and identity) implies that the omnivoyeur is not a subject of vision but an excarnating/incarnating power of seeing that produces affects and subjects of vision, a capacity to do and to be in media res. Possest "is made of two terms in Latin, posse, which is the infinitive of the verb to be able to (pouvoir), and est is the third person of the verb to be (être) in the present indicative, he is (il est) . . . I would not define something by its essence, what it is, I would define it by . . . its Possest: . . . what it can actually do."39 Other variations on this theme are: "To see You is not other than that You see the one who sees You [Nec est aliud te videre, quam quod tu videas videntem te]" and "The being of a creature is, alike, Your seeing and Your being seen [Esse creaturae est videre tuum pariter et videri]."40 De Cusa concentrates on the seeing of seeing as an exercise in metadata that archives information about visual information and its supporting network. The omnivoyeur interweaves a matrix consisting of intersecting nodal points that trigger a targeting power that is "moved immovably" in tandem with a visual subjection triggered by constant motion in space and time. This tension evokes and twists the relation between the continuum of substance and the exigency of the accident. A decentering of the subject, understood as hypokeimenon, the subjectum, which stands under and supports the act of seeing, is generated by de Cusa's reworking of the relation between substance (as immovability and continuity) and the accidental, embodied in the ever-visible itinerancy and errancy of the monks in the experiment. Aristotelian philosophy distinguished essence (ousia) and accident (sumbebêkos), characterizing the accident as the necessary property of a substance but as peripheral to its essence.41 As a surplus beyond intrinsic essence, the accident is cast as a subordinate substance in its mode of mischance. However, this framework overly reduces the accidental to a derivative property of a stable substance rather than treating it, as de Cusa does, as a constitutive, autonomic, and anarchic materiality. He tracks a discombobulating chain of effects, affects, and transformative exchanges expressed through the random and dispersed
movements of the monks and the intransient mobility of the icon's gaze, whose seeing is made visible only through itinerancy and dispersal, as if this static face were produced through the accidental and derived from it. Here de Cusa leaps centuries to converge with robotic war. As exemplified by the nomadic "dwell time" and the calculated accident of collateral damage of drone signature strikes, war now unfolds as "rhythmed anachronies" and as "moving errors whose errance is both finite and infinite, aleatory and programmed."42 The icon triggers a volumetric space of uninterrupted optical communicability made up of disembodied gazes. The body exists or rather subsists; it is seen in motion and sees itself seen in motion, but the embodied entity moves only insofar as it is seen in its movement, which is both archived and preempted in the anticipatory gaze of a nonhuman voyeur. In the haptics of the seeing icon, the body in motion becomes a refractive surface, a support for the topographics of the omnivoyant purview. In this context, de Cusa poses sovereign omnivoyance as a scopic parasitology of the human body in media res, a schema that anticipates the drone's pattern-of-life capture through metadata. The icon's power transforms the errance of finite and contingent movement into a choreography that is potentially paranoic. Each itinerary in this choreography of errance generates an escalating excess of visuality, comprising optical subsumption and magnifiable surfaces of observation rather than their diminishment. The itinerant body triggers a scopic surplus and its motion escalates the power of the surveilling gaze, creating a scenography of control and geometricization that can be called political. Its visual kinetics are triggered by the random movements of its beholders functioning like innervated marionettes of the icon, storing and transmitting back its information in a recursive time axis through the invisible ocular tendrils from which all parties in this web appear to hang. De Cusa describes this experience of being visually environed as the coincidentia oppositorum of being everywhere and nowhere. The icon's global attentionality is a "circle whose center is everywhere and circumference is nowhere." De Cusa severs the concepts of world and omnivoyance from that of centration. He rejects the prestige of the topographic center and reconciles the polarities of center/periphery by presenting a world upon which any locus is a possible center—where all topoi are qualitatively isomorphic, homothetic, and interchangeable, be these points the "center of the world, pole of the heavens, axis of the earth, zenith, sphere."43 His decentration of terrestrial space is entirely complementary to the topological leveling, typification, and spatial compression associated with contemporary omnivoyant war. This spatial isomorphism extends de Cusa's concept of the not-other (li Non-Aliud) in order to subsume differential and asymmetrical topoi. The world making of omnivoyant war generates each kill box, the immundus, as potentially
convertible through violence into the not-other, thus precipitating the commensurating metric of smooth topologization as a globalizing project. Topological interchangeability is a precondition of omnipresence—the autonomy, remotion, and trans-ascendance of omnipresence require the "leveling off of all hierarchies and stratifications in what it surpasses."44 In undoing these antinomies, de Cusa simultaneously deconstructs the authority of the point of view in favor of a pointless view that privileges no spatial site and rejects any scenography in which the anthropos is the central site around which the presentation of the world is gathered.

INTER-DEFACEMENT

It cannot be overlooked that the anamorphism of the iconic face filled with omnivoyant power is itself triggered by the gazes of those that see themselves being seen, irrespective of what locus they occupy or what spatial trajectory they embark upon. Jasper Hopkins writes,

If both observers move, in such way that they approach each other from opposite directions while looking toward the icon, the icon’s eyes are experienced by each as wholly following his own motion; and yet, the eyes are inferred to be following opposite motions at the same time. Like the omnivoyant figure, God’s eyes “run to and fro throughout the whole earth.”45

The omnivoyeur's running eyes, which imply acceleration, simultaneity, and anticipation, refract the detachment of the gaze from a fixed subjectifying faciality or static mask. These running eyes do not simply move over the earth but in their "to and fro" constitute a world as a globality of gazes that its shifting orbs enfold and enforce as a habitus. De Cusa observes that "His being, His moving is His remaining at rest, His running is His being still."46 These aporia imply both stasis and kinetics, indivisibility and violent metastasis: "This unmovable face [immobilis facies] . . . is directed simultaneously toward one place and towards all; and . . . its gaze follows an individual movement as well as all the movements at once."47 The immutability of this gaze that bypasses position and place destroys the atomized authority of "the point of view." The omnivoyeur's pointless view circulates between the transcendent and the immanent, the global and the focal, in reducing diverse bipedal trajectories—past, present, and future—to scopic simultaneity. The "stance" of the omnivoyeur is aperspectival. It displaces all situated perspectives while conditionally exploiting their momentary emergence. As the representative of representation, this apparatus captures and archives perspective as a contraction of seeing without holding to any one perspective and by
infinitely multiplying and synchronizing, through memory and preemption, the sites and times from which the icon and the spectator attain concomitant ocular legibility. De Cusa plays upon the contrast between the look and the exemplar, the focal and the global: what initially appears as localized reappears as exemplar; and what seems to exemplify dissolves into a multiplicity of finite gazes. Typification here refuses stabilization; the type, as deity, icon, and mortal, becomes substitutable by what each exemplifies and assumes for itself the recursive power of standing for what it is not. Jacques Derrida observes this reversal of type and source:

Its slidings slip it out of the simple alternative presence/absence. That is the danger. And that is what always enables the type to pass for the original. As soon as a supplementary outside is opened, its structure implies that the supplement itself can be "typed," replaced by its double, and that a supplement to the supplement, a surrogate for the surrogate, is possible and necessary.48

By analogy to Derrida's analysis, de Cusa's icon appears to open a supplementary outside that releases an ocular field of fluid transposition, tracing, and substitution. Visuality is folded not into subjectivity but rather into an apparatus of subjectivation as a component of an operation that is irreducible to the anthropological, which is not its origin. The panoptical painted face that seems to follow and touch everyone and everywhere is not a source but a filter, aperture, and conduit, a prosopopeia or mask that emits a pointless view, an apparatus of infinite targeting and infinite transposition between the targets and the vectoring beam of an aim. The representation that is the painted face is quickly defaced by its omnivoyant actualization, which effectively implodes the emblematic face as the continuous surface and static, atemporal support of this omnidirectional scan. Its surface stasis is there to provide an always already elapsed orientation point for the motility and synchronizing simultaneity of the emerging scopic echology wherein the gaze falls out of and is severed from the face in an act of desubjectivizing defacement that points to a deus absconditus—a sovereign absent from the scenic play of visual sovereignty. There is an implosion of the anthropomorphic face, a decapitation, that is the simultaneous death of man and the deity as a depictable and graspable sovereignty—de Cusa's deity solely collapses into the dispersed performativity of acts of seeing and being seen seeing and retrieves its sovereignty from the relays or echography of other eyes. Here the omnivoyant gaze undergoes a blink that occurs behind the facade of its looking, a suspension, a moment of blindness scarring the anthropomorphic surface, which is meant to support seeing and yet is destroyed by it. The face is not omnipresent. In seeing everything and everyone all at once the gaze attributed to the face abdicates all positionality
contingently indicated by its anthropomorphic facade. The frontality of the face, which has a long theological and aesthetic genealogy, here autodestructs; it is eviscerated by its own omnidirectional recursions. Taking this a step further turns our attention to the itinerancy of the monks moving here and there that gives rise to the sovereign "running of eyes." Are the monks testing the infinite gaze or fleeing from it? The mask of the omnivoyeur requires this movement, this running life and running of life that animates the eyes of the corpse analogue that is the painted face; it is a dead thing animated, fueled, and driven by the monks' temporalizing choreography, wherein each errant body shifts and swerves as it overwrites and rewrites this habitus of tracks and traces. De Cusa addresses this transposition of visual positions:

Because Your icon's gaze seems to be changed and Your countenance seems to be changed because I am changed, You seem to me as if You were a shadow which follows the changing of the one who is walking. But because I am a living shadow and You are the Truth, I judge from the changing of the shadow that the Truth is changed. Therefore, O my God, You are shadow in such way that You are Truth; You are the image of me and of each one in such way that You are Exemplar.49

Here exemplification and surveillance appear as two sides of a unitary operation of "shadowing." The shadow is a representative of representation and yet exemplifies by filtering out unnecessary elements in its coming after, for shadowing implies a temporal deferment, spatial dislocation, storage, and repetition that is both like and unlike what it traces. The shadow is inherently elliptical in its différance—the differential space and temporal interval between shadow and body cannot be captured or stabilized—it is untraversable. In its shadowing, the omnivoyeur reciprocally turns its observers into "living shadows" of its own visual truth. Those who have experienced the threat and actuality of drone dwell time know intimately what it is like to fall under the shadow of an omnivoyant apparatus only to become living shadows under its specular (metadata) capture and consequent destruction of their lifeworlds. The insensible interval that delineates the shadow from its model implies that any positioned apprehension by the omnivoyeur is discontinuous, elliptical, and occluded and thus, like the drone or metadata, in excess of the subjective-world construction of those it archives. This is how "the Truth is changed" by a malleable, anamorphic power of shadowing, where the finite subaltern glance "is the inspection of the concrete momentariness of the transient situation. As aisthēsis, it is a look of an eye in the blink of an eye [der Blick des Auges, der Augen-Blick], a momentary look at what is momentarily concrete, which as such can always be otherwise."50 There is a detheologizing disjuncture between de Cusa's description of the "subtle art" and techne of the icon and its analogy to the ineffable glance of
a deity. Does the icon shadow those unplaceable and unpresentifiable deified orbs that can have no enframing aperture? Is it a conduit to the invisible, the cockpit of the deity that originates visibility? Or does the icon trigger the omnivoyeur as the drone its operator? The icon presents an incarnation that is simultaneously an excarnation; it is a corporeality that cannot contain the sovereign gaze that overflows all visual contraction implied by a facial circumference. I would suggest that a version of this inhabitation/implosion of shadowed and outlined bodies by sovereign power describes omnivoyant warfare in which the drone target or victim and the environing kill box in their ruination inadvertently bear witness to an excess of vision as the signature of an incommensurable sovereign shadow. Corpses, dismembered body parts, and ruins track the passing of the omnivoyant event as the fractured vessels that could not contain its saturating gaze. Yet the apocalypse of frontality and the face occurs in stages. The disjuncture between the icon/face and the unpresentifiable omnivoyeur is confessed by de Cusa himself, who in other writings, such as De Possest, refers to the text as De Icona rather than De Visione Dei, thereby problematizing who or what is seeing.51 If the deity is the supreme Exemplifier that represents all representability, the icon at one point exemplifies the deity's omnivoyance, exemplifies the exemplifier, and is the representative of the exemplifier's power of uninterrupted (re)presentation. This transposition and conveyance of the icon is avowed by de Cusa when he declaims, "Your icon's gaze."52 In what way does the deity literally possess and inhabit the icon that exemplifies its archon? In what way does executive power inhabit the detached, prosthetic satellites of its extension? De Cusa implicitly problematizes the presuppositions of sovereign indivisibility; we can then ask how an organ of sovereignty and violence might sever itself from an originating or underlying body politic and procedural order and become self-organ-izing, a self-mediating organ as the expansive power of ever-shifting centers of force. This is an organogenesis in which the increasingly detachable organ of vision is no longer a subordinate organ of sovereignty but rather of sovereignty's dis-organ-ization. The icon as organ and satellite mediates the deity, and de Cusa attributes autonomy to the icon as a power of surrogation or shadowing of the sovereign. Is the icon's gaze at one point in time congruent with that of the deity or does it diverge from it? Is the icon's gaze continuous with the deity or is it a relocating incarnation that can be severed from the sovereign as origin? Does the icon possess the deity's gaze when it exemplifies the seeing of a God and yet is not God? Both autonomization and iconoclasm haunt the icon as container and support of that which it is not. The painted face is a simulacrum and black box that enables de Cusa to provisionally anthropomorphize sovereignty in order to ultimately dehumanize omnivoyant power—this is de Cusa's eschatological anti-humanism and his apophatics of sovereignty as
being both uncontainable and miniaturized through prosthetic distantiation. The deity lends itself to a contraction, the face of the icon, in order to become observable and apprehensible in the same way that this omnivoyeur endows those subjects it beholds with orbs and faces with which to behold its gaze. De Cusa analogically attributes to this gaze a face that is unable to anchor omnivoyance and is ultimately defaced as each spectator inserts himself into a field of vision that is not the field of a face and circulates tangentially within it without the loss of ocular surplus and simultaneity. In de Cusa, the visible and the invisible are both revealed in the act of seeing as an act of creation and its exemplification. Omnivoyance has its origin in the darkness and infinite remotion of a deus absconditus that is pivoted against, behind, and beyond the static surface of the self-destructing icon. This is the remotion of "the God, who calls into existence the things that are not," who calls from a site that is not and with a voice that is not.53 This calling and sending of vision and things from the supplementary site of the uncreated from which creation emerges introduces an excess of measure and visibility that carries an inheritance, the trace structure of the not, which implies the uncreated as much as creation and therefore speaks to uncreating (destruction) as much as creating. Omnivoyant emergence follows upon obscuration—creation unfolds from the uncreated and introduces obscuration and the reversal of creation into the creation. Can creation return to the not? Is there a secret apocalypse built into the infrastructure of omnivoyance of which the contingent invisibility of the seer and the seeable is a shadow? The uncreated is the overneath that both arches and underlies the creation of the visible and the invisible. The uncreated precedes and conditions creation as the making of the capacity for visibility, which includes making the invisible as such visible, as apprehensible as a contractive emblem or mask of the inapproachable that preceded creation. This is the void upon which sovereignty, as deific substitute, in all its facets and impermanent facades, resides; this is the political theology of uncreation, of that which is not. Sovereignty as purportedly indivisible is dissolved in its self-substitution in ex-centric war, by surrogating satellites, technocratic shadows, where the figure of the sovereign functions like a deus absconditus, exemplifying its visual power at a distance, its power of withdrawing from the created through the destructibility of creation.

THE PASTORATE OF GAZES

Omnivoyance cares; it individualizes or atomizes and totalizes all at once, creating a volumetric attentionality and supervisory affect. "If he [the recipient of the gaze] observes (attendere) that the gaze leaves none of the persons
present, he will see (videre) that this gaze is concerned with each one with as much care as if it were the only one to have the experience of being followed."54 This scan does not leave off from seeing each seeing subject but rather fixes upon each gaze irrespective of its spatial trajectory. These gazes only have a point of view to the degree that they see themselves being seen in their topographic dispersion and kinesis; thus their point of view originates in the field and gaze of the Other who has no immutable singular point of view. Their point of view only exists because of the simultaneity of their being seen, and their seeing being seen, which unifies their dispersion and provides continuity to their spatiotemporal mutability. The dispersed vantage point from which they look upon the representative of representation becomes an artifact of a panoptical synchronization that emerges as a government of gazes and a political echology. The omnivoyeur follows each individual movement as well as all the movements as a uni-totality. For de Certeau, this is "a geometry of relations of position"—a topology that draws the prosopopeia of omnivoyance into a net. In its spatial synchronicity, the icon synchronizes chronotopes through their spatialization—the dispersed movements of the monks become a chrono-choreography that can be read as a text, an algorithmics of gazes, and a spatialization of embodied time in their scanning. Despite the multiplex errancy of motion, the omnivoyeur adheres to the beholding itinerants, never abandoning any one. Omnivoyant compression gathers the observable and observant points of dispersive movement and nonmovement into syntactical arrangements. De Certeau concludes, "The space that this gaze organizes thus has a depth of obliterated histories; plural, made up of visual strata that play one on the other, it is an anonymous theater of memory."55 Anonymity here, combined with a totalizing memory, a voracious archivizing vision, is but a short historical, if not technical, interval from the logic of our contemporary architectures of secrecy warehoused by surveillance machines. From a Foucauldian perspective, this labyrinth of gazes would be the infrastructure of pastoral power, the political metaphor of the relation of a shepherd to a flock, for nothing escapes its tracking and targeting; it spreads its net over the minimal, for example, the smallest creature, appearing to regard each spectator as "him alone" as well as the maxima, the multitudinous, extending to the totality of the universe without center or circumference. This compression and distension refracts a pastoral power that comprises both a cathexis of singularities and the government of populations in their purposive distribution to spatial coordinates. The icon as a pastoral gaze is global in its securitization of the pastoral flock, of the well-being and maintenance of populations, and focalized in its investment in individuation—the production of a disciplined subject by diligent oversight. Here individuation as subjection is a matter of assigning to each and all, omnes et singulatim, their proper
ratio and measure of visibility as a seeing and a being seen. Foucault, without citing de Cusa, likens the pastoral optic to an act of scanning. "The theme of keeping watch is important. It brings out two aspects of the shepherd's devotedness. . . . [H]e watches over them. He pays attention to them all and scans each one of them."56 Or as de Cusa writes, "For you, Lord, so look on anything that exists that no existing thing can conceive that you have any other care but that it alone exist in the best manner possible for it and that all other existing things exist only for the purpose of serving the best state of the one which you are beholding."57 Foucault identifies pastoral power with an inescapable gaze that designates directionality and conduct for the scanned, as in the de Cusan icon's conducting of the singular contracted gaze in its running to and fro.58 Foucault populates pastoral power with interlacing visual, linguistic, and ultimately computational practices as "the set of possible verbal or non-verbal procedures by which one brings to light what is laid down as true as opposed to false, hidden, inexpressible, unforeseeable, or forgotten."59 The "hidden, inexpressible, unforeseeable, or forgotten" constitute the cacography, the detritus, the flotsam and jetsam, the inconsistent multiplicities that escape the phronesis of truth making in much the same way that the collaterally damaged exceed the sterilizing, algorithmic diagnostics of the drone. De Cusa anticipates Foucault's translatability between seeing and linguistic command and compact: "that your gaze speaks, for your speaking is not other than your seeing."60 He provides a framework within which the omnivoyant matrix can be located as a regime of visual truth insofar as pastoral truth is connected to an adherence to revelation—to the act and power of revealability, autonomous and detachable from any delimited content. The de Cusan icon has no intrinsic content other than its power of infinite revealability. The revelation of the power to reveal is its being through enactment and scenic affirmation or Possest. Foucault writes, "A regime of truth is then that which constrains individuals to these truth acts, that which defines, determines the form of these acts and establishes their conditions of effectuation and specific effects."61 The reciprocity of gazes, a circulation without circumference, connects to pastoral power in being "linked in a circular relation with systems of power which produce and sustain it, and to effects of power which it induces and which extend it."62 This recursivity is demonstrated in the bidirectionality of gazes, caught in the omnivoyant net wherein the seer of the omnivoyeur avows the manifestation of the truth of omnivoyance through the optical equivalent of a veridiction. The continuous visual recursivity resolves into "the production of the truth of oneself," as heteronomous self-subjugation by exteriorities that tell us our identity.63 "To recognize ourselves in their dicta, is to interiorize power in the form of knowledge. Indeed, in saying—in acknowledging, confessing 'This is what I am,' the subject objectivates itself
16028-0303f-Finalpass-r01.indd 185 9/24/2019 12:03:33 PM 186 Chapter 7

within itself. . . . Self-identity is self-objectivation accepted and enforced as self-subjection.”64 Omnivoyance appears to participate in this political logic of subditio, defined by Foucault as a practice where one’s will is substituted, displaced, and occupied by a targeting and directing will of the omnivoyant pastorate. In de Cusa to see and to be seen is to will and to be willed, in which “the ‘pathos of distance’ is not thinkable without one type of will fixing that distance and another type keeping it.”65 In de Cusa’s version of subditio, the dispersed and contracted wills of seeing subjects are overtaken and underwritten by a polymorphic and globalizing seeing as the willing of wills. For Marion the intensity of the icon’s gaze “overturns the order of subjective intentional- ity.” “Neither object nor ego (subject), I receive myself from that through which (or through whom) I see myself seen.”66 In remarking on the gaze’s removal of “ego,” de Cusa avows, “I exist only insomuch as you are with me. And since your seeing is your being, therefore, because you regard me, I am, and if you remove your face from me, I will cease to be.”67 Foucault writes, “I believe that at the very heart of the notion of subditio was the total penetration of one’s entire existence and of all one’s actions with the will of another, of others, of an x; and this is important because it was opposed to the idea of obedience to a law. The law is what obliges you to do or forbids you to do something; consequently, it implies that you are free to do the rest.”68 Foucault invokes an omnivoyeur of conduct: “In these monastic practices . . . one was never to be master of oneself, but rather one was to ensure that there was always within oneself someone who was the master and the master of everything.”69 However, de Cusa also anticipates a moment where not looking at the omnivoyeur is equivalent to breaking a contractual accord through the visu- ally discordant: “That You do not look at me is the following: viz., that I do not look at You but disregard You and despise You.”70 Here the despis- ing, unseeing proto-subject, in looking away from and willing against the prosopopeia of the supreme exemplifier, exercises its own subjective and autonomous point of view in breaking reciprocity and withdrawing from the topology of gazes. This is what Foucault, in discussing adherence to pastoral power, calls counter conduct. For de Cusa, deviating from the relationality and interweaving of gazes illicitly absolutizes an atomizing contracted gaze that by definition is only partial and does not participate in the network of Truth. Not looking reverts to a contracted gaze, for to look elsewhere and to be seen to look elsewhere transgresses the visual contract and global contact of the omnivoyant meshwork. However, this disavowal points to the possibil- ity of another type of contact, contract, and concordance of the discordant, of seeing as pluralization, disagreement, nonsynchronicity, and as heterotopic. For de Cusa, not looking is a turning away from what has been mandated to

16028-0303f-Finalpass-r01.indd 186 9/24/2019 12:03:33 PM Of the Pointless View 187

be seen and from being seen seeing it; he derives the disavowal of omnivoy- ance from anthropocentric visual contraction—the locus of a possible human- ism. The right not to see and not to be seen invokes Hannah Arendt’s right to rights and its self-designing subject. As Foucault writes of the political theol- ogy of surveillance, which today exceeds its ecclesiastical derivation while perpetuating its metaphysics from Facebook to the National Security Agency to Immigration and Customs Enforcement, “This dimorphism of the pure and the impure, of the perfect and those who are not perfect, of the chosen and those not chosen, will be one of the most fundamental and problematic points of dogma, organization, and the pastorate throughout Christianity.”71

NOTES

1. Allen Feldman, Archives of the Insensible: Of War, Photopolitics, and Dead Memory (Chicago, IL: University of Chicago Press, 2015), 9.
2. Reiner Schürmann, Broken Hegemonies, trans. Reginald Lilly (Bloomington: Indiana University Press, 2003), 4.
3. Ibid., 12.
4. Michel Foucault, Security, Territory, Population: Lectures at the Collège de France 1977–1978 (London: Macmillan, 2009), 354; Matteo Pasquinelli, “Arcana Mathematica Imperii: The Evolution of Western Computational Norms,” in Former West: Art and the Contemporary after 1989, ed. Maria Hlavajova and Simon Sheikh (Cambridge: MIT Press, 2017), 281–93.
5. Schürmann, Broken Hegemonies, 22 (emphasis mine).
6. Jean-Luc Nancy, The Creation of the World or Globalization, trans. François Raffoul (Albany, NY: SUNY Press, 2007), 96–97.
7. Jacques Derrida, The Beast and the Sovereign: Volume II, trans. Geoffrey Bennington, ed. Michel Lisse, Marie-Louise Mallet, and Ginette Michaud (Chicago, IL: University of Chicago Press, 2011), 9.
8. Harun Farocki, “Phantom Images,” Public 29 (2004): 15; Antoinette Rouvroy, “Algorithmic Governmentality: Radicalisation and Immune Strategy of Capitalism and Neoliberalism?” trans. Benoît Dillet, La Deleuziana—Online Journal of Philosophy 3 (2016): 30–36.
9. Felix Guattari, The Machinic Unconscious: Essays in Schizoanalysis, trans. Taylor Adkins (Los Angeles, CA: Semiotext(e), 2011).
10. Gilles Deleuze, “What Children Say,” in Gilles Deleuze: Essays Critical and Clinical, trans. Daniel W. Smith and Michael Greco (Minneapolis: University of Minnesota Press, 1997), 63.
11. Farocki, “Phantom Images,” 15.
12. Ronald C. Arkin, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Technical Report GIT-GVU-07–11, Georgia Institute of Technology, 2012, available at https://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf.
13. Cornelia Vismann, “Cultural Techniques and Sovereignty,” Theory, Culture & Society 30, no. 6 (2013): 84–85.
14. Jakob von Uexküll, “A Stroll through the Worlds of Animals and Men: A Picture Book of Invisible Worlds,” in Instinctive Behavior: The Development of a Modern Concept, trans. and ed. Claire H. Schiller (New York: International University Press, 1957), 5–80; Martin Heidegger, The Fundamental Concepts of Metaphysics: World, Finitude, Solitude, trans. William McNeill and Nicholas Walker (Bloomington: Indiana University Press, 2001), 236–67; Geoffrey Winthrop-Young, “Afterword,” in Instinctive Behavior: The Development of a Modern Concept, trans. and ed. Claire H. Schiller (New York: International University Press, 1957), 209–43.
15. Von Uexküll, “Stroll through the Worlds,” 13–17.
16. Ibid., 73.
17. Gilles Deleuze, Foucault, trans. Seán Hand (Minneapolis: University of Minnesota Press, 1988), 57.
18. Kalevi Kull, “On Semiosis, Umwelt, and Semiosphere,” Semiotica 120, no. 3–4 (1998): 303.
19. Michel Foucault, Archaeology of Knowledge, trans. A. M. Sheridan Smith (London: Routledge, 1972), 15.
20. Peter Sloterdijk, Spheres I: Bubbles, trans. Wieland Hoban (Los Angeles, CA: Semiotext(e), 2011), 56.
21. Tim Griffin, “Compression,” October 135 (2011): 11.
22. Schürmann, Broken Hegemonies, 12 (emphasis mine).
23. Ibid., 182.
24. See Friedrich Kittler, “Real Time Analysis, Time Axis Manipulation,” trans. Geoffrey Winthrop-Young, Cultural Politics 13, no. 1 (2017): 1–18.
25. Geoffrey Winthrop-Young, “Recursions,” in Kittler Now: Current Perspectives in Kittler Studies, ed. Stephen Sale and Laura Salisbury (Cambridge: Polity Press, 2015), 75.
26. Pascal Bonitzer, “Deframings,” in Cahiers du Cinéma Volume Four 1973–1978: History, Ideology, Cultural Struggle, ed. David Wilson (New York: Routledge, 2000), 198–99.
27. Denis Guedj, The Parrot’s Theorem: A Novel, trans. Frank Wynne (New York: Macmillan, 2013).
28. Peter Sloterdijk, “Air Quakes,” Environment and Planning D: Society and Space 27, no. 1 (2009): 41–57.
29. Ibid., 48.
30. Patrick Süskind, Perfume: The Story of a Murderer (New York: Washington Square Press, 1991), 24.
31. Winthrop-Young, “Afterword,” 236.
32. Nicholas de Cusa, “The Vision of God,” in Complete Philosophical and Theological Treatises of Nicholas of Cusa, vol. 2, trans. Jasper Hopkins (Minneapolis, MN: Arthur J. Banning Press, 2001), 691.
33. Jacques Lacan, Écrits: The First Complete Edition in English, trans. Bruce Fink (New York: W. W. Norton, 2006), 75–76.
34. Feldman, Archives, 214–16.
35. Jean-Luc Marion, “Seeing, or Seeing Oneself Seen: Nicholas of Cusa’s Contribution in De Visione Dei,” Journal of Religion 96, no. 3 (2016): 308.
36. Quoted at ibid.
37. De Cusa, “Vision of God,” 680–81.
38. Michel de Certeau, “The Gaze of Nicholas de Cusa,” Diacritics 17, no. 3 (1987): 12.
39. Gilles Deleuze, “Spinoza: Power (Puissance), Classical Natural Right,” trans. Simon Duffy (lecture, University of Vincennes in Saint-Denis, Paris, December 9, 1980), available at https://www.webdeleuze.com/textes/20 (my emphasis).
40. De Cusa, “Vision of God,” 680–81, V 13, 14–15, 686; X 40, 12, 698.
41. Alan Code, “Aristotle: Essence and Accident,” in Philosophical Grounds of Rationality: Intentions, Categories, Ends, ed. Richard E. Grandy and Richard Warner (Oxford: Clarendon, 1988), 411–39.
42. Jacques Derrida, “Why Does Peter Eisenman Write Such Good Books,” in Psyche: Inventions of the Other, Volume II, ed. Peggy Kamuf and Elizabeth Rottenberg (Stanford, CA: Stanford University Press, 2008), 114.
43. Hans Blumenberg, The Legitimacy of the Modern Age (Cambridge: MIT Press, 1985), 505–6, 510–11, 513.
44. Ibid., 538.
45. Jasper Hopkins, Nicholas of Cusa’s Dialectical Mysticism: Text, Translation, and Interpretive Study of De Visione Dei (Minneapolis, MN: Arthur J. Banning Press, 1985), 18.
46. De Cusa, “Vision of God,” 681.
47. De Cusa quoted in Hopkins, Nicholas of Cusa’s Dialectical Mysticism, 18.
48. Jacques Derrida, Dissemination, trans. Barbara Johnson (Chicago, IL: University of Chicago Press, 1981), 109.
49. De Cusa, “Vision of God,” 710.
50. Martin Heidegger, Plato’s Sophist, trans. Richard Rojcewicz and André Schuwer (Bloomington: Indiana University Press, 2003), 112–13.
51. Hopkins, Nicholas of Cusa’s Dialectical Mysticism, 18.
52. Ibid.
53. Nicholas de Cusa, Selected Spiritual Writings, trans. H. Lawrence Bond (New York: Paulist Press, 1997), 253.
54. Quoted in de Certeau, “The Gaze,” 12.
55. Ibid., 13.
56. Michel Foucault, “Omnes et Singulatim: Towards a Criticism of ‘Political Reason’ [1979],” in Essential Works of Michel Foucault 1954–1984, vol. 3, Power (New York: New Press, 2000), 298–325.
57. De Cusa, Selected Spiritual Writings, 239.
58. Michel Foucault, On the Government of the Living: Lectures at the Collège de France, 1979–1980, ed. Arnold I. Davidson, trans. Graham Burchell (Basingstoke: Palgrave Macmillan, 2014), 255.
59. Ibid., 7.
60. De Cusa, Selected Spiritual Writings, 248.
61. Foucault, Government of the Living, 93.
62. Michel Foucault, Psychiatric Power: Lectures at the Collège de France, 1973–1974, ed. Arnold I. Davidson, trans. Graham Burchell (Basingstoke: Palgrave Macmillan, 2006), 346.
63. Foucault, Government of the Living, 304.
64. Reiner Schürmann, “ ‘What Can I Do?’ In an Archaeological-Genealogical History,” Journal of Philosophy 82, no. 10 (1985): 544.
65. Reiner Schürmann, “Legislation-Transgression: Strategies and Counter-Strategies in the Transcendental Justification of Norms,” Man and World 17, no. 3–4 (1984): 378.
66. Marion, “Seeing,” 314.
67. De Cusa, Selected Spiritual Writings, 240.
68. Michel Foucault, Wrong-Doing, Truth-Telling: The Function of Avowal in Justice, ed. Fabienne Brion and Bernard E. Harcourt, trans. Stephen W. Sawyer (Chicago, IL: University of Chicago Press, 2014), 139.
69. Ibid.
70. De Cusa, “Vision of God,” 689.
71. Foucault, Government of the Living, 121.

Chapter 8

Visions

Max Liljefors, Gregor Noll, and Daniel Steuer

Klee’s Angelus Novus, Benjamin’s angel of history, has his face turned toward the past. He is staring at “one single catastrophe, which keeps piling wreckage upon wreckage.” Because the storm of what is called progress continuously propels him into the future, he cannot “awaken the dead, and make whole what has been smashed.” What are his options? He may try to resist the storm, become stronger than the propelling force, and move toward the past. Or he may try to beat progress by becoming faster than the storm, moving ahead into the future and gaining a moment of pause. Dystopian speculation, we hold, is a utopian force that, by moving ahead of the storm, may create breathing space that interrupts progress, maybe even redirects it. Only by moving faster than the storm may the angel leave his position at the cutting edge of homogeneous, empty, linear time, and enter a time “filled full by now-time”—Jetztzeit. Only then may he be granted his wish—to linger; only then may he cease to stare, begin to see, and put things together again.1

RECOGNIZING VIOLENCE

How do we recognize violence? If violence cannot be identified by applying universal and objective criteria, how do we know that a deed, performed by others or ourselves, exceeds what is proper or warranted? How do we recognize that a certain application of force—physical or moral—constitutes a violation? How do we recognize the element of transgression that turns a deed into an act of violence?

Perhaps, rather than from the deed as such, considered in isolation, our judgment proceeds from a perception that, within its concrete context, the deed infracts a limit that ought to be respected: that its force is exaggerated, out of balance with what surrounds it; that it lacks proportionality and therefore rejects or ignores the reciprocity of dialogue; that it is, so to speak, closed, devoid of empathy, devoid of that openness that may contain the image of the Other. For a deed to be conceived as balanced, we must sense that it matches the world in which it is performed. Its purpose, means, and consequences must form a picture of sorts, in which we can recognize some truth about the world as it is and as it ought to be. There can be neither violence nor beauty in the act itself, only in how it intervenes in that which surrounds it, its Other, whether near or far. It is through our capacity to assess how an act interposes itself into the world that the act comes to present to us a dual reflection of the world: of how the world is and how it could be. In this respect, every act is an utterance, just as speech itself is an act. Every speech act closes the gap between what could be and what comes to be, against the background of a shared language. In proceeding from the uncertainty inherent in every situation to the finality of our judgment—in coauthoring language, even with our silences—we perform justice or injustice, fairness or violation. Authority, ultimately, rests with everyone or no one.2

The form of the act, the “action,” might be measured—like a bureaucrat files a document, sine ira et studio—and yet we may sense that it lacks the quality of empathy, of recognition of the Other. As an act, therefore, it can be violent. Conversely, a very dramatic action can constitute a proportional act, one that is in harmony with its surroundings.

To recognize violence, then, is to perceive in a deed the absence of something we feel it ought to provide: a truthful reflection of the world. Therefore, we tend to say, rightly, that violence is “blind.” It signals an inability to acknowledge proportionality, reciprocity, and empathy within and across forms of life and their perspectives on the world. This is a quintessentially human problem, a problem besetting beings positioned between nature and God. If we attempt to imagine the natural world stripped of anthropomorphisms—from Arendt’s Archimedean point—we are apt to intuit nature as intrinsically violent, for we would see that its continuous internal transformation unfolds blindly, engulfing everything without distinction. Nothing remains outside its scope. Analogously, if we attempt to imagine God devoid of human characteristics, He too appears inherently violent, undoing all otherness, encompassing all. Only God has no opposite; only God, therefore, can never be defined or delimited. This is the core of the problem of theodicy. In spite of their opposition, pure spirit and disenchanted nature share this property: they lack an Other. Hence, the ability to recognize violence in the collapse of proportional reciprocity is exclusively human. It exists only “in between” the realms of nature and the divine. This ability follows from the human gaze on things, this invisible wedge that splits uniform reality into Self and Other.

This is also the foundation of our feeling of justice. Human community—and the very idea of humanity—rests on our ability to imagine the Other’s suffering and to allow her plight to moderate the limitless demands of the ego. As Iris Murdoch argues, imagination thus drives processes of “unselfing,”3 that is, a movement toward realism, an “effortful ability to see what lies before one more clearly, more justly.”4 Justice is built on impartiality, not on objective calculability. In the final instance, it is rooted in compassion. Impartial compassion implies both blindness, to lull the ego to sleep, and superior vision—to see beyond the ego’s horizon, in two senses: beyond one’s own horizon and beyond the Other’s. In a way, then, it involves imagining oneself into the Other and from the Other’s point of view. Hence the motif of Iustitia, who, ever since the sixteenth century, has been portrayed as blindfolded so that she might see all the more clearly. Until we have cultivated our power of imagination into this form of blind-sighted compassion, we cannot claim to have become fully human, let alone post human. We are still lingering in the proto-human state.

THE ALCHEMY OF ARTIFICIAL INTELLIGENCE

Is artificial intelligence (AI) a new form of alchemy? Machine-learning researchers find it increasingly difficult to reproduce their peers’ experimental results, and for this reason such accusations are increasingly leveled against AI. The derogatory aspect of the analogy should not blind us, however, to an illuminating aspect: we are at a historical juncture akin to the transition from alchemy to chemistry, and this merits some thought.

The central problem of alchemy was transubstantiation. How could the consecrating words of the priest during Mass transform the substances of wine and bread into those of Christ’s flesh and blood? Studying the transformation of one substance into another meant studying the presence of the divine in the world. Medieval alchemists dedicated themselves to experimental theology; almost by accident, they provided the practical foundations for the modern experimental science of chemistry. While the relation between the divine and the world had been central to medieval alchemy, the transition to chemistry foregrounded the relationship of the human and the world.

In the 1940s, cybernetics brought yet another radical shift: the place formerly occupied by the divine came to be occupied by all of life, rather than by the human. How is life related to the world? By the scriptural command to subject the world to our will, we humans had been given authority to govern nonhuman life. But cybernetics did away with this privileged position, turning human life into one form of governable life among others.


Yet, for all its radical flattening of hierarchies, cybernetics is still about transubstantiation. Everything has to be convertible into everything else, and data assume the role that, in alchemy, was performed by matter. The alchemist turned lesser metals into gold, and the consecrating words of the priest turned wine into blood. With Latin translations of Aristotle’s writings accessible to alchemists from the twelfth century onward, Aristotelian prime matter suggested itself as a location for transubstantiation to take place. Aristotelian “matter” was what a thing was made of: the matter of a statue might be plaster; the matter of a desk might be wood. Aristotle’s prime matter, however, is pure potentiality: a point from which it is possible to become any actual matter and, indeed, a point from which it is also possible not to become actual matter. An alchemist would work from one Aristotelian matter toward a point of pure potentiality, and from there that matter could pass into any other.

In machine learning, the black box that turns input into output is for the researcher what prime matter was for alchemists. It trans-substantiates raw data (a kind of actual matter) into processed data (another kind of actual matter), and it assumes the function of pure potentiality. Save, perhaps, for the possibility of not passing over into an actual kind of matter. Could we at all imagine a machine-learning researcher choosing to stop the learning machine halfway before it produces its output? Would that not be anathema to the very project of machine learning, which requires an unceasing process of conversion in order to continue to perfect itself?

Alchemy was a project of human self-improvement; the alchemist’s work and knowledge were ultimately motivated by a desire to relate to the divine. The conversion of lesser metals into gold and of wine into blood are analogous to the conversion of the alchemist from an ordinary into an initiated human being, which establishes a relationship with the divine as one matter passes through prime matter into another one. The AI researcher, by contrast, remains at the margins. She is driven by “what works,” that is, the usefulness, the fit, of output in the world. “What works” is shorthand for code interacting with the world: interacting as life itself, animated by the black box. Indeed, in human self-enhancement, the alchemical tradition and that of AI are simultaneously incarnated, bringing the pure potentiality of the alchemical and the natural selection of AI into the closest proximity. It is clear, though, that natural selection is superimposed and alchemical potentiality subordinated. In human self-enhancement through AI, the human is the matter of transubstantiation, not its agent. The agent is inside the black box—and about what happens there, the sum of our ignorance will always remain greater than the sum of our knowledge. This is exactly how the Fourth Lateran Council of 1215 described the relation between humans and God.


FUTURE PERFECT SCIENCE

At a certain intermediary phase in history, alchemy and chemistry could not have been told apart. Those pursuing the transubstantiation of base metals into gold coexisted more or less peacefully with those who invested themselves in the early industrial metallurgical processing of mined metals. And, indeed, some went back and forth between alchemy and chemistry. The name of this disciplinary interregnum is chymistry. It was only in the early eighteenth century that a strict line was drawn between chemists and alchemists, and the latter were evicted from the scientific community. At around the same time, theological argument was evicted from the sciences.

AI is an interregnum, too, and it shares some characteristics with chymistry. While cosmological conflict simmers under the surface, technical and practical issues dominate, and sharp dividing lines between competing cosmologies will only emerge in the future, once it is possible to look back on where we are now. AI is a future perfect construct: it will have had a particular significance once the direction of the project catalyzed by it has become clear. With this in mind, we can see that the critique of machine learning’s results as insufficiently reproducible and intelligible is an anachronism of sorts. It is leveled against AI from a position located within a type of scientific positivism and humanism: positions soon, perhaps, to become obsolete. It took a few centuries to isolate and remove the question of God from the sciences. If we are still on the same trajectory, it might be a century or two before we can isolate and remove the question of the human from a science of AI.

TEMPORALITY OF THE APOCALYPSE

And I saw. Seven seals, seven trumpets, the vials of the wrath of God, and seven angels: but nowhere does the prophet see the end. The final revelation never occurs. Revelation and the last judgment, the last book of the Bible seems to suggest, either take place continually or never. They are not goals, not ends, but a relation between things.

Unending End of Time. The katechon turns the instantaneous apocalypse, incomprehensible in its absoluteness, into a prolonged affair in which every “and I saw” is followed by another “and I saw.” The steadily approaching moment of revelation lies between the transience of happiness and messianic intensity,5 not at the end of time but between two poles that can be connected across all pasts and futures. The katechon, by constraining the Antichrist, provides the necessary temporality for the messianic to come. The final revelation is suspended, and instead revelations continue on within temporality. The last judgment is spoken always and everywhere: listen, look.

The Need for Incompleteness. That “which is, and which was, and which is to come”6 is a formula that is repeated throughout Revelation. The prophet is asked to “write the things” he has seen, “the things which are, and the things which shall be hereafter.”7 The word spoken by him who asks the prophet to write, and who encapsulates all words, is a “two-edged sword,”8 something not fit for the purpose of speaking the one final judgment. Nor is the record complete. At crucial moments, the heavens fall silent9 and the prophet is told not to write down what the seven thunders told him—a voice from heaven commands silence.10 From Moses’s broken slate to the concluding book of the Bible, God’s word is never directly spoken or written. The text is incomplete reported speech. Without such indirectness and incompleteness, what would there be to interpret and to understand? The instructions to choreograph the angels and apocalyptic horsemen are given only through the prophet’s text: the crown that conquers, the sword that takes peace and kills, the balances that exchange.11 And I saw . . . power, violence, exchange, and death. Apocalypse turns out to name the continuous possibility of revelation. The end of the apocalypse: catastrophe. Catastrophe means: there is no truth left to be revealed. The temporal tension of revelation would be entirely dissolved.

A WORLD BEYOND REDEMPTION?

What the aspect of redemption brings into view is a fundamental ambivalence toward technology in its most radical expression—the singularity. The singularity is the endpoint of outsourced askesis and at the same time the image for a hubris of Babylonian proportions: instigated by humans and denying the human. The ambivalence toward AI leads to a dichotomous perspective: it is the solution; it is the problem. What is lost is a perspective that sees the split between rational and irrational as running through both sides, that of the technical, of AI, and that of the human, the living body.

WATCH THIS!

Digital systems and the human meet at hand-over points: a human looking at a system’s screen, a robotic team member signaling a human team member, a human neuronal signal being recorded by a system’s sensor, a human inputting a pattern of movements and clicks on the Internet. In the emerging world of war, hand-over points present themselves as merely technical, engineering questions. They are not. It is at these points that the political assumptions of the emerging world of war are expressed in their most condensed form: an idea of what makes the human, and an idea of what makes the machine, and how they are supposed to work together; an idea of what kind of society is desirable; an idea of what nature is; and an idea of how much control over our can be entrusted to machines. If you want to understand how our digital systems limit the range of our future political possibilities, you should prioritize the study of these concrete hand-over points. What might otherwise be seen as an engineering solution embodies the constitution of the emerging world of war. This constitution can only be undone by analyzing it. By analyzing it, we can make its constitutive elements reappear: assumptions about how human sensing and sensemaking can be translated into machine sensing and sensemaking; assumptions about how the human can be translated into the workings of a machine and back; and assumptions about how the human is the same everywhere and in everybody. Once these assumptions have been made to reappear, we are free to constitute ourselves differently.

REDEEMING THE WORLD

Think of AI as a mathematical reading of nature and as the science of nature’s single will. AI allows us to rationalize and manipulate nature by optimizing, for example, farming yields, human shopping behavior, or the targeting of enemies. Now assume—with Schopenhauer—that nature’s will is irrational, that it causes misery and suffering. Humans also have a part in, and are part of, this will, and warfare is but one form of the misery and suffering it causes. However, human intellect is able to recognize this and to begin negating that will. Humans, Schopenhauer claimed, can bring life to a standstill by practicing askesis, and they can thus, ultimately, redeem nature. In more concrete terms, Schopenhauer imagined this to be possible by indulging less in food and drink, abstaining from procreation, and disengaging from life to the point that the ascetic human no longer has a will.

Can AI be conceived as a form of askesis, as a shedding of flesh and procreation, a negation of life’s irrationality? Would AI-based warfare, then, not become less and less entangled in the single, irrational will that drives nature, less and less part of humanity’s irrational side? Would it not thus make room for the rational, redemptive side of humanity in war? Should we not pray for the singularity to come, and come fast, just as some pray for the return of the Messiah? This would change Schopenhauer’s order of redemption: the human gives up her redeeming role and is reduced to the nature that is to be redeemed.


The use of AI in warfare, however, is framed differently by developers and militaries. They emphasize that the human always retains ultimate control. If this claim is correct, the order of redemption is modified once more. The redeemer is under the ultimate control of a human reduced to her irrational nature, a human that desires victory or retribution, a fully carnal human who is part of the chain of suffering that characterizes the work of the will in nature. Under these conditions of outsourced askesis, there is no hope for redemption.

BLACK BOX RELIGION

According to Kant, all religious forms of belief (Glaubensarten) are associated with a secret or mystery, something that is known by all but cannot be publicly communicated. This secret has practical consequences, but it cannot be understood in theoretical terms. (We hear the voice of the moral law, but we cannot give a theoretical account of how it translates into actions.) In the belief in the world as a black box, the reverse is the case: everything can be publicly communicated, and nothing is secretly known. The one secret inside each and every box is, theoretically, fully transparent, but, as in the case of the moral law, the translation into action, the overall effect of the black-box system, remains hidden.

Apocalyptic stasis—endless civil war—is like an illegal poker game played by Christ, the Antichrist, and the katechon. The faces are unreadable, and the players are anonymous.

All will be reveiled. John receives mail. He then transmits a message already transmitted, testifies to a testimony that will again be that of another testimony. But who is at “the exchange [le central] of these telephone lines or the terminal of this endless computer?”12 A possible answer would be: the “very proper and discreet girl” of Shannon and Weaver’s engineering communication theory, who accepts your telegram, any telegram: “She pays no attention to the meaning, whether it be sad, or joyous, or embarrassing” but deals “with all that come to her desk.”13 This (im)proper and (in)discreet girl can be read as the perfectly neutral and meaning-blind operator handling all exchanges at the heart of AI and the singularity, the inside of the black box. But what does the anthropomorphism of this image hide? Where is it located? Does it have a location? Can it be held responsible? And if the girl at the central exchange treats all messages with perfect indifference, how can it be proper or improper, discreet or indiscreet? It is neither the one nor the other: her practice is blind. AI therefore does not mirror the “self-presentation of the apocalyptic structure of language.”14 On the contrary, it reveals by hiding the transcendental conditions of its operations. The girl’s neutrality is fake and provides a false authority. Look, this neutrality says, this is all I do—meanwhile performing sleights of hand like a street magician.

The Gaze in Language. The ground of the moral law—given in the way Jesus spoke—Kant calls “Liebenswürdigkeit” (translated as “worthiness of love”).15 “Liebenswürdigkeit” has a double sense. The moral law does not command; it does not ask to be followed because of its authority, but out of love. At the same time, in revealing our freedom to us, by revealing itself, it is gentle with us. It loves us and is therefore worthy of our love. A law that governs by command works like a mechanism; a law that governs by offering itself unconditionally works through its gaze. The way Jesus speaks must be understood as language gazing at us. Where there are human languages and voices, there is also what Kant called “Liebenswürdigkeit,” independent of a specific language or religion. (Kant’s argument is not a Christian argument.) The universal language of AI pretends to eradicate the need for translation, the process in which acknowledgment finds expression. That universal language builds the true tower of Babel.

Preparation by Analogy. For Kant, the story of the apocalypse is an analogy, “a historical narrative of the future world, which is not itself history.” The “moral world-epoch” is not empirically accessible to us, but we can “have a glimpse of it in the continuous advance and approximation toward the highest possible good on earth.”16 The moral law as analogy folds the apocalyptic moment back into time: act as if you will be judged on the last day. Every moment partakes of, or borders on, . We should “always [jederzeit—at all times] consider ourselves as actually the chosen citizens of a divine (ethical) state.”17 This, for Kant, is a permanent challenge that cannot be replaced with any positive dogma. The apocalypse is, indeed, permanent: it is the symbol for the ongoing transition from church-based to reason-based religion. The Church (dogma) is the Antichrist; reason is the katechon. Christ’s gaze—the way he spoke—points us toward the end of all things. Should Christianity ever cease to be worthy of love, the Antichrist “would begin his—albeit short—regime (presumably based on fear and self-interest),” and “the (perverted) end of all things, in a moral respect, would arrive.”18 Kant’s words are hopeful; the Antichrist’s reign would be short. Today, we are confronted with the prospect that this reign will be perennial, and that it can no longer be identified with a concrete figure.

Total Operativity. Call this the collapse of Messianic temporality into a pure presence: uninterrupted “operativity,” without any possibility of inoperativity or contemplation. Where all potentiality passes instantaneously into actuality, the distinction between process and creation, between signal and language, disappears: “What, indeed, is poetry, if not an operation in language which deactivates it, rendering its communicative and informative function inoperative in order to open up a new possibility of its use?”19 Under conditions of AI, any delay or hesitation is a fault to be removed, and what poetry accomplishes for language, and “politics and philosophy” for the human “potentiality to act”20—namely the possibility of contemplation and inoperativity—become obstacles: “Contemplation and inoperativity are . . . the metaphysical operators of anthropogenesis which, by freeing the living human being from any biological or social predetermination, and from any predetermined task, make it available [disponibile] to that particular absence of work we are used to calling ‘politics’ or ‘art.’ ”21 In a world of AI and black-box religion, by contrast, anonymous and invisible predetermination would be universal.

CONDEMNED TO WHAT?

Digitalization renders the laws of war obsolete. Formally still in force, they are unable to capture the relations that digitalization engenders. Forgoing human law, digitalization claims to be the first law, underwritten by nature. Abstraction ensures that everything may be converted into everything else in an ever-present, never-ending circularity. This circularity, with no past or future, hence no history, is what makes the subjects of digitalization akin to the Titans. “The law of emergence, which was the only law the Titans recognized,” Friedrich Georg Jünger wrote in 1947, “followed, like okeanos, a circular path, flowing back into itself; this law was the being-in-flow of things that swap their places only to return to their original place. The vis inertia asserts itself with the Titans, and their character displays a determinability that can be called mechanical. This regularity of nature is so precise that it can be calculated. An old, rock-hard causality lies in nature’s movement.”22

In the mythical order, Titanic law is first law, preceding the laws of gods and humans and coming after only chaos. Having been subdued in their battle with the gods, the Titans forever seek to resurrect this law. Seeking to apply human law to artificially intelligent war is a kind of Titanic resurrection attempt. “Within Titanic emergence,” Jünger went on, “a powerful will is at work. A human who seeks to depict this will through imitation exceeds his measure. He is made to seek something unattainable, and to succumb to the effort. The gods punish him by forever tethering him to that effort. The work of Sisyphus, tirelessly rolling the boulder up the hill before it slips out of his hands just below the summit, is Titanic.”23 The gods threw the subdued Titans into an underworld of elemental tasks. They punish humans resurrecting the Titans’ struggle similarly: the humans are condemned to the eternal repetition of an elemental task. To what elemental task are we about to be condemned?

CATASTROPHE

The catastrophe of the emerging world of war is anti-apocalyptic: a tendency toward universal reveiling by rendering indistinguishable analogy and identity, gaze and stare, machine and man, judgment and mechanism. As a result, everything is final, and nothing is final.

THE STARS UNDER OUR FEET

Like Thales, gazing aloft and wild to know what is up in the sky, we fail to see what is in front of us and under our feet. This is no laughing matter. Today’s stars are but positional data, and today’s astronomers do not fall into a well; they fall much deeper: where everything is a star, nothing shines. Theory and technology are inseparably bound up with one another within our lives: we are on board yet feel and behave like spectators. The difference between the grounded maid and the theoretical stargazer has disappeared—as has laughter, which always depends on the surprising discrepancy between what is expected and what actually happens.

In the absence of transcendence, there can be no theory, no analysis, of the present moment; theory, like humor, requires distance, and any genuine distance rests on transcendence. In the closed world of its absence, the repressed—what is rendered invisible—is the capital yielding the interest to be paid in the hell of the unconscious.24 A debt that cannot be repaid must issue in an addictive spiraling toward nowhere: a wild goose chase after a fake transcendence. We reach for the stars, and fail to see them under our feet. Laughter slowly becomes hollow and is finally drowned out by repetition. Thales, gazing at the stars and falling into a well, was at least still in the world. We, today, are permanently half in and half out of it, caught in hand-over points, both in- and outside our heads and our bodies.

RECOGNIZING VIOLENCE

With automated warfare, an order of violence is emerging in which decisions are handed over to algorithms or to man–machine assemblages that do not allow us to distinguish between the human and nonhuman input into a decision, between “responsibility” and “cause.”


What metaphor best represents the complexities of imagination, blindness, and vision? It is the figure of the black box, an altogether different symbol from Iustitia. While Lady Justice is the personification of the ideal, as well as the problem, of determining the just—and can any ideal manifest itself as something other than a never fully resolvable problem of conduct?—the image of the black box dispenses with the human figure altogether. Yet it is no less symbolic. Just as “justice” is not a person (or a god), the “black box” is not a box; it is a label for technological systems the inner workings of which we do not fully grasp despite the fact that they are of our own making. The form of the box represents containment: it separates what is inside from the outside. Its (lack of) color symbolizes unknowability. But, like Iustitia, the black box gives shape to an ideal: the sovereignty of algorithmic reason. The human species collectively appears to have placed itself in the position of the sorcerer’s apprentice of Goethe’s poem, and our world is now awash. This means we must approach the problem of recognizing violence anew. The twentieth century taught us to hear the storm in the rustling of the bureaucrat’s papers, a storm that proved able to consume entire populations. Will we also learn to hear violence in the soundless weaving of algorithms? Will we be able to distinguish, in the results of algorithmic reasoning, between a truthful and a deceptive reflection of the world? Will we encounter the Other’s face across the darkness of the black box?

The black box is a “magic” box: its contents cannot be determined from its outward appearance. We might imagine its inside as boundless or as two-dimensionally flat. The wall of the symbolic box does not, therefore, represent proportionality between inside and outside (as we would expect from an ordinary container) but untranslatability between the two. If we cannot translate algorithmic intelligence into human reason, should we trust the algorithm to truthfully translate human intention, judgment, and compassion?

Here we might recall Borges’s short story “On Exactitude in Science” (1946),25 which is about an empire where the science of cartography has become so refined that there is produced, eventually, a map of the empire in 1:1 scale, a map that completely coincides with the empire point by point. In Borges’s story, this marks the end of cartography; thereafter, the discipline declines, and the map is abandoned to the elements. But today, Borges’s map seems not so much an endpoint as a tipping point and one we have now passed. The amount of data from which algorithms build their internal representations of the world far outstrips the cognitive capacities of the human mind. Their map is denser with data than the reality we experience, than the world we live in. How can the model of Borges’s map surpass the 1:1 scale map in detail and still “match” the world? Distortions appear; the map wrinkles and folds up and in on itself. Some areas disappear from view. Imagine the map becoming a thousand times more detailed, covering the empire and itself in layers upon layers of hidden correspondences. Here we begin to approach the untranslatability of the black box. The map substitutes itself for reality and becomes the enigmatic real object that can never fully be mapped.

A second tipping point follows the first. When the map learns to draw itself, mapping and transforming its own map-making processes, it begins to imitate not only external reality but reason, intelligence, assessment, and decision making. We call this “automation,” but the meaning of that term is broad, stretching from blind reaction to autonomous self-governance. As our capacity to penetrate the technological system is outstripped by the system’s ability to map reality, the system moves toward autonomy, while we become increasingly pushed toward blind reaction. We become “automated,” reactive automatons. The black box governs itself with inhuman speed; we react reflexively to signals whose source we cannot understand. That source will always be infinitely beyond us.

In order to resist, we must slow things down enough to be able to act rather than reflexively react. We must take the time to insist on judging the proportionality of our deeds and on critically assessing the degree to which they truthfully and impartially reflect our shared world. We should ask ourselves, can we still cultivate our imagination into compassion so that we may encounter the Other’s face as an inviolable limit, the very precondition for our ability to recognize violence? That would mean to refuse to yield to the authority of the algorithm and thereby to the economic-political authority that draws its legitimacy from it. That might be our only chance to avoid sliding from the proto-human into the post human without ever having become truly human.

NOTES

1. See Walter Benjamin, “On the Concept of History,” in Selected Writings, vol. 4 (1938–1940), ed. Michael W. Jennings et al. (Cambridge, MA: Harvard University Press, 2003), 389–400.
2. “In assuming the burden of finality in the absence of certainty, an authority stakes the virtue of its community: if its judgments are not accepted as scrupulously fair, in its criteria and in its application of criteria, the community is shown to that extent not to provide a secure human habitation for its members; it fails to take up the slack between the uncertainty of judgment and the finality of decision.” Stanley Cavell, The Claim of Reason: Wittgenstein, Skepticism, Morality, and Tragedy (Oxford: Oxford University Press, 1979), 31.
3. Iris Murdoch, Metaphysics as a Guide to Morals (London: Penguin, 1992), 54.
4. Ibid., 322.
5. Walter Benjamin, “Theologico-Political Fragment,” in Reflections: Essays, Aphorisms, Autobiographical Writings, trans. Edmund Jephcott, ed. Peter Demetz (New York: Schocken, 1986), 313.
6. Rev 1:8 (KJV).
7. Rev 2:7 (KJV).
8. Rev 1:16 (KJV).
9. Rev 8:1 (KJV).
10. Rev 10:4 (KJV).
11. “A measure of wheat for a penny, and three measures of barley for a penny [eight to twelve times the usual price at the time]; and see thou hurt not the oil and the wine [only available to the rich].” Rev 6:5–6 (KJV).
12. Jacques Derrida, “On a Newly Arisen Apocalyptic Tone in Philosophy,” in Raising the Tone of Philosophy: Late Essays by Immanuel Kant, Transformative Critique by Jacques Derrida, ed. Peter Fenves (Baltimore, MD: Johns Hopkins University Press, 1999), 156.
13. Warren Weaver, “Some Recent Contributions to the Mathematical Theory of Communication,” in The Mathematical Theory of Communication, ed. Claude E. Shannon and Warren Weaver (Chicago: University of Illinois Press, 1998), 27.
14. Derrida, “Newly Arisen Apocalyptic Tone in Philosophy,” 157.
15. Immanuel Kant, “The End of All Things,” in Immanuel Kant, Religion within the Boundaries of Mere Reason and Other Writings, ed. Allen Wood and George di Giovanni (Cambridge: Cambridge University Press, 1998), 203, 205.
16. Immanuel Kant, “Religion within the Boundaries of Mere Reason,” in Immanuel Kant, Religion within the Boundaries of Mere Reason and Other Writings, ed. Allen Wood and George di Giovanni (Cambridge: Cambridge University Press, 1998), 138.
17. Ibid., 139 (translation modified).
18. Kant, “The End of All Things,” 205.
19. Giorgio Agamben, “Che cos’è l’atto di creazione?,” in Creazione e anarchia: L’opera nell’età della religione capitalista (Vicenza: Neri Pozza Editore, 2017), 51.
20. Ibid., 52.
21. Ibid., 50.
22. Friedrich Georg Jünger, Griechische Mythen (Frankfurt am Main: Klostermann, 2015), 114.
23. Ibid., 115.
24. Walter Benjamin, “Capitalism as Religion,” in Selected Writings, vol. 1 (1913–1926), ed. Marcus Bullock and Michael W. Jennings (Cambridge, MA: Harvard University Press, 1996), 288–91.
25. Jorge Luis Borges, “On Exactitude in Science,” in The Aleph and Other Stories, 1933–1969, ed. and trans. Norman Thomas di Giovanni (London: Penguin, 1998), 181.

Bibliography

Aalberts, Tanja. “Rethinking the Principle of (Sovereign) Equality as a Standard of Civilization.” Millennium: Journal of International Studies 42 (2014): 767–89.
Adorno, Theodor W. Negative Dialectics. Translated by E. B. Ashton. London: Routledge, 1973.
———. The Jargon of Authenticity. Translated by Knut Tarnowski and Frederic Will. Evanston, IL: Northwestern University Press, 1973.
———. “Introduction.” In The Positivist Dispute in German Sociology. Translated by G. Adey and D. Frisby, 1–67. London: Heinemann Educational, 1976.
———. “The Actuality of Philosophy.” Telos 31 (1977): 120–33.
———. “The Idea of Natural History.” Telos 60 (1984): 111–24.
———. Metaphysics: Concepts and Problems. Translated by Edmund Jephcott. Cambridge: Polity, 2000.
Agamben, Giorgio. Stasis: Civil War as a Political Paradigm. Translated by Nicholas Heron. Edinburgh: Edinburgh University Press, 2015.
———. “Che cos’è l’atto di creazione?” In Giorgio Agamben, Creazione e anarchia: L’opera nell’età della religione capitalista, 29–52. Vicenza: Neri Pozza Editore, 2017.
Allaby, Michael, and James Lovelock. The Great Extinction: What Killed the Dinosaurs and Devastated the Earth? London: Martin Secker and Warburg, 1983.
Anghie, Antony. Imperialism, Sovereignty, and the Making of International Law. Cambridge: Cambridge University Press, 2005.
———. “Rethinking Sovereignty in International Law.” Annual Review of Law and Social Science 5, no. 1 (2009): 291–310.
Angstrom, Jan. “Introduction: Debating the Nature of Modern War.” In Rethinking the Nature of War, edited by Isabelle Duyvesteyn and Jan Angstrom, 1–27. London: Frank Cass, 2005.
Angus, Ian. Facing the Anthropocene: Fossil Capitalism and the Crisis of the Earth System. New York: Monthly Review Press, 2016.



Arendt, Hannah. “The Archimedean Point.” Lecture at the College of Engineers, University of Michigan, 1968. In The Hannah Arendt Papers at the Library of Congress, Series: Speeches and Writings File, 1923–1975, n.d.
———. “The Conquest of Space and the Stature of Man.” In Hannah Arendt, Between Past and Future. Eight Exercises in Political Thought, 260–74. London: Penguin, 2006.
———. “The Conquest of Space and the Stature of Man.” The New Atlantis: A Journal of Technology & Society, no. 18 (Fall 2007 [originally published 1963]): 43–55.
Arkin, Ronald. “The Case for Ethical Autonomy in Unmanned Systems.” Journal of Military Ethics 9 (2010): 332–41.
———. “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture.” Technical Report GIT-GVU-07–11. Georgia Institute of Technology, 2012. Available at https://smartech.gatech.edu/handle/1853/22715.
Arquilla, John, and David Ronfeldt. In Athena’s Camp: Preparing for Conflict in the Information Age. Santa Monica, CA: RAND, 1997.
Asad, Talal. “Reflections on Violence, Law and Humanitarianism.” Critical Inquiry 42, no. 2 (2015): 390–427.
Assmann, Jan. Monotheismus und die Sprache der Gewalt. Vienna: Verlag Picus, 2006.
Atherton, Kelsey. “Targeting the Future of the DoD’s Controversial Project Maven Initiative.” C4ISRNET, July 27, 2018. http://c4isrnet.com/it-networks/2018/07/27/targeting-the-future-of-the-dods-controversial-project-maven-initiative/.
Barthes, Roland. Camera Lucida: Reflections on Photography. Translated by Richard Howard. New York: Hill & Wang, 1982.
Baudrillard, Jean. Impossible Exchange. Translated by Chris Turner. London: Verso, 2001.
Bauman, Zygmunt. “Reconnaissance Wars of the Planetary Frontierland.” Theory, Culture & Society 19, no. 4 (2002): 81–90.
Baxter, Stephen. Revolutions in the Earth: James Hutton and the True Age of the World. London: Weidenfeld and Nicholson, 2003.
Beck, Ulrich. World at Risk. Translated by Ciaran Cronin. Cambridge: Polity, 2009.
Benjamin, Walter. “Theologico-Political Fragment.” In Reflections: Essays, Aphorisms, Autobiographical Writings, translated by Edmund Jephcott and edited by Peter Demetz, 312–13. New York: Schocken, 1986.
———. “Capitalism as Religion.” In Selected Writings, vol. 1 (1913–1926), edited by Marcus Bullock and Michael W. Jennings, 288–91. Cambridge, MA: Harvard University Press, 1996.
———. “On the Concept of History.” In Selected Writings, vol. 4 (1938–1940), edited by Michael W. Jennings et al., 389–400. Cambridge, MA: Harvard University Press, 2003.
Benton, Lauren. A Search for Sovereignty: Law and Geography in European Empires, 1400–1900. Cambridge: Cambridge University Press, 2010.
Bhuta, Nehal, Suzanne Beck, Robin Geiss, Hin-Yan Liu, and Claus Kress, eds. Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge: Cambridge University Press, 2016.
Bigo, Didier. “Globalized (In)security: The Field and the Ban-Opticon.” In Terror, Insecurity and Liberty, edited by Didier Bigo and Anastassia Tsoukala, 10–48. London: Routledge, 2008.


Bills, Gwendelyn. “LAWS unto Themselves: Controlling the Development and Use of Lethal Autonomous Weapons Systems.” George Washington Law Review 83, no. 1 (2014): 176–208.
Blumenberg, Hans. The Legitimacy of the Modern Age. Cambridge: MIT Press, 1985.
Bonitzer, Pascal. “Deframings.” In Cahiers du Cinéma Volume Four 1973–1978: History, Ideology, Cultural Struggle, edited by David Wilson, 197–203. New York: Routledge, 2000.
Bonneuil, Christophe, and Jean-Baptiste Fressoz. The Shock of the Anthropocene: The Earth, History and Us. Translated by David Fernbach. London: Verso, 2016.
Borges, Jorge Luis. “On Exactitude in Science.” In The Aleph and Other Stories, 1933–1969, edited and translated by Norman Thomas di Giovanni, 181. London: Penguin, 1998.
Bowcott, Owen. “UK Opposes International Ban on Developing ‘Killer Robots.’ ” Guardian, April 13, 2015. https://www.theguardian.com/politics/2015/apr/13/uk-opposes-international-ban-on-developing-killer-robots.
Boyce, Michael W., Jessie Y. C. Chen, Anthony R. Selkowitz, and Shan G. Lakhmani. Agent Transparency for an Autonomous Squad Member. US Army Research Laboratory, 2015.
Braidotti, Rosi. The Posthuman. Cambridge: Polity, 2013.
Breger, Herbert. Die Natur als arbeitende Maschine. Zur Entstehung des Energiebegriffs in der Physik 1840–1850. Frankfurt: Campus, 1982.
Brooks, Rosa. “Drones and Cognitive Dissonance.” In Drone Wars: Transforming Conflict, Law, and Policy, edited by Peter L. Bergen and Daniel Rothenberg, 230–52. Cambridge: Cambridge University Press, 2014.
Campo, Joseph L. “Distance in War: The Experience of MQ1 and MQ9 Aircrew.” Air & Space Power Journal, 2015. Available at http://www.au.af.mil/au/afri/aspj/apjinternational/apj-s/2015/2015-3/2015_3_03_campo_s_eng.pdf.
Cavell, Stanley. The Claim of Reason: Wittgenstein, Skepticism, Morality, and Tragedy. Oxford: Oxford University Press, 1979.
Caygill, Howard. “Perpetual Police? Kosovo and the Elision of Police and Military Violence.” European Journal of Social Theory 4, no. 1 (2001): 233–42.
———. On Resistance: A Philosophy of Defiance. London: Bloomsbury, 2013.
———. “Arcanum: The Secret Life of State and Civil Society.” In The Public Sphere from Outside the West, edited by Divya Dwivedi and Sanil V., 21–40. London: Bloomsbury, 2015.
———. “Strategic Intervention and the Digital Capacity to Resist.” In Interventions in Digital Culture, edited by Howard Caygill, Martina Leeker, and Tobias Schulze, 45–60. Lüneburg: Meson Press, 2017.
Certeau, Michel de. “The Gaze of Nicholas de Cusa.” Diacritics 17, no. 3 (1987): 2–38.
Chamayou, Grégoire. Manhunts: A Philosophical History. Translated by Steve Rendall. Princeton, NJ: Princeton University Press, 2012.
———. A Theory of the Drone. Translated by Janet Lloyd. Harmondsworth: Penguin, 2015.


Chaturvedi, Sanjay, and Timothy Doyle. Climate Terror: A Critical Geography of Climate Change. Basingstoke: Macmillan, 2015.
Clarke, Alan, and Daniel Knudson III. “Examination of Cognitive Load in the Human-Machine Teaming Context.” Master’s thesis, Naval Postgraduate School, June 2018.
Clausewitz, Carl von. On War. Translated by Michael Howard and Peter Paret. New York: Alfred A. Knopf, 1993.
Code, Alan. “Aristotle: Essence and Accident.” In Philosophical Grounds of Rationality: Intentions, Categories, Ends, edited by Richard E. Grandy and Richard Warner, 411–39. Oxford: Clarendon, 1988.
Cole, Chris, Mary Dobbing, and Amy Hailwood. Convenient Killing: Armed Drones and the “Playstation” Mentality. Oxford: Fellowship of Reconciliation, 2010.
Connor, Steven. The Book of Skin. London: Reaktion, 2003.
Cortright, Edgar M. Exploring Space with a Camera. Washington, DC: Office of Technology Utilization, National Aeronautics and Space Administration (NASA), 1968.
Cowen, Deborah. The Deadly Life of Logistics: Mapping Violence in Global Trade. Minneapolis, MN: University of Minnesota Press, 2014.
Craven, Matthew. “Between Law and History: The Berlin Conference of 1884–1885 and the Logic of Free Trade.” London Review of International Law 3, no. 1 (2015): 31–59.
Crawford, Emily, and Alison Pert. International Humanitarian Law. Cambridge: Cambridge University Press, 2015.
Crutzen, Paul J. “The Anthropocene: Geology of Mankind.” Nature 415 (January 3, 2002): 23.
———. “The Background of an Ozone Researcher: A Brief Biography.” In Paul J. Crutzen: A Pioneer on Atmospheric Chemistry and Climate Change in the Anthropocene, edited by Paul J. Crutzen and Gunther Brauch, 3–60. Switzerland: Springer, 2016.
Crutzen, Paul J., and John W. Birks. “The Atmosphere after a Nuclear War: Twilight at Noon.” In Paul J. Crutzen: A Pioneer on Atmospheric Chemistry and Climate Change in the Anthropocene, edited by Paul J. Crutzen and Gunther Brauch, 125–52. Switzerland: Springer, 2016.
Cusa, Nicholas de. Nicholas of Cusa: Selected Spiritual Writings. Translated by H. Lawrence Bond. New York: Paulist Press, 1997.
———. “The Vision of God.” Translated by Jasper Hopkins. In Complete Philosophical and Theological Treatises of Nicholas of Cusa, vol. 2, 679–743. Minneapolis, MN: Arthur J. Banning Press, 2001.
Darwin, Charles. The Descent of Man, and Selection in Relation to Sex. New York: D. Appleton, 1871.
Davis, Heather, and Etienne Turpin, eds. Art in the Anthropocene: Encounters among Aesthetics, Politics, Environments and Epistemologies. London: Open Humanities Press, 2015.
Debord, Guy. Comments on the Society of the Spectacle. London: Verso, 1980.
Deeks, Ashley. “‘Unwilling or Unable’: Toward a Normative Framework for Extraterritorial Self-Defense.” Virginia Journal of International Law 52 (2012): 483–550.


Deitchman, Seymour, Val Fitch, Murray Gell-Mann, Henry W. Kendall, Leon M. Lederman, H. Mayer, William Nierenberg, Fred Zachariasen, and George Zweig. Air-Supported Anti-Infiltration Barrier. Alexandria, VA: Institute for Defense Analyses, Jason Division, 1966. Available at https://fas.org/irp/agency/dod/jason/barrier.pdf.
Deleuze, Gilles. “Spinoza: Power (Puissance), Classical Natural Right.” Translated by Simon Duffy. Lecture, University of Vincennes in Saint-Denis, Paris, December 9, 1980. Available at https://www.webdeleuze.com/textes/20.
———. Foucault. Translated by Seán Hand. Minneapolis: University of Minnesota Press, 1988.
———. “Postscript on ‘Control Society.’” In Gilles Deleuze, Negotiations 1972–1990. Translated by Martin Joughin. New York: Columbia University Press, 1995.
Deleuze, Gilles, and Félix Guattari. “Treatise on Nomadology—The War Machine.” In Gilles Deleuze and Félix Guattari, A Thousand Plateaus, 409–92. London: Bloomsbury, 2013.
Derrida, Jacques. Dissemination. Translated by Barbara Johnson. Chicago, IL: University of Chicago Press, 1981.
———. “No Apocalypse, Not Now (Full Speed Ahead, Seven Missiles, Seven Missives).” Diacritics 14, no. 2 (1984): 20–31.
———. “On a Newly Arisen Apocalyptic Tone in Philosophy.” In Raising the Tone of Philosophy: Late Essays by Immanuel Kant, Transformative Critique by Jacques Derrida, edited by Peter Fenves, 117–71. Baltimore, MD: Johns Hopkins University Press, 1999.
———. The Beast and the Sovereign: Volume II. Translated by Geoffrey Bennington. Edited by Michel Lisse, Marie-Louise Mallet, and Ginette Michaud. Chicago, IL: University of Chicago Press, 2011.
Diderot, Denis. “Letter on the Blind for the Use of Those Who See.” In Diderot’s Early Philosophical Works, translated by Margaret Jourdain. London: Open Court, 1916.
Dillon, Michael. Biopolitics of Security: A Political Analytic of Finitude. London: Routledge, 2015.
Dillon, Michael, and Julian Reid. The Liberal Way of War: Killing to Make Life Live. London: Routledge, 2009.
Dupuy, Jean-Pierre. Aux origines des sciences cognitives. Paris: Éditions La Découverte, 1999.
———. Pour un catastrophisme éclairé: Quand l’impossible est certain. Paris: Seuil, 2004.
Duyvesteyn, Isabelle. “Exploring the Utility of Force: Some Conclusions.” Small Wars & Insurgencies 19, no. 3 (2008): 423–43.
Dyer, Geoff. “US to Deploy Robot Combat Strategists.” Financial Times, April 27, 2016. https://www.ft.com/content/29b93562-0c5f-11e6-b0f1-61f222853ff3.
Edwards, Paul N. The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge: MIT Press, 1996.
———. “The World in a Machine: Origins and Impacts of Early Computerized Global Systems Models.” In Systems, Experts, and Computers: The Systems Approach in Management and Engineering, World War II and After, edited by Agatha C. Hughes and Thomas P. Hughes, 221–53. Cambridge: MIT Press, 2000.


Ekelhof, Merel A. C. “Complications of a Common Language: Why It Is So Hard to Talk about Autonomous Weapons.” Journal of Conflict and Security Law 22, no. 2 (2017): 311–31.
Elden, Stuart. “Contingent Sovereignty, Territorial Integrity, and the Sanctity of Borders.” SAIS Review 26 (2006): 11–24.
———. Terror and Territory: The Spatial Extent of Sovereignty. Minneapolis: University of Minnesota Press, 2009.
———. The Birth of Territory. Chicago, IL: University of Chicago Press, 2013.
Elkana, Yehuda. “Helmholtz’ ‘Kraft’: An Illustration of Concepts in Flux.” Historical Studies in the Physical Sciences 2 (1970): 263–98.
———. The Discovery of the Conservation of Energy. London: Hutchinson Educational, 1974.
Epicurus. The Extant Remains. Translated by Cyril Bailey. Oxford: Clarendon Press, 1926.
Evans, Nicholas G., and Jonathan D. Moreno. “Yesterday’s War; Tomorrow’s Technology: Peer Commentary on ‘Ethical, Legal, Social and Policy Issues in the Use of Genomic Technologies by the US Military.’” Journal of Law and the Biosciences 2, no. 1 (2015): 79–84.
Fang, Lee. “Google Hedges on Promise to End Controversial Involvement in Military Drone Contract.” Intercept, March 1, 2019. https://theintercept.com/2019/03/01/google-project-maven-contract/.
———. “Defense Tech Startup Founded by Trump’s Most Prominent Silicon Valley Supporters Wins Secretive Military AI Contract.” Intercept, March 9, 2019. https://theintercept.com/2019/03/09/anduril-industries-project-maven-palmer-luckey/.
Farocki, Harun. “Phantom Images.” Public 29 (2004): 12–22.
Fassin, Didier. Humanitarian Reason: A Moral History of the Present. Berkeley: University of California Press, 2012.
Feldman, Allen. Archives of the Insensible: Of War, Photopolitics, and Dead Memory. Chicago, IL: University of Chicago Press, 2015.
———. “War under Erasure: Contretemps, Disappearance, Anthropophagy, Survivance.” Theory & Event 22, no. 1 (2019): 175–203.
Feldman, Yoram, and Uri Blau. “Consent and Advise.” Haaretz, January 29, 2009. https://www.haaretz.com/1.5069101.
Fifth Review Conference of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects. Report of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), CCW/CONF.V/2, June 10, 2016.
Fligstein, Neil, and Doug McAdam. A Theory of Fields. Oxford: Oxford University Press, 2012.
Floridi, Luciano. Philosophy of Information. Oxford: Oxford University Press, 2011.
———. “Distributed Morality in an Information Society.” Science and Engineering Ethics 19, no. 3 (2013): 727–43.


Flusser, Vilém. Into the Universe of Technical Images. Translated by Nancy Ann Roth. Minneapolis: University of Minnesota Press, 2011.
Ford, Kenneth, and Clark Glymour. “The Enhanced Warfighter.” Bulletin of the Atomic Scientists 70 (2014): 43–53.
Foucault, Michel. Archaeology of Knowledge. Translated by A. M. Sheridan Smith. London: Routledge, 1972.
———. Discipline and Punish: The Birth of the Prison. Translated by Alan Sheridan. New York: Pantheon, 1977.
———. “Omnes et Singulatim: Towards a Criticism of ‘Political Reason.’” In Essential Works of Michel Foucault 1954–1984, vol. 3, Power, 298–325. New York: New Press, 2000.
———. Psychiatric Power: Lectures at the Collège de France 1973–1974. Edited by Arnold I. Davidson. Translated by Graham Burchell. Basingstoke: Palgrave Macmillan, 2006.
———. The Birth of Biopolitics: Lectures at the Collège de France 1978–1979. Translated by Graham Burchell. Basingstoke: Palgrave Macmillan, 2008.
———. Security, Territory, Population: Lectures at the Collège de France 1977–1978. Edited by Arnold I. Davidson. London: Macmillan, 2009.
———. On the Government of the Living: Lectures at the Collège de France, 1979–1980. Edited by Arnold I. Davidson. Translated by Graham Burchell. Basingstoke: Palgrave Macmillan, 2014.
———. Wrong-Doing, Truth-Telling: The Function of Avowal in Justice. Edited by Fabienne Brion and Bernard E. Harcourt. Translated by Stephen W. Sawyer. Chicago, IL: University of Chicago Press, 2014.
Fox-Keller, Evelyn. Refiguring Life: Metaphors of Twentieth-Century Biology. New York: Columbia University Press, 1995.
Freedberg, Sydney, Jr. “Joint Artificial Intelligence Center Created under DoD CIO.” Breaking Defense, June 29, 2018. https://breakingdefense.com/2018/06/joint-artificial-intelligence-center-created-under-dod-cio/.
Freedman, Lawrence. The Future of War: A History. London: Allen Lane, 2017.
Freud, Sigmund. Civilization and Its Discontents. Translated by James Strachey. New York: W. W. Norton, 1962.
Garcia, Denise. “Lethal Artificial Intelligence and Change: The Future of International Peace and Security.” International Studies Review 20, no. 2 (June 2018): 334–41.
Girard, René. Battling to the End: Conversations with Benoît Chantre. Translated by Mary Baker. East Lansing: Michigan State University Press, 2010.
Goethe, Johann Wolfgang. “Analysis and Synthesis.” In Collected Works, vol. 12, Scientific Studies, edited and translated by Douglas Miller, 48–50. Princeton, NJ: Princeton University Press, 1995.
Gregory, Derek. “The Everywhere War.” Geographical Journal 177, no. 3 (2011): 238–50.
———. “Drone Geographies.” Radical Philosophy 183 (2014): 8–19.
Gribbin, John, and Mary Gribbin. He Knew He Was Right: The Irrepressible Life of James Lovelock and Gaia. London: Allen Lane, 2009.


Griffin, Tim. “Compression.” October 135 (2011): 3–20.
Gros, Frédéric. Le Principe Sécurité. Paris: Gallimard, 2012.
Guattari, Félix. The Machinic Unconscious: Essays in Schizoanalysis. Translated by Taylor Adkins. Los Angeles, CA: Semiotext(e), 2011.
Guedj, Denis. The Parrot’s Theorem: A Novel. Translated by Frank Wynne. New York: Macmillan, 2013.
Guha, Manabrata. Reimagining War in the 21st Century: From Clausewitz to Network-Centric Warfare. Abingdon-on-Thames: Routledge, 2010.
Gunneflo, Markus. Targeted Killing. Cambridge: Cambridge University Press, 2016.
Gusterson, Hugh. Drone: Remote Control Warfare. Cambridge, MA: MIT Press, 2016.
Halpérin, Jean-Louis. “The Concept of Law: A Western Transplant?” Theoretical Inquiries in Law 10, no. 2 (2009): 333–54.
Hamblin, Jacob. Oceanographers and the Cold War: Disciples of Marine Science. Seattle: University of Washington Press, 2005.
———. Arming Mother Nature: The Birth of Catastrophic Environmentalism. Oxford: Oxford University Press, 2013.
Hamilton, Clive. Earthmasters: The Dawn of the Age of Climate Engineering. New Haven, CT: Yale University Press, 2013.
Han, Byung-Chul. The Transparency Society. Translated by Erik Butler. Palo Alto, CA: Stanford University Press, 2015.
Hansen, James V., Paul Benjamin Lowry, Rayman D. Meservy, and Daniel M. McDonald. “Genetic Programming for Prevention of Cyberterrorism through Dynamic and Evolving Intrusion Detection.” Decision Support Systems 43, no. 4 (2007): 1362–74.
Harlow, Barbara, and Mia Carter, eds. Archives of Empire: Volume II, The Scramble for Africa. Durham: Duke University Press, 2003.
Hayles, Katherine N. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago, IL: University of Chicago Press, 1999.
Heidegger, Martin. “The Question Concerning Technology.” In Basic Writings, edited by David Farrell Krell, 213–38. London: Routledge, 1978.
———. Parmenides. Translated by André Schuwer and Richard Rojcewicz. Bloomington: Indiana University Press, 1992.
———. The Fundamental Concepts of Metaphysics: World, Finitude, Solitude. Translated by William McNeill and Nicholas Walker. Bloomington: Indiana University Press, 1995.
———. The Fundamental Concepts of Metaphysics: World, Finitude, Solitude. Translated by William McNeill and Nicholas Walker. Bloomington: Indiana University Press, 2001.
———. “The Age of the World Picture.” In Off the Beaten Track, translated by Julian Young and Kenneth Haynes, 57–85. Cambridge: Cambridge University Press, 2002.
———. Plato’s Sophist. Translated by Richard Rojcewicz and André Schuwer. Bloomington: Indiana University Press, 2003.
Heisenberg, Werner. “Das Naturbild der heutigen Physik.” In Die Künste im technischen Zeitalter, edited by Bayerische Akademie der schönen Künste, 31–47. Darmstadt: Wissenschaftliche Buchgesellschaft, 1956.


Heyns, Christof. “Autonomous Weapons in Armed Conflict and the Right to a Dignified Life: An African Perspective.” South African Journal on Human Rights 33, no. 1 (2017): 46–71.
Hildebrandt, Mireille. “Legal and Technological Normativity: More (and Less) Than Twin Sisters.” Techné 12, no. 3 (2008): 169–83.
———. “Law as Information in the Era of Data-Driven Agency.” Modern Law Review 79, no. 1 (2016): 1–30.
Hoffman, Frank G. Conflict in the 21st Century: The Rise of Hybrid Wars. Arlington: Potomac Institute for Policy Studies, 2007.
———. “Hybrid Threats: Neither Omnipotent Nor Unbeatable.” Orbis 54, no. 3 (2010): 441–55.
Homeland Security News Wire. “US Army Releases First Field Manual for War in the Electromagnetic Spectrum.” March 6, 2014. www.homelandsecuritynewswire.com/dr20140306-u-s-army-releases-first-field-manual-for-war-in-the-electromagnetic-spectrum.
Hopkins, Jasper. Nicholas of Cusa’s Dialectical Mysticism: Text, Translation, and Interpretive Study of De Visione Dei. Minneapolis, MN: Arthur J. Banning Press, 1985.
Horowitz, Michael. “The Ethics and Morality of Autonomous Warfare: Assessing the Debate over Autonomous Weapons.” Daedalus: The Journal of the American Academy of Arts and Sciences 4 (2016): 25–36.
———. “Artificial Intelligence, International Competition, and the Balance of Power.” Texas National Security Review 1, no. 3 (May 2018): 36–57.
Hussain, Nasser. “The Sound of Terror: Phenomenology of a Drone Strike.” Boston Review, October 16, 2013. http://bostonreview.net/world/hussain-drone-phenomenology.
Ihde, Don. Expanding Hermeneutics: Visualism in Science. Evanston, IL: Northwestern University Press, 1998.
International Human Rights and Conflict Resolution Clinic (Stanford Law School) and Global Justice Clinic (NYU School of Law). Living under Drones: Death, Injury and Trauma to Civilians from US Drone Practices in Pakistan. September 25, 2012. Available at https://law.stanford.edu/publications/living-under-drones-death-injury-and-trauma-to-civilians-from-us-drone-practices-in-pakistan/.
Janich, Peter. Protophysik der Zeit. Konstruktive Begründung und Geschichte der Zeitmessung. Frankfurt am Main: Suhrkamp, 1980.
———. “Methodical Constructivism.” In Issues and Images in the Philosophy of Science, edited by D. Ginev and R. S. Cohen, 173–90. Dordrecht: Kluwer, 1997.
———. Was ist Information? Frankfurt am Main: Suhrkamp, 2006.
Jappe, Anselm. “Sohn-Rethel and the Origin of ‘Real Abstraction’: A Critique of Production or a Critique of Circulation?” Historical Materialism 21, no. 1 (2013): 3–14.
Jünger, Friedrich Georg. Griechische Mythen. Frankfurt am Main: Klostermann, 2015.
Kahn, Herman. On Thermonuclear War. Princeton, NJ: Princeton University Press, 1960.
Kaldor, Mary. “Elaborating the ‘New War’ Thesis.” In Rethinking the Nature of War, edited by Isabelle Duyvesteyn and Jan Angstrom, 210–24. London: Frank Cass, 2005.


———. New and Old Wars: Organized Violence in a Global Era, 3rd ed. Stanford, CA: Stanford University Press, 2012.
———. “Identity and War.” Global Policy 4, no. 4 (2013): 336–46.
———. “Missing the Point on Hard and Soft Power?” Political Quarterly 85, no. 3 (2014): 373–77.
Kant, Immanuel. “Religion within the Boundaries of Mere Reason.” In Immanuel Kant, Religion within the Boundaries of Mere Reason and Other Writings, edited by Allen Wood and George di Giovanni, 31–191. Cambridge: Cambridge University Press, 1998.
———. “The End of All Things.” In Immanuel Kant, Religion within the Boundaries of Mere Reason and Other Writings, edited by Allen Wood and George di Giovanni, 193–205. Cambridge: Cambridge University Press, 1998.
———. “Prolegomena to Any Future Metaphysics That Will Be Able to Come Forward as Science (1783).” Translated by Gary Hatfield. In Immanuel Kant, Theoretical Philosophy after 1781, edited by Henry Allison and Peter Heath, 29–169. Cambridge: Cambridge University Press, 2002.
Kapusta, Philip. “The Gray Zone.” Special Warfare (October–December 2015): 18–25.
Kelemen, Deborah, and Evelyn Rosset. “The Human Function Compunction: Teleological Explanation in Adults.” Cognition 111, no. 1 (2009): 138–43.
Kendall, Sara. “Cartographies of the Present: ‘Contingent Sovereignty’ and Territorial Integrity.” Netherlands Yearbook of International Law 47 (2017): 83–105.
———. “Immanent Enemies, Imminent Crimes: Targeted Killing as Humanitarian Sacrifice.” In Criminals and Enemies, edited by Austin Sarat, Lawrence Douglas, and Martha Umphrey, 130–54. Amherst: University of Massachusetts Press, 2019.
Kilcullen, David. “Complex Warfighting.” Future Land Operating Concept, Australian Army, Unclassified Draft Developing Concept, April 7, 2004.
———. “Counter-Insurgency Redux.” Survival 48, no. 4 (2006): 111–30.
———. Out of the Mountains: The Coming Age of the Urban Guerrilla. London: C. Hurst, 2013.
Kitchin, Rob, and Martin Dodge. Code/Space: Software and Everyday Life. Cambridge: MIT Press, 2011.
Kittler, Friedrich. “Real Time Analysis, Time Axis Manipulation.” Translated by Geoffrey Winthrop-Young. Cultural Politics 13, no. 1 (2017): 1–18.
Kline, Ronald R. The Cybernetic Moment: Or Why We Call Our Age the Information Age. Baltimore, MD: Johns Hopkins University Press, 2015.
Knoke, David. “‘It Takes a Network’: The Rise and Fall of Social Network Analysis in US Army Counterinsurgency Doctrine.” Connections 33, no. 1 (2013): 1–10.
Knorr-Cetina, Karin. “From Pipes to Scopes: The Flow Architecture of Financial Markets.” Distinktion 7 (2003): 7–23.
Knorr-Cetina, Karin, and Alex Preda. “The Temporalization of Financial Markets: From Network to Flow.” Theory, Culture & Society 24, nos. 7–8 (2007): 116–38.


Knorr-Cetina, Karin, and Urs Bruegger. “Global Microstructures: The Virtual Societies of Financial Markets.” American Journal of Sociology 107, no. 4 (2002): 905–50.
Koerner, Joseph Leo. “Hieronymus Bosch’s World Picture.” In Picturing Science, Producing Art, edited by Caroline A. Jones and Peter Galison, 297–323. Abingdon-on-Thames: Routledge, 1998.
———. Caspar David Friedrich and the Subject of the Landscape. London: Reaktion, 2009.
Kolbert, Elizabeth. The Sixth Extinction: An Unnatural History. New York: Picador, 2015.
Koza, John. The Genetic Programming Paradigm: Genetically Breeding Populations of Computer Programs to Solve Problems. Cambridge: MIT Press, 1992.
Kull, Kalevi. “On Semiosis, Umwelt, and Semiosphere.” Semiotica 120, nos. 3–4 (1998): 299–310.
Lacan, Jacques. Écrits: The First Complete Edition in English. Translated by Bruce Fink. New York: W. W. Norton, 2006.
Latour, Bruno. Face à Gaïa: Huit conférences sur le nouveau régime climatique. Paris: Éditions La Découverte, 2015.
Lenton, Tim. Earth System Science. Oxford: Oxford University Press, 2016.
Lenton, Tim, and Andrew Watson. Revolutions That Made the Planet. Oxford: Oxford University Press, 2011.
Leonard, Robert. Von Neumann, Morgenstern, and the Creation of Game Theory: From Chess to Social Science, 1900–1960. Cambridge: Cambridge University Press, 2010.
Lesinski, Gene, Steven M. Corns, and Cihan H. Dagli. “A Fuzzy Genetic Algorithm Approach to Generate and Assess Meta-Architectures for Non-Line of Site Fires Battlefield Capability.” IEEE Congress on Evolutionary Computation (2006): 2395–401.
Lewis, Michael. Flash Boys: Cracking the Money Code. London: Penguin, 2014.
Liljefors, Max. “In between the Human and the Animal: Subjectivity and Authority in Ann-Sofi Sidén’s Queen of Mud Project.” Journal of Art History 79, no. 4 (2010): 185–99.
———. “Neuronal Fantasies: Reading Neuroscience with Schreber.” In The Atomized Body: The Cultural Lives of Genes, Stem Cells and Neurons, edited by Max Liljefors, Susanne Lundin, and Andrea Wiszmeg, 143–70. Lund: Nordic Academic Press, 2012.
Lindqvist, Sven. A History of Bombing. New York: New Press, 2000.
Lippincott, David. “UAV Data Imaging Solutions Push Limits of Embedded Technologies.” Journal of Military Electronics & Computing (April 2016): 18–21.
Locklear, Mallory. “Ex-Pentagon Official behind Project Maven ‘Alarmed’ by Google Withdrawal.” Engadget, June 26, 2018. https://www.engadget.com/2018/06/26/pentagon-official-project-maven-alarmed-google-withdrawal/.
Loeve, Sacha. “Sensible Atoms: A Techno-Aesthetic Approach to Representation.” Nanoethics 5, no. 2 (2011): 203–22.
Lovelock, James. Gaia: A New Look at Life on Earth. Oxford: Oxford University Press, 1979.


———. The Revenge of Gaia: Why the Earth Is Fighting Back and How We Can Still Save Humanity. London: Penguin, 2006.
Lucretius. De rerum natura. Translated by William Ellery Leonard. Boston, MA: E. P. Dutton, 1916.
MacKenzie, Donald. “How Algorithms Interact: Goffman’s ‘Interaction Order’ in Automated Trading.” Theory, Culture & Society 36, no. 2 (2019): 39–59.
Marder, Michael. Energy Dreams: Of Actuality. New York: Columbia University Press, 2017.
Marion, Jean-Luc. “Seeing, or Seeing Oneself Seen: Nicholas of Cusa’s Contribution in De visione Dei.” Journal of Religion 96, no. 3 (2016): 305–31.
Marx, Karl. Capital: A Critique of Political Economy, vol. 1. Translated by Ben Fowkes. London: Penguin, 1976.
Massumi, Brian. Ontopower: War, Powers, and the State of Perception. Durham: Duke University Press, 2015.
McBurney, Vincent. “Professor Luciano Floridi on the Philosophy of the Infosphere.” Toolbox. Accessed May 9, 2019. http://it.toolbox.com/blogs/infosphere/professor-luciano-floridi-on-the-philosophy-of-the-infosphere-23608.
McChrystal, Stanley A. “Becoming the Enemy.” Foreign Policy 185 (2011): 66–70.
McLeary, Paul. “Pentagon’s Big AI Program, Maven, Already Hunts Data in Middle East, Africa.” Breaking Defense, May 1, 2018. https://breakingdefense.com/2018/05/pentagons-big-ai-program-maven-already-hunts-data-in-middle-east-africa/.
McNeill, J. R., and Peter Engelke. The Great Acceleration: An Environmental History of the Anthropocene. Cambridge, MA: Harvard University Press, 2014.
Meeting of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects. Report of the 2014 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), CCW/MSP/2014/3, June 11, 2014.
Mesarites, Nikolaos. “Description of the Church of the Holy Apostles at Constantinople.” Translated by Glanville Downey. Transactions of the American Philosophical Society 47, no. 6 (1957): 855–924.
Miah, Andy. “Posthumanism: A Critical History.” In Medical Enhancements and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, 71–94. Berlin: Springer, 2009.
Mini, Fabio. Che Guerra Sarà. Bologna: il Mulino, 2017.
Mirowski, Philip. Machine Dreams: How Economics Became a Cyborg Science. Cambridge: Cambridge University Press, 2002.
———. Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. London: Verso, 2013.
Moore, Jason W., ed. Anthropocene or Capitalocene? Nature, History and the Crisis of Capitalism. Oakland, CA: PM Press, 2016.
Morris, Charles W. “Foundations of the Theory of Signs.” In International Encyclopedia of Unified Science, vol. 1, no. 2, edited by Otto Neurath, Rudolf Carnap, and Charles W. Morris, 79–137. Chicago, IL: University of Chicago Press, 1938.


Morris, Jefferson. “UAV Battlelab Experiments with Feature Recognition Software.” Aerospace Daily & Defense Report, April 13, 2004. http://aviationweek.com/awin/uav-battlelab-experiments-feature-recognition-software.
Morris, Robert. “Three Folds in the Fabric and Four Autobiographical Asides as Allegories (or Interruptions).” Art in America 77 (1989): 142–51.
Morse, Philip M., and George E. Kimball. Methods of Operations Research. New York: Dover, 2003.
Morton, Oliver. The Planet Remade: How Geoengineering Could Change the World. London: Granta, 2015.
Münkler, Herfried. “Die Gestalt des Partisanen: Herkunft und Zukunft.” In Der Partisan: Theorie, Strategie, Gestalt, edited by Herfried Münkler, 14–39. Opladen: Westdeutscher Verlag, 1990.
———. Empires. Translated by Patrick Camiller. Cambridge: Polity, 2007.
———. Kriegssplitter: Die Evolution der Gewalt im 20. und 21. Jahrhundert. Berlin: Rowohlt, 2015.
Munro, Campbell. “Mapping the Vertical Battlespace: Toward a Legal Cartography of Aerial Sovereignty.” London Review of International Law 2 (2014): 233–61.
Murdoch, Iris. Metaphysics as a Guide to Morals. London: Penguin, 1992.
Nancy, Jean-Luc. The Muses. Translated by Peggy Kamuf. Palo Alto, CA: Stanford University Press, 1996.
———. The Sense of the World. Translated by J. S. Librett. Minneapolis: Minnesota University Press, 1997.
———. The Ground of the Image. Translated by Jeff Fort. New York: Fordham University Press, 2005.
———. The Creation of the World, or Globalization. Translated by François Raffoul and David Pettigrew. New York: SUNY Press, 2007.
———. “Of Struction.” Parrhesia 17 (2013): 1–10.
———. After Fukushima: The Equivalence of Catastrophes. Translated by Charlotte Mandell. New York: Fordham University Press, 2015.
Naughton, John. A Brief History of the Future: The Origins of the Internet. London: Phoenix, 2000.
Neocleous, Mark. “The Police of Civilization: The War on Terror as Civilizing Offensive.” International Political Sociology 5 (2011): 144–59.
———. “Air Power as Police Power.” Environment and Planning D: Society and Space 31 (2013): 578–93.
Nixon, Rob. Slow Violence and the Environmentalism of the Poor. Cambridge, MA: Harvard University Press, 2011.
Onlife Initiative, The. “The Onlife Manifesto.” In The Onlife Manifesto: Being Human in a Hyperconnected Era, edited by Luciano Floridi, 7–13. London: Springer, 2015.
Palmer, Douglas. Earth Time. Chichester: John Wiley, 2005.
Panksepp, Jaap. “The Periconscious Substrates of Consciousness: Affective States and the Evolutionary Origins of the Self.” Journal of Consciousness Studies 5, nos. 5–6 (1998): 566–82.


Parsa, Amin. “Knowing and Seeing the Combatant: War, Counterinsurgency and Targeting in International Law.” PhD diss., Lund University, 2017.
Pasquinelli, Matteo. “Arcana Mathematica Imperii: The Evolution of Western Computational Norms.” In Former West: Art and the Contemporary after 1989, edited by Maria Hlavajova and Simon Sheikh, 281–93. Cambridge: MIT Press, 2017.
Pellerin, Cheryl. “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End.” US Department of Defense News, July 17, 2017. https://dod.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/.
Penning, H. L. H. de, R. J. M. den Hollander, H. Bouma, G. J. Burghouts, and A. S. d’Avila Garcez. “A Neural-Symbolic Agent with a Mind’s Eye.” Neuro-Symbolic Learning and Reasoning Technical Report WS-12-11. Association for the Advancement of Artificial Intelligence, 2012.
Pickering, Andrew. “Cyborg History and the World War II Regime.” Perspectives on Science 3, no. 1 (1995): 1–48.
———. The Cybernetic Brain. Chicago, IL: University of Chicago Press, 2010.
Potok, Chaim. The Gift of Asher Lev. London: Penguin, 1990.
Protocol Additional to the Geneva Conventions of August 12, 1949, and relating to the Protection of Victims of International Armed Conflicts. June 8, 1977. 1125 UNTS 3.
Reichelt, Helmut. “Marx’s Critique of Economic Categories: Reflections on the Problem of Validity in the Dialectical Method of Presentation in Capital.” Historical Materialism 15, no. 4 (2007): 3–52.
Richter, Gerhard. Gerhard Richter: Text. Writings, Interviews and Letters 1961–2007. London: Thames & Hudson, 2009.
Rid, Thomas. Rise of the Machines: A Cybernetic History. New York: Norton, 2016.
Riskin, Jessica. The Restless Clock: A History of the Centuries-Long Argument over What Makes Living Things Tick. Chicago, IL: University of Chicago Press, 2016.
Roberts, Jessica L. “Good Soldiers Are Made, Not Born: The Dangers of Medicalizing Ability in the Military Use of Genetics.” Journal of Law & the Biosciences 2, no. 1 (2015): 92–98.
Rosen, Lawrence. Law as Culture: An Invitation. Princeton, NJ: Princeton University Press, 2006.
Rosenberg, Jay F. “Connectionism and Cognition.” In Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 293–308. Cambridge: MIT Press, 1990.
Rosfort, René. “Ambivalent Embodiment.” In The Atomized Body: The Cultural Lives of Genes, Stem Cells and Neurons, edited by Max Liljefors, Susanne Lundin, and Andrea Wiszmeg, 83–111. Lund: Nordic Academic Press, 2012.
Rouvroy, Antoinette. “Algorithmic Governmentality: Radicalisation and Immune Strategy of Capitalism and Neoliberalism?” Translated by Benoît Dillet. La Deleuziana—Online Journal of Philosophy 3 (2016): 30–36.
Sacks, Oliver. The Man Who Mistook His Wife for a Hat. London: Picador, 1986.


Sample, Ian. “Computer Says No: Why Making AIs Fair, Accountable and Transparent Is Crucial.” Guardian, November 5, 2017. https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial.
Samuel, Arthur L. “Some Studies in Machine Learning Using the Game of Checkers.” IBM Journal of Research and Development 3, no. 3 (1959): 210.
Santner, Eric L. Stranded Objects: Mourning, Memory, and Film in Postwar Germany. Ithaca, NY: Cornell University Press, 1990.
Santoni di Sio, Filippo, and Jeroen van den Hoven. “Meaningful Human Control over Autonomous Systems: A Philosophical Account.” Frontiers in Robotics and AI 5 (February 2018): 1–14.
Sassen, Saskia. Expulsions: Brutality and Complexity in the Global Economy. Cambridge, MA: Harvard University Press, 2014.
Schellnhuber, Hans Joachim, Paul J. Crutzen, William C. Clark, Martin Claussen, and Hermann Held, eds. Earth System Analysis for Sustainability. Cambridge: MIT Press, 2004.
Schmitt, Carl. The Concept of the Political. Translated by George Schwab. Chicago, IL: University of Chicago Press, 1996.
———. Theory of the Partisan: Intermediate Commentary on the Concept of the Political. Translated by G. L. Ulmen. New York: Telos Press, 2007.
Schmutter, Peter. “Object-Oriented Ontogenic Programming.” Abstract. 2002. Available at https://www.researchgate.net/publication/28351433_Object-Oriented_Ontogenetic_Programming.
Schönthaler, Philipp. “Vor Anbruch der Morgenröte.” In Philipp Schönthaler, Vor Anbruch der Morgenröte: Erzählungen, 5–57. Berlin: Matthes & Seitz, 2017.
Schürmann, Reiner. “Legislation-Transgression: Strategies and Counter-Strategies in the Transcendental Justification of Norms.” Man and World 17, nos. 3–4 (1984): 361–98.
———. “‘What Can I Do?’ In an Archaeological-Genealogical History.” Journal of Philosophy 82, no. 10 (1985): 540–47.
———. Broken Hegemonies. Translated by Reginald Lilly. Bloomington: Indiana University Press, 2003.
Searle, John R. The Construction of Social Reality. New York: Free Press, 1995.
Shannon, Claude, and Warren Weaver. The Mathematical Theory of Communication. Chicago: University of Illinois Press, 1998.
Shaw, Martin. The New Western Way of War. Cambridge: Polity, 2005.
Shiva, Vandana. Who Really Feeds the World? The Failures of Agribusiness and the Promise of Agroecology. London: Zed, 2017.
Simonite, Tom. “Pentagon Will Expand AI Project Prompting Protests at Google.” Wired, May 29, 2018. https://www.wired.com/story/googles-contentious-pentagon-project-is-likely-to-expand/.
“Slaughterbots.” YouTube video, 7:47. Posted by “Stop Autonomous Weapons.” November 12, 2017. https://www.youtube.com/watch?v=9CO6M2HsoIA.
Slayton, Rebecca. Arguments That Count: Physics, Computing and Missile Defense, 1949–2012. Cambridge: MIT Press, 2013.


Sloterdijk, Peter. “Air Quakes.” Environment and Planning D: Society and Space 27, no. 1 (2009): 41–57.
———. Spheres I: Bubbles. Translated by Wieland Hoban. Los Angeles, CA: Semiotext(e), 2011.
Smith, Rupert. The Utility of Force: The Art of War in the Modern World. New York: Vintage, 2008.
Sohn-Rethel, Alfred. “Das Geld, die bare Münze des Apriori.” In Beiträge zur Kritik des Geldes, edited by Paul Mattick, Alfred Sohn-Rethel, and Hellmut G. Haasis, 35–117. Frankfurt am Main: Suhrkamp, 1976.
———. Intellectual and Manual Labour: A Critique of Epistemology. Translated by Martin Sohn-Rethel. London: Macmillan, 1978.
———. Soziologische Theorie der Erkenntnis. Frankfurt am Main: Suhrkamp, 1985.
Steffen, Will, Angelina Sanderson, Peter D. Tyson, Jill Jäger, Pamela A. Matson, Berrien Moore III, Frank Oldfield, Katherine Richardson, Hans-Joachim Schellnhuber, Billie L. Turner, and Robert J. Wasson. Global Change and the Earth System: A Planet under Pressure. Berlin: Springer, 2004.
Sterling, Bruce. Shaping Things. Cambridge: MIT Press, 2005.
Steuer, Daniel. “The Exception of Psychoanalysis: Adorno and Cavell as Readers of Freud.” In Psychoanalysis, Literature, and Culture, edited by Laura Marcus and Ankhi Mukherjee, 82–101. Oxford: Wiley-Blackwell, 2014.
Strange, Susan. “The Westfailure System.” Review of International Studies 25 (1999): 345–54.
Streeck, Wolfgang. “How Will Capitalism End?” New Left Review 87 (2014): 35–64.
Summers, David. “Real Metaphor: Towards a Redefinition of the ‘Conceptual’ Image.” In Visual Theory, edited by Norman Bryson, Ann Holly, and Keith Moxey, 231–59. Cambridge: Polity, 1990.
Süskind, Patrick. Perfume: The Story of a Murderer. New York: Washington Square Press, 1991.
Terranova, Tiziana. Network Culture: Politics for the Information Age. London: Pluto Press, 2004.
Theodorou, Andreas, Robert H. Wortham, and Joanna J. Bryson. “Designing and Implementing Transparency for Real Time Inspection of Autonomous Robots.” Connection Science 29, no. 3 (2017): 230–41.
Torbati, Yeganeh. “Pentagon Creates Award for US Drone Pilots, Cyber Warriors.” Reuters, January 7, 2016. http://www.reuters.com/article/us-usa-military-awards-idUSKBN0UL2MN20160107.
Touryan, Jon, Anthony J. Ries, Paul Weber, and Laurie Gibson. “Integration of Automated Neural Processing into an Army-Relevant Multitasking Simulation Environment.” Lecture Notes in Computer Science 8027 (2013): 774–82.
Tribe, Keith. “The Political Economy of Modernity: Foucault’s Collège de France Lectures of 1978 and 1979.” Economy and Society 38, no. 4 (2009): 679–98.
Tribus, Myron, and Edward C. McIrvine. “Energy and Information.” Scientific American 224 (1971): 179–88.
Trimble, Steven. “Sierra Nevada Fields ARGUS-IS Upgrade to Gorgon Stare Pod.” FlightGlobal, July 2, 2014. www.flightglobal.com/news/articles/sierra-nevada-fields-argus-is-upgrade-to-gorgon-stare-400978/.


Tzouvala, Ntina. “TWAIL and the ‘Unwilling or Unable’ Doctrine: Continuities and Ruptures.” AJIL Unbound 109 (2015): 266–70.
Uexküll, Jakob von. “A Stroll through the Worlds of Animals and Men: A Picture Book of Invisible Worlds.” In Instinctive Behavior: The Development of a Modern Concept, translated and edited by Claire H. Schiller, 5–80. New York: International Universities Press, 1957.
United States Army Headquarters. Field Manual 3-38: Cyber Electromagnetic Activities. February 12, 2014. Available at www.fas.org/irp/doddir/army/fm3-38.pdf.
United States Department of Justice. “Lawfulness of a Lethal Operation Directed against a U.S. Citizen Who Is a Senior Operational Leader of Al-Qa’ida or an Associated Force.” Draft November 8, 2011. Available at https://www.law.upenn.edu/live/files/1903-doj-white-paper.
United States Deputy Secretary of Defense. “Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven).” Memorandum, April 26, 2017. Available at https://www.govexec.com/media/gbc/docs/pdfs_edit/establishment_of_the_awcft_project_maven.pdf.
United States Opening Statement at the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems, Geneva, April 9, 2018. Available at https://geneva.usmission.gov/2018/04/09/ccw-u-s-opening-statement-at-the-group-of-governmental-experts-meeting-on-lethal-autonomous-weapons-systems.
Van Dooren, Thom. Flight Ways: Life and Loss at the Edge of Extinction. New York: Columbia University Press, 2016.
Vince, Gaia. Adventures in the Anthropocene: A Journey to the Heart of the Planet We Made. London: Chatto & Windus, 2014.
Virilio, Paul. The Vision Machine. Translated by Julie Rose. Bloomington: Indiana University Press, 1994.
———. Desert Screen: War at the Speed of Light. Translated by Michael Degener. London: Bloomsbury, 2005.
Vismann, Cornelia. “Cultural Techniques and Sovereignty.” Theory, Culture & Society 30, no. 6 (2013): 83–93.
Voss, Tobias. “‘Ich habe keine Stimme mehr, mein ganzes Leben flieht’: Psychische Dimensionen des Guerilla-Krieges.” In Der Partisan: Theorie, Strategie, Gestalt, edited by Herfried Münkler, 292–321. Opladen: Westdeutscher Verlag, 1990.
Waldby, Catherine. The Visible Human Project: Informatic Bodies and Posthuman Medicine. London: Routledge, 2000.
Wallach, Wendell, Stan Franklin, and Colin Allen. “A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents.” Topics in Cognitive Science 2 (2010): 454–85.
Waters, Richard. “Musk’s Brain-Hacking Ambitions Face Scientific Headaches.” Financial Times, March 30, 2017. https://www.ft.com/content/64e70fac-155e-11e7-b0c1-37e417ee6c76.
———. “Intelligent Machines Are Asked to Explain How Their Minds Work.” Financial Times, July 20, 2017. https://www.ft.com/content/92e3f296-646c-11e7-8526-7b38dcaef614.


Weaver, Warren. “Some Recent Contributions to the Mathematical Theory of Communication.” In Claude E. Shannon and Warren Weaver, The Mathematical Theory of Communication, 1–28. Chicago: University of Illinois Press, 1998.
Weil, Simone. “War and Peace.” In Simone Weil, Formative Writings 1929–1941, edited and translated by Dorothy Tuck McFarland and Wilhelmina Van Ness, 227–78. London: Routledge, 1987.
Weizman, Eyal. The Least of All Possible Evils: Humanitarian Violence from Arendt to Gaza. London: Verso, 2011.
White House, The. “Report on the Legal and Policy Frameworks Guiding the United States’ Use of Military Force and Related National Security Operations.” December 2016. Available at https://www.lawfareblog.com/white-house-releases-report-legal-and-policy-frameworks-american-uses-military-force.
White, Stephen. “Brave New World: Neurowarfare and the Limits of International Humanitarian Law.” Cornell International Law Journal 41, no. 1 (2008): 177–210.
Wiener, Norbert. Cybernetics or Control and Communication in the Animal and the Machine. Oxford: Oxford University Press, 1948.
Wilke, Christiane. “How International Law Learned to Love the Bomb: Civilians and the Regulation of Aerial Warfare in the 1920s.” Australian Feminist Law Journal 44, no. 1 (2018): 29–47.
Winter, Jay. Sites of Memory, Sites of Mourning: The Great War in European Cultural History. Cambridge: Cambridge University Press, 2004.
Winter, Yves. “The Asymmetric War Discourse and Its Moral Economies: A Critique.” International Legal Theory 3, no. 3 (2011): 488–514.
Winthrop-Young, Geoffrey. “Afterword.” In Instinctive Behavior: The Development of a Modern Concept, translated and edited by Claire H. Schiller, 209–43. New York: International Universities Press, 1957.
———. “Siren Recursions.” In Kittler Now: Current Perspectives in Kittler Studies, edited by Stephen Sale and Laura Salisbury, 71–94. Cambridge: Polity, 2015.
Wittgenstein, Ludwig. Philosophical Investigations, 4th ed. Translated by G. E. M. Anscombe, P. M. S. Hacker, and Joachim Schulte. Chichester: Wiley-Blackwell, 2009.
———. Culture and Value. Revised edition, edited by Georg Henrik von Wright. Oxford: Blackwell, 1998.
Wolfendale, Jessica. “‘New Wars,’ Terrorism, and Just War Theory.” In New Wars and New Soldiers: Military Ethics in the Contemporary World, edited by Paolo Tripodi and Jessica Wolfendale, 13–30. Farnham: Ashgate, 2011.
Wortham, Robert H., and Andreas Theodorou. “Robot Transparency, Trust and Utility.” Connection Science 29, no. 3 (2017): 242–48.
Zahar, Alexander. “Ordering.” In The Oxford Companion to International Criminal Justice, edited by Antonio Cassese, 446–48. Oxford: Oxford University Press, 2009.
Zalasiewicz, Jan. The Earth after Us: What Legacy Will Humans Leave in the Rocks? Oxford: Oxford University Press, 2008.

Index

abstraction, real, 19, 21, 28–33, 36, Aristotle and Aristotelianism, 177, 194 49n161; in Adorno, 32–33. See also Arquilla, John, 47n142 Sohn-Rethel, Alfred artificial intelligence (AI), 78, 80; Adorno, T. W., 10, 11, 18, 32–33, accountability of, 48–49n155, 50n176, 55; and aesthetics, 33; on 198; ambivalence toward, 196; authenticity, 101n37 connectionist, 84–85, 87, 90, 96; Afghanistan, the War in, 79 development of, 82, 117; difficulty Agamben, Giorgio, 41n79 in understanding reasoning of, algorithms, 25, 81, 82, 114; authority 83–84; and genetic programming, of, 203; compression, 171–72; and 85; language of, 199; and law genetic programming, 85; and human incompatible, 5, 94–98, 106, 108, judgment, 106, 114, 118; military 200; and nature, 197; as new form use of, 1–2; and robotics, 79; use in of alchemy, 193–94; use in warfare, targeting of, 85–88, 113–16. See also 6, 79, 115, 197–98; visual, 128, artificial intelligence (AI); Lethal 138–39, 146 Autonomous Weapons Systems artificial life, 85 (LAWS) Asad, Talal, 118 Allaby, Michael, 66–67 Ashby, Ross, 23 Anduril Industries, 125n75 Assmann, Aleida, 94–95 Anghie, Antony, 121n25 Assmann, Jan, 94–95 Anthropocene, the, 5, 54, 61–66; and autonomy, 75, 80, 203; definition of, algorithmic warfare, 56; and earth 77, 107; and Lethal Autonomous systems theory, 59, 63, 64–67 Weapons Systems (LAWS), 76, 106, Arab–Israeli War (1973), the, 161n32 107; threats to, 69. See also freedom Arendt, Hannah, 41n79, 105, 114, Awlaki, Anwar al-, 121n28 152–56, 187, 192 ARGUS (Autonomous Real-Time Baran, Paul, 60–61 Ground Ubiquitous Surveillance Barthes, Roland, 140, 141 Imaging System), 131–32 Baudrillard, Jean, 35



Bauman, Zynmunt, 10, 27 Cusa, Nicholas de, 6, 165, 174–83, Beck, Ulrich, 39n58 185–87 Benjamin, Walter, 57, 191 Cuvier, Georges, 58, 64 Bentham, Jeremy, 147, 148 cybernetics, 80–82, 96, 102n41; and Bible, the, 95–96 anticontrol, 23; assumes rationality Bigo, Didier, 39n58 of nature, 85; definition of earth biometrics, 128 system in, 63, 68–69; equates human black box, 29, 67, 198, 200, 202–3; and and nonhuman life, 89, 193–94; artificial intelligence, 48–49n155, 83, equates humans and machines, 80, 194; and digital imaging technologies, 84, 89, 107; and Gaia theory, 65–66, 140; world, 23 67; James Lovelock’s, 73n33; Blau, Uri, 120n17 revolution, 64 Borges, Jorge Luis, 202 Bosch, Hieronymus, 129–30, 147 Darwin, Charles, 64, 134–35 brain–computer interface, 86 data, 34; and artificial intelligence, 82, Breger, Herbert, 42n85 83, 96–97; big, 6, 15, 115; do not Bryson, Joanna J., 83 represent phenomena, 153; -fication, Bush, George W., 113 167, 170, 173; and identification of the enemy, 28, 113–14; meta-, 166, Caerus Associates, 15, 28, 34 169, 171, 174, 175, 177–78; problems capital and capitalism, 21, 30, 31–33, created by vast amounts of, 131–32, 36, 70n7 202; role in new warfare of, 10, 15, catastrophe, 196; and the Cold War, 16, 28–29, 48n142; storage, 172–73; 58; environmental, 58–59, 62–64; world as, 4, 20, 22–26, 29 nuclear, 57–61; survivable, 56–66, 68 Debord, Guy, 37n13 Cenotaph, the, 69 Deeks, Ashey, 121n30 Certeau, Michel de, 176, 184 Defense Advanced Research Project Chakrabarty, Dipesh, 65 Agency (DARPA), 75, 100n13, 131; Chamayou, Grégoire, 28, 54, 112, 156 and Explainable Artificial Intelligence Chaturvedi, Sanjay, 57–58 (XAI) program, 83–84, 85; and Christianity, 96, 109, 129, 187, 199 Mind’s Eye project, 138 Clausewitz, Carl von, 12, 54 Defense Innovation Unit, 117 climate change, 57, 61, 62 Deleuze, Gilles, 35, 168, 170 cloud database, 166 Democritus, 140 collateral damage, 168, 173, 175, 178, Derrida, Jacques, 37n18, 180 185 Diderot, Denis, 139–40 colonialism, 105, 110–11, 120n20. See digital imaging technologies: and artificial also drones, colonial logic of use intelligence, 138; central to algorithmic of; imperialism; Law, International warfare, 128; create new forms of Humanitarian, colonialist history of blindness, 6, 128, 131–33, 137–42, Convention on Certain Conventional 146–52, 155; and human vision, 133, Weapons, 75 141–42, 147–48, 156, 159–60 counterinsurgency, 16, 101n35 Dillon, Michael, 54 Cowen, Deborah, 41n81, 43n89 disposition matrix, 28, 166, 175 Crutzen, Paul J., 61–65, 67, 72n24 Dodge, Martin, 27, 99–100n12


Doyle, Timothy, 57–58 French Revolution, the, 129 Draft Rules on Aerial Warfare (1923), Freud, Sigmund, 135 the, 122n36 Friedrich, Caspar David, 156–59 Dresden, the bombing of, 142–43 friend–enemy distinction, 14, 78, 175 drones, 79, 105, 106, 112, 115, 131; colonial logic of use of, 111–12; Galileo, 30, 44n96 development of, 115; and dwell time, General Act of the 1884–1885 167, 169, 173, 178, 181; effect on Conference of Berlin, 111 civilians, 123n47, 131, 181; pilots Geneva Convention (1864), the, 122n34 of, 135, 156, 158–60, 170, 176; and genomics, 79, 80, 89 recursion, 172, 173; and surveillance, Gibson, Laurie, 85 119n7, 128, 131–32 Girard, René, 12 Dunant, Henri, 122n34 Global Information Grid, the, 26, 146–47 Edwards, Paul N., 23, 45n107 globalization, 12, 18, 23, 41n82, Eisenhower, Dwight D., 58, 59 45–46n115; and omnivoyant warfare, Ekelhof, Merel A. C., 98n1 166, 178–79; political economy Elden, Stuart, 110, 121n23 of, 167; as positive force, 35; and emergence, 8, 82–83, 85 topologization, 166 energy, 21, 22 Goethe, Johann Wolfgang von, 17–18, enframing (Gestell). See Heidegger, 202 Martin Google, 6, 117–18, 124n69 Epicurus, 140 Gorgon Stare, 131 excarnation, 80, 90, 94, 96–98, 108, Gorschkov, Sergei, 161–62n32 147, 182 gray-zone conflicts, 10, 16–17, 40n65 exception, state of, 16, 28. See also Gregory, Derek, 27 Schmitt, Carl Gros, Frédéric, 40n58 Grotius, Hugo, 122n34 Facebook, 187 Guattari, Félix, 35, 168 facial recognition, 131, 166 Gulf War, the. See Operation Desert Farocki, Harun, 168–69 Storm Fautrier, Jean, 145 Gunneflo, Marcus, 122n37 Feldman, Yoram, 120n17 Gusterson, Hugh, 111 Fligstein, Neil, 35 Floridi, Luciano, 43n89 Hahn, Walter, 142–43, 150 Flusser, Vilém, 132–33 Hamblin, Jacob, 58–59, 72n19 Forrester, Jay, 45n104, 45n107, 56 Han, Byung-Chul, 147, 152–53, 154–56, Foucault, Michel: and entrepreneurial 159 subject, 34; and globalization, 35; Heidegger, Martin, 23–24, 24–25, 29, on panopticon, 147; and pastoral 46n120, 170 governmentality, 6–7, 165, 184–86; Heisenberg, Werner, 46n120, 154 on power, 174; on surveillance, 187 Helmholtz, Hermann von, 42n83 Fox-Keller, Evelyn, 43–44n92 Heyns, Christof, 76–77 Franck, James, 1–2 Hildebrandt, Mireille, 78, 99n11 freedom, 24, 69, 87. See also autonomy Hobbes, Thomas, 54


Hoffman, Frank G., 38n37 Kolbert, Elizabeth, 71n11 Holocaust, the, 145 Kull, Kalevi, 170 Hopkins, Jasper, 179 Horowitz, Michael, 122n39 Lacan, Jacques, 175 human in the loop, 94, 115 language, 20, 22, 44n99, 84, 138–39 human rights, 14 Latour, Bruno, 66, 73n33 Hussain, Nasser, 119n7, 123n47 law, 78, 81; international criminal, Hutton, James, 64 91–93, 120n22; and Lethal hybrid wars, 10, 14, 38n35 Autonomous Weapons Systems (LAWS), 75, 77, 90–94, 98, 99n10, Ihde, Don, 132, 137–38, 139, 146 107; and monotheism, 5, 78, 95–98, Immigration and Customs Enforcement, 108; and new forms of warfare, 18; 187 proceeds by way of analogy, 112; imperialism, 37–38n20, 38n35, 120n20 Roman, 95, 109, 120n20; rule of, 5, information, 20–22, 25–27, 36, 43n89, 14, 77, 93; and strict liability, 92; 43–44n92, 44nn94–95; age, 43n89; and telos of, 107, 108; temporal horizon transparency, 155; as a weapon, 27 of, 105; as Western-centric, 109, intention, 22, 47n131, 77, 82, 93, 202 120n20. See also artificial intelligence International Committee of the Red (AI), and law incompatible; Law, Cross, 96 International Humanitarian Internet, the, 60–61, 196; of Things, Law, International Humanitarian, 55, 26, 34 76, 90, 96, 99n10, 102n45; aims irregular warfare. See partisan warfare to regulate violence, 108, 124n72; colonialist history of, 5–6, 110–11; Janich, Peter, 22, 44n96 as discriminatory, 106, 112–13, 118; Jernigan, Joseph Paul, 50n184 rendered obsolete by digitalization, Jünger, Friedrich Georg, 200 200; and war crimes, 91 justice, 192, 193, 202 Law of Armed Conflict.See Law, International Humanitarian Kahn, Hermann, 59 Lenton, Tim, 64 Kaldor, Mary, 13–14, 38n36, 40n59 Leopardi, Giacomo, 53, 66 Kant, Immanuel and , 53, 63, Lethal Autonomous Weapons 64, 158, 198–99 Systems (LAWS): and criminal Kármán, Theodor von, 59–60 responsibility, 91–94, 108; katechon, 13, 195–96, 198, 199 and cybernetics, 81; deepen Kelemen, Deborah, 101n38 asymmetries between states, Kilcullen, David, 15–16 112–13; difficulty in testing, 89, kill boxes, 26, 28, 169–73, 178–79, 182 90, 93, 98; as discriminatory, 113; killer robots. See Lethal Autonomous and drone warfare, 112; and human Weapons Systems (LAWS) decision making, 5, 87–88, 90, 198, Kimball, George E., 45n104 201; inability of law to govern, 98; Kitchin, Rob, 27, 99–100n12 international law regarding, 75, Kittler, Friedrich, 172–73 77, 107, 110, 118; and meaningful Knorr-Cetina, Karin, 26, 48n154 human control, 76–77, 107; Koerner, Joseph Leo, 129, 156–57, 159 preemptive logic of, 6, 113–14,


117–18. See also law; Lethal National Security Agency, the, 187 Autonomous Weapons Systems NATO, 59 (LAWS) nature, 18, 22, 31, 42n85, 46n120, 96, Leucippus, 140 103n55, 158, 192, 197 Lindqvist, Sven, 113 Nelson, Greg, 88 Loeve, Sacha, 141 Neocleous, Mark, 121n33 logistics, 21, 41n81 neurotechnology, 79, 80, 101n36, 114 Lorimer, James, 121n33 New Wars, 10, 13–14, 17–18, Lovelock, James, 63, 66–67, 72n19, 38n21, 38n35; create conceptual 73n33 indistinctions, 4, 18–19, 36; and idea Lucretius, 140 of world as system, 20, 23, 33. See Lyell, Charles, 58, 64 also Kaldor, Mary n-gram modeling, 166 machine learning, 82, 85, 106–7, 115, Nietzsche, Friedrich, 32 193–95 Noriega, Manuel, 37n13 MacKenzie, Donald, 119n4 nuclear: catastrophe, 57–61; deterrence, Malabou, Catherine, 65 59–60; technology, 57; weapons, 1–2, Marion, Jean-Luc, 186 13, 58–59, 88–89 markets, 26–27 Marx, Karl and Marxism, 30, 32, 33, Obama, Barack, 55, 121n29 50n176, 55; and Capital, 71n7; on omnivoyance: as aperspectival, 6–7, concrete and abstract labor, 42n86 168, 174, 179; and drones, 169–71; Massumi, Brian, 113–14, 115, 119n4, as God’s eye view, 6; as law of laws, 123n48 6, 166–67, 175; sovereign, 178; and McAdams, Doug, 35 surveillance, 129–31, 147–48, 174, McChrystal, Stanley, 47–48n142 175, 177; and topologization, 167, Meadows, Dennis, 45n104 171; and war, 166–68, 178, 182. See media, 15 also surveillance Mesarites, Nikolaos, 129 Operation Desert Storm, 79, 168 Microsoft, 25, 46–47n129 Orozco, Gabriel, 136, 143–44, 150 military-industrial complex, 58 Mingasson, Gilles, 158 Panksepp, Japp, 133–34 Mini, Fabio, 39–40n58 PARC (Palo Alto Research Center), Morris, Charles William, 22, 44n99 83–84 Morris, Robert, 145 partisan warfare: and economy, 17, 19; Morse, Philip M., 45n104 imitated by regular armies, 10, 12–13, Münkler, Herfried, 37–38n20, 38n35 16, 36, 55; as model for new warfare, Murdoch, Iris, 127, 193 4, 10–13, 16, 20, 37n13, 38n20, 55 Musk, Elon, 80 Pasolini, Pier Paolo, 56 mutually assured destruction (MAD), 57 pattern-of-life analysis, 28, 166, 171, 178 Pickering, Andrew, 23, 24 Nancy, Jean-Luc, 35, 45–46n115, 56, Planck, Max, 153 57, 140–41; on globalization, 167, Polanyi, Karl, 41–42n82 168 policing, 14, 38n35, 39n58, 54–55 Napoleon, 11 Pollock, Jackson, 145


Potok, Chaim, 148 sovereignty, 16, 39n56, 54, 94, Preda, Alex, 26, 48n154 109, 182–83; of algorithmic presence, 24, 180 reason, 202; contingent, 5, 110, Project Maven, 6, 106, 114–19, 123n54 111–12; overlapping, 121n23; representational, 175; visual, 180 Quillet, Claude, 140 space race, the, 153 Steffens, Will, 61, 64 RAND Corporation, 59–61 Stefik, Mark, 84 Redell, David, 88 Sterling, Bruce, 34 Reichelt, Helmut, 33 Stiegler, Bernard, 65 responsibility, 4, 90, 108; criminal, Stoppani, Antonio, 63 90–94 Strange, Susan, 42n82 Richter, Gerhard, 143–44 Streeck, Wolfgang, 41–42n82 Ries, Anthony J., 85 surveillance, 39n58, 119n7, 174; risk, 16, 39n58, 40n59, 166; nuclear, 57 aperspectival nature of digital, robotics, 79, 80, 81, 83, 178 147–48; distorts the world it Ronfeldt, David, 47n142 examines, 131; as method of Rosen, Lawrence, 120n18 governance, 129; political theology Rosset, Evelyn, 101n38 of, 187

Sacks, Oliver, 146 targeting, 28, 197; and combatant/ Samuel, Arthur, 82 civilian distinction, 78, 111, 119n7, San Remo Institute, 96 175; by drones, 131; and genetic Sassen, Saskia, 41n76 programming, 85; use of algorithms Schmitt, Carl, 11, 13, 39n56, 41n79 in, 85–88, 113–16 Schopenhauer, Arthur, 197 technological determinism, 166 Schreitmüller, August, 143 technology, 24, 78–79; Schürmann, Reiner, 7, 70n3, 166–67, anthropomorphization of, 132–35, 171–72 138; as normative, 78; and the screens, 20, 25, 26–27, 34, 41n80, 159 posthuman, 133–34; and singularity, Searle, John, 149, 153, 154 196; and social form, 2 security: national, 17; provision as aim Teilhard de Chardin, Pierre, 63 of new warfare, 4, 16, 36, 38n35, 54, Teller, Edward, 59 166 Terranova, Tiziana, 43n89 September 11 attacks, the, 136 Theodorou, Andreas, 83 Shannon, Claude, 21, 42n88, 44n94, 198 Thomson, William, 42n85 Shaw, Martin, 39n58 Tiqqun, 67 Shiva, Vandana, 71n8 topology, 166, 167, 171 singularity, the, 6, 36, 76–77, 114, 119, Torah, the, 95, 109 196–98 Touryan, Jon, 85–87, 96, 101n35 Sloterdijk, Peter, 65, 171, 173 Trump, Donald J., 125n75 Smith, Sir Rupert, 11, 14–15, 16, 39n41 Tzouvala, Ntina, 121n33 Sohn-Rethel, Alfred, 10, 19, 21, 29–31, 36, 49n164, 50n176, 55, 103n57 Uexküll, Jakob von, 7, 24, 170, 174 Solferino, the Battle of, 122n34 United Nations, the, 110


Unmanned Aerial Vehicle Battlelab, 131
U.S. Army Research Laboratory, 83, 101n32
U.S. Department of Defense, 101n32, 114–17, 135, 147. See also Global Information Grid, the; Project Maven
U.S. Department of Justice, 110
Van Dooren, Thom, 69–70, 71n10
Vernadsky, Vladimir Ivanovich, 63, 67
Vietnam War, the, 72n19, 75
Vince, Gaia, 67–68
violence, 27, 54, 128; and the Anthropocene, 56, 62, 64–66; critiques of, 71nn8–9; and drone warfare, 135, 172; economy of, 113; humanitarian, 118, 119; and omnivoyant warfare, 168; recognizing, 191–93, 202–3; regulation of, 107, 108, 112, 124n72; state’s monopoly on, 14, 16, 54; subcontracting of, 55
Virilio, Paul, 6, 129, 131
Visible Human Project, the, 34, 50n184
Vismann, Cornelia, 169–70, 176
von Martens, Georg Friedrich, 122n34
von Neumann, John, 42n88, 59
war on terror, 113, 121n33
wars among the people, 10, 11, 14–15. See also Smith, Sir Rupert
weapons of mass destruction (WMD), 57
Weaver, Warren, 44n96, 198
Weber, Max, 16, 54
Weber, Paul, 85
Weil, Simone, 9
Weizman, Eyal, 120n17, 124n72
Wiener, Norbert, 44n101
Wilke, Christiane, 122n36
Winter, Jay, 69
Winthrop-Young, Geoffrey, 174
Wittgenstein, Ludwig, 9, 10, 17, 41nn74–75
Work, Robert, 100n13, 102n50, 115, 117, 124n71
World Trade Center bombing (1993), 136
World War I, 69, 173
World War II, 1, 75, 142–43, 145, 176
Wortham, Robert H., 83
Zalasiewicz, Jan, 61, 68

About the Authors

Howard Caygill is professor of modern European philosophy at Kingston University, UK. He studied at the Universities of Bristol, Sussex, and Oxford and has taught at the University of East Anglia, Paris VIII, and Goldsmiths College, University of London. He is also a member of the faculty at the Institute for Doctoral Studies in the Visual Arts (IDSVA). He has published widely on Kant, twentieth-century philosophy, and aesthetics. Recently, his study On Resistance: A Philosophy of Defiance appeared with Bloomsbury. The book covers topics ranging from Clausewitz, anti-colonial struggles, the French Resistance, and the Zapatista movement to surveillance, the manhunt, and digital resistance. Further publications include Of Force (with Andrew Benjamin), Life and Energy, and Technology and the Propitiation of Chance. Among those with an interest in political philosophy, he is one of the best-known British philosophers.

Allen Feldman is professor of media, culture, and communication at New York University. He has published widely on the political philosophy of violence and embodiment and on critical war studies. His recent book Archives of the Insensible: Of War, Photopolitics, and Dead Memory is a critical forensics of multiple crime scenes: Guantanamo terrorist tribunals, biometric counterfeiting, the Kantian/Sadean imperatives of drones, the trauma of transitional justice, and biopolitical animality. Feldman is a cultural anthropologist who has conducted ethnographic research on carcerality and hunger striking, transitional justice in South Africa, the policing of the AIDS-affected homeless in New York City, and enforced disappearance and visual culture in the global war on terror. He teaches the politics of the gaze, the political theology of media, slavocracy and mediology, biopolitical media, and media archeology.



Sara Kendall is senior lecturer and codirector of the Centre for Critical International Law at Kent Law School, University of Kent, UK. Her research focuses on the discursive forms and material practices of international law and global governance. After earning her doctorate at the University of California at Berkeley, she worked as a researcher in the Department of Public International Law at Leiden University and taught international relations at the University of Amsterdam. Her published work, including the coedited volume Contested Justice: The Politics and Practice of International Criminal Court Interventions, primarily focuses on international law, with a particular interest in humanitarian sentiments and practices in international criminal law, the law of armed conflict, and human rights.

Max Liljefors is professor of art history and visual studies at Lund University. He has published widely on visual historiography, art and law, video art and performance art, and the visual cultures of bioscience. He has extensive experience of interdisciplinary research in the medical humanities, as a work package leader in the Linnaeus research consortium BAGADILICO, and as an ethical advisor to the EU project Multisyn. His current research concerns approaches to the numinous in contemporary art and aesthetic dimensions of existential health.

Gregor Noll is professor of international law at the Department of Law, School of Business, Economics and Law, Gothenburg University. His research covers migration law, the law of armed conflict, the impact of artificial intelligence on law, and the theory of international law. Noll held the Pufendorf Chair at Lund University from 2012 to 2016 and colaunched the Gothenburg/Lund/Uppsala Migration Law Research Network in 2011. In 2014, he published the first full-length article in a refereed, A-ranked journal exploring the impact of brain–machine interfaces in weapons systems on the ability to implement the laws of war.

Daniel Steuer is research fellow at the Centre for Applied Philosophy, Politics, and Ethics (CAPPE) at the University of Brighton, UK. Until 2010, he was Senior Lecturer in German at the University of Sussex and a member of the university’s Centre for Social and Political Thought, which he also chaired for a number of years. Steuer’s research and publications attempt to straddle the divide between philosophy, literature, and science. He holds a first degree in biology, wrote his doctoral dissertation on Wittgenstein’s reception of Goethe’s Theory of Colours, and worked with a group of evolutionary biologists at the Senckenberg natural history museum in Frankfurt. He is also the translator of major works on, among others, Carl Schmitt and Jürgen Habermas.
