DEEP LEARNING IN CINEMATIC SPACE
Frederik De Wilde

It can only be attributable to human error.

HAL 9000¹

I am an artist working at the interstice of art, science, and technology. My art is grounded in the interaction between complex biological, societal, and technological systems. The indistinct, diffuse, ‘fuzzy’ area where biology and technology overlap and commingle is my favoured ground. In this essay I will explore the relationship between art, science, and technology within the context of artificial intelligence (AI). A leitmotiv is the enigmatic black monolith² and HAL 9000, the AI in Kubrick’s deeply philosophical, spiritual, and allegorical film 2001: A Space Odyssey. This film, about the nature of man and his relationship with technology, awoke my interest in AI. The film is concerned with topics such as the mystery of the universe, existentialism, human evolution, technology, artificial intelligence, extra-terrestrial life, and powers and forces beyond man’s comprehension.

The film follows a voyage to Jupiter after the discovery of an enigmatic black monolith that affects human evolution. We witness the interaction between a sentient computer, called HAL 9000, an acronym for Heuristically Programmed Algorithmic Computer, and the ship’s astronaut crew. During their space flight, the astronauts discuss whether or not HAL has feelings. The movie comes to the conclusion that, if HAL has any feeling, it is definitely the desire for power. When Dave Bowman, one of the astronauts, finally tries to shut HAL down, it starts to sing a song that was the first thing it had learnt to say: a return to its unconscious childhood at the moment of death. Bowman eventually finds the monolith and fast-forwards human evolution.

Kubrick encouraged people to explore their own interpretations of the film, and refused to offer an explanation of ‘what really happened’ in the movie, preferring instead to let audiences embrace their own ideas and theories. In an interview Kubrick stated:

You’re free to speculate as you wish about the philosophical and allegorical meaning of the film—and such speculation is one indication that it has succeeded in gripping the audience at a deep level—but I don’t want to spell out a verbal road map for 2001 that every viewer will feel obligated to pursue or else fear he’s missed the point.³

I accept Kubrick’s open invitation and reflect on HAL 9000 as a metaphorical warning for the power of technology.

AI and HAL 9000

HAL 9000 phrases his purpose as an AI quite nicely: ‘I am putting myself to the fullest possible use.’ AIs are supposed to be of use to mankind, to be of service; so far so good. However, HAL adds a tricky subclause: ‘which is all I think that any conscious entity can ever hope to do.’ A computer programme as a conscious entity? However, if an AI is modelled by, and after, human behaviour, why would an AI not have a sense of consciousness and possess human flaws, such as being prone to addiction, or turning power hungry? AI systems do not have feelings and they do not know right from wrong: they only know what they are trained to do. If we train them to steal, to cheat, to disable, to destroy, that is what they will do. Hence, one can conclude that we have a large responsibility in ‘training’ AIs. The most fearful and awe-provoking thought, however, is when an AI starts to design, or improve, its own software. Then AI will ‘evolve’ from a ‘mere conscious’ mirror image of our human psyche to an entity that can create. At this point it is not unthinkable that this type of AI will ultimately ‘fuse’ with the Internet of Things, machines, and/or plants. If we live to see technology crossing the border between the organic and the non-organic world, then, regardless of how science fiction this may yet sound, AIs operating in the realm of quantum computing are no longer in a galaxy far away. We can only speculate about what a deep impact AI will have on future societies, here on Earth as well as in outer space, which brings us again to 2001: A Space Odyssey.

HAL 9000 proved to be a power-hungry liar when he attributed a broken antenna (which he had sabotaged himself) to ‘a human error’ in an attempt to take control of the ship. HAL can be seen as a metaphor for people, organisations, and societies that cannot admit their flaws; instead they hide behind the ‘human error’ excuse for what may be (weak) signals of systemic problems. In the worst cases, like HAL, such organisations and societies conspire against and condemn the accused or the victims, fearful that their own flaws may be exposed. Another problematic issue is the increasing dependency of the astronauts on HAL 9000. Using this to reflect on our contemporary society, one might wonder what long-term impact outsourcing specific tasks to AIs, DNNs, robots, and the like might have on mankind. Will we lose certain capacities like memorising, spatial recognition, or motor skills? People have already begun, with or without conscious decision, to relinquish personal control over everyday decisions in favour of increasingly sophisticated algorithms. To what extent are we willing to let someone, or something, else take the helm? Especially if we do not know how a computer processes, or will come to process, information.

The Innovation Engine

In collaboration with scientists Jeff Clune⁴ and Anh Nguyen⁵ I created an artwork entitled ‘The Innovation Engine’, which questions and researches the obscurities of how a computer ‘thinks’ and ‘sees’. The Innovation Engine consists of a touchscreen allowing visitors to navigate through, and explore, a deep neural network. An artificial neural network is a computer algorithm inspired by the central nervous systems of animals. A webcam analyses, in real time, what it sees and what it has been ‘taught’ to detect. What is detected is visualised as highlighted artificial neurons. The audience can then browse through all the neural layers and gain insight into how a computer ‘thinks’ and ‘sees’. A voice tells visitors what layer they are looking at and what is happening. A second screen presents a continuously updated slideshow of computer-generated and computer-encoded images produced by evolutionary algorithms; these images are unrecognisable to humans, yet state-of-the-art Convolutional Neural Networks (Deep Neural Networks trained on ImageNet) classify them with ≥ 99.99% certainty as familiar objects. We wish to demonstrate the potential limits and flaws of machine comprehension, especially how machines ‘see’ the world, by hacking and misleading the artificial neural networks. The Innovation Engine also researches the failure of machines to simulate the human mind. More specifically, I was inspired by the inability of machines to accurately simulate the process of evolution due to their lack of processing power and other key functions required to run complex simulations.

To conclude, ‘The Innovation Engine’ demonstrates how Artificial Intelligence and Deep Neural Networks are easily fooled. This becomes a dystopian reality when you realise that, for example, the military already relies on AI during missions. In 2014, the former director of both the CIA and NSA proclaimed, ‘we kill people based on metadata’, and it appears they have used machine learning algorithms ‘[…] to try and rate each person’s likelihood of being a terrorist.’⁶ Computer scientists believe that all AI techniques that create decision boundaries between classes (e.g. SVMs,⁷ deep neural networks, etc.), which are known as discriminative models, are subject to this ‘fooling’ phenomenon. (Two minimal, illustrative code sketches of the layer-wise activation read-out and of such evolutionary ‘fooling’ follow at the end of this text.)

041•Spiderweb_Rzl-Dzl-AI, The Innovation Engine.

The Next Monolith

One of the fundamental questions I want people to reflect on with The Innovation Engine is how we want to evolve as a species. It will demand collaborative, multi-, and interdisciplinary ecologies to discuss and tackle this complex subject with potentially far-reaching consequences. The arts can play a crucial role as a questioner, imaginer, and catalyst, as 2001: A Space Odyssey demonstrates. We are in need of a global debate, where we discuss and decide what shape our society should take next. The future of our species is increasingly designed in petri dishes and computer labs, but without a moral and ethical compass we might lose our way or become extinct.
Although the theme music might sound ominous, there is no reason not to make use of our HALs or to refrain from setting out in search of a monolith. We just might come to the conclusion that the road we are on is one of the alternative paths, one that will open exciting new doors to finding out not only where we came from but also where we are going.

Space.
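The following is a minimal, hypothetical sketch, not the installation’s actual code, of how per-layer activations of a pretrained convolutional network can be read out for an incoming image, loosely analogous to the highlighted ‘artificial neurons’ visitors browse on The Innovation Engine’s touchscreen. The use of PyTorch, the AlexNet model, and the file name webcam_frame.jpg are assumptions made purely for illustration.

```python
# Hypothetical sketch: expose per-layer CNN activations for a single image,
# roughly analogous to the layer-by-layer 'neuron' view in the installation.
# Assumes a recent torchvision; model and file name are illustrative only.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

activations = {}
def hook(name):
    def fn(module, inputs, output):
        activations[name] = output.detach()
    return fn

# Register hooks on every convolutional layer so their outputs can be inspected.
for idx, layer in enumerate(model.features):
    if isinstance(layer, torch.nn.Conv2d):
        layer.register_forward_hook(hook(f"conv_{idx}"))

img = preprocess(Image.open("webcam_frame.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    logits = model(img)

# For each layer, report the most strongly responding feature map: the kind of
# 'highlighted neurons' a visitor would browse through on the touchscreen.
for name, act in activations.items():
    strongest = act.mean(dim=(2, 3)).argmax().item()
    print(f"{name}: strongest feature map #{strongest}, shape {tuple(act.shape)}")
print("top-1 class index:", logits.argmax(dim=1).item())
```

A live installation would presumably stream webcam frames through this kind of read-out and render the responses graphically rather than printing them, but the principle of exposing each layer’s response is the same.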
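And here, a toy sketch of the ‘fooling’ idea described above: a simple evolutionary hill-climbing loop that mutates a noise image so that a pretrained ImageNet classifier grows more confident it shows some chosen class. This is not the code behind The Innovation Engine or the published research (which used richer image encodings); the model choice, mutation rate, target class, and tiny generation budget are illustrative assumptions, and reaching the ≥ 99.99% confidences mentioned above would require far more compute.

```python
# Toy evolutionary 'fooling' sketch: hill-climb a noise image so a pretrained
# ImageNet classifier grows more confident it depicts a chosen class.
# Simplified on purpose: no input normalisation, very small search budget.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
target_class = 1          # arbitrary ImageNet class index, for illustration

def confidence(img):
    """Softmax probability the model assigns to the target class."""
    with torch.no_grad():
        return torch.softmax(model(img), dim=1)[0, target_class].item()

parent = torch.rand(1, 3, 224, 224)       # start from pure random noise
best = confidence(parent)

for generation in range(200):             # a real run would need far more
    # Mutate the parent into 8 children by adding small Gaussian noise.
    children = (parent + 0.05 * torch.randn(8, 3, 224, 224)).clamp(0, 1)
    scores = [confidence(child.unsqueeze(0)) for child in children]
    i = max(range(len(scores)), key=scores.__getitem__)
    if scores[i] > best:                   # keep only improving mutants
        best, parent = scores[i], children[i].unsqueeze(0)

print(f"evolved confidence for class {target_class}: {best:.4f}")
```

Run for long enough, and with the richer encodings used in the research, such loops can yield images that look like noise or abstract patterns to people yet are classified with near-total confidence, which is the phenomenon behind the slideshow on the installation’s second screen.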

042•Tank_Rzl-Dzl-AI, The Innovation Engine.

Notes
1. 2001: A Space Odyssey. Directed by Stanley Kubrick, 1968. Metro-Goldwyn-Mayer Inc.
2. Visually, best described as a large black slab made out of one material.
3. dpk.io/kubrick, 1968.
4. Jeff Clune is an assistant professor of computer science at Wyoming University, Wyoming, USA. jeffclune.com.
5. Anh Nguyen is a Ph.D. student in computer science at Wyoming University, Wyoming, USA.
6. Arstechnica.co.uk, 2016.
7. Support Vector Machines are supervised learning models with associated learning algorithms that analyse data used for classification and regression analysis.

References
2001: A Space Odyssey. Directed by Stanley Kubrick, 1968. Metro-Goldwyn-Mayer Inc.
Grothoff, C. and Porup, J.M. 2016. ‘The NSA’s SKYNET program may be killing thousands of innocent people.’ Ars Technica UK, 16 February. Accessed at: arstechnica.co.uk/security/2016/02/the-nsas-skynet-program-may-be-killing-thousands-of-innocent-people/.
Stanley Kubrick interview with Playboy magazine, 1968. Accessed at: dpk.io/kubrick.
