http://dx.doi.org/10.14236/ewic/RESOUND19.7

Deep Listening: Early Computational Composition and its Influence on Algorithmic Aesthetics
Tiffany Funk
University of Illinois at Chicago, Chicago, Illinois 60607
[email protected]

The popularity of the program DeepDream, created by Google engineer Alexander Mordvintsev, exploded due to the open source availability of its uncanny, hallucinogenic aesthetic. Once used to synthesize visual textures, the program popularized the concept of neural network training through image classification algorithms, inspiring visual art interrogating machine learning and the training of proprietary prediction algorithms; though DeepDream has facilitated the production of many mundane examples of surreal computer art, it has also helped to produce some conceptually rich visual investigations, including MacArthur "genius" awardee Trevor Paglen's recent installation A Study of Invisible Images. While trained neural networks are presently considered valuable to computer vision experimentation, a media archaeological investigation of the conceptual underpinnings of machine learning reveals the fundamental influence early sonic experiments in computational music have on its computational and conceptual framework. Early computational music works, such as Lejaren Hiller Jr. and Leonard Isaacson's Illiac Suite (1957), the first score composed by a computer, and Hiller and John Cage's ambitious multimedia performance HPSCHD (1969), used stochastic models to automate game-like processes, from Giovanni Pierluigi da Palestrina's Renaissance-era polyphonic instruction to the I Ching divination process of casting coins or yarrow stalks. Hiller's concerns regarding the historical use of compositional/mathematical gameplay uncover a conceptual and performative emphasis anticipating the "training" of visual models. While the adverse reactions of audiences to Hiller's compositions, written by what the press in 1957 derogatorily deemed "an electronic brain," parallel public reactions to the disturbing mutations of DeepDream, popular participation in the open source project signals a growing willingness to collaborate creatively with computers to interrogate both computational and cognitive processes.

Keywords: Machine learning; music composition; aesthetics.

1. INTRODUCTION

Just as an oft-quoted quip by Voltaire dictates, "The Holy Roman Empire was neither Holy, Roman, nor an Empire," one might say the same thing about machine learning (ML): though it is a crucial branch of current Artificial Intelligence (AI) research, ML does not actually involve "learning," but rather projection with the use of big data. In fact, in more techno-skeptical circles, ML is just a trendier term for linear algebraic systems too boring and time-consuming for humans to parse.

Despite the fatigue and skepticism surrounding the intertwining fields of ML, deep learning, and pattern recognition research, it's hard to deny the eerie yet seductive qualities of images constructed by trained image classification neural networks; for example, the deliberately over-processed computer vision images from DeepDream, created by Google engineer Alexander Mordvintsev, produced an international stir on social media with the open source availability of its uncanny, psychedelic aesthetic (see Figure 1). However, the current focus on computer vision experimentation neglects its sonic precursors: a reexamination of the institutional, algorithmic research in computational music composition during the 1950s and 60s reveals a compelling historical pretext to contemporary machine learning experimentation. [1] These early stochastic music compositions, painstakingly programmed on supercomputers ill-fitted for musical experimentation, were the result of two converging post-World War II phenomena. The first involved the sudden exuberance toward interdisciplinary art and technology experiments, an aftereffect of wartime scientific guilt; many of these collaborations were the result of the interdisciplinary nature of the computing centers within research institutions before the onset of recent disciplinary rigidity.

Figure 1: A late-stage DeepDream-processed photograph of three men in a pool. Deep_Dreamscope_(19822170718).jpg, by jessica mullen from austin, tx ("Deep Dreamscope"), is licensed under CC BY 2.0.

The second postwar phenomenon affecting machine learning's sonic pre-history involved sheer computational limitations: as post-war computers lacked the means to process or store vast quantities of data, speculative arts research in transcribing music for predictive analysis provided one of the few opportunities to craft algorithms pre-dating the training models of machine learning. Lejaren A. Hiller Jr. and Leonard Isaacson's 1957 composition The Illiac Suite, often credited as the first score composed by a computer, was the first of a series of early experimental compositions aimed at exploring music history through computational predictive analysis. Assisted by Isaacson, Hiller developed stochastic models with a conceptual and performative emphasis that ultimately explored computational and human sensory limitations. The contemporary exploitation of these limitations is exemplified by the over-processing of DeepDream's "training" of visual models; both visual and sonic aesthetics are ultimately shaped by the ability to hear tonal complexity or see (and comprehend) the construction of visual patterns organized through statistical probabilistic models.

However, both the sensory and technological limitations facing Hiller reflect larger movements in conceptual art and in atonal and serial music composition of the mid-20th century. Hiller's Illiac Suite, as an attempt to translate statistical analysis to aesthetic analysis, may be poetically equated to Dutch Baroque painter Johannes Vermeer's exploitation of the camera obscura's "disks of confusion." Current theories claim Vermeer's stylistic flourish of pointillés, the tiny dabs of pigment representing light refraction, mimics the imperfections of the camera obscura's lens (Mills, 1998). Just as the blurred, unfocused spots created by early optical technologies define Vermeer's style, Hiller's own musical pointillés reflect both his computational limitations—the relatively limited processing power of the ILLIAC supercomputer in combination with his relatively small data set and meticulously crafted probability tables—as well as his avant-garde musical influences, primarily Arnold Schoenberg and Milton Babbitt. While his methods pre-date machine learning processes, Hiller's approach to constructing the Suite's complex probability tables anticipated contemporary model-based machine learning practices in service of building a composition exploring meta-historical music aesthetics through computational means. The resulting aesthetic tested the limitations of both computer and human.
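Hiller's counterpoint rules and probability tables were far more elaborate than any short excerpt can show, but the underlying mechanism of table-driven stochastic choice can be sketched in a few lines of Python. Everything below is an illustrative assumption rather than material from the Illiac Suite: an invented first-order transition table in which the current pitch weights the random choice of the next.

import random

# Hypothetical first-order transition table: each current pitch maps to
# candidate next pitches and their weights. The values are invented for
# illustration and do not reproduce Hiller's rules or data.
TRANSITIONS = {
    "C": (["D", "E", "G"], [0.5, 0.3, 0.2]),
    "D": (["C", "E", "F"], [0.4, 0.4, 0.2]),
    "E": (["D", "F", "G"], [0.3, 0.4, 0.3]),
    "F": (["E", "G", "C"], [0.5, 0.3, 0.2]),
    "G": (["C", "E", "F"], [0.6, 0.2, 0.2]),
}

def generate_melody(start="C", length=16, seed=None):
    """Walk the table, drawing each pitch from the distribution conditioned
    on the pitch that precedes it (the 'present state')."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        candidates, weights = TRANSITIONS[melody[-1]]
        melody.append(rng.choices(candidates, weights=weights, k=1)[0])
    return melody

if __name__ == "__main__":
    # Seeding makes a given "performance" reproducible; 1957 is only a nod
    # to the Suite's date, not a meaningful parameter.
    print(" ".join(generate_melody(seed=1957)))

The point of the sketch is the conditioning itself: the probability of each note depends on the note before it, which is the same state-to-state logic named in the following section.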
2. MARKOV CHAIN MONTE CARLO: NASCENT MACHINE LEARNING

Computer scientist Tom Mitchell provides the most concise definition of ML algorithms, stating, "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E" (Mitchell, 1997, p. 2). Notably, Mitchell's definition concerns operation and performance rather than concepts of "thinking" or "learning," following Alan Turing's proposal in his seminal paper "Computing Machinery and Intelligence": he substitutes the question "Can machines think?" with "Can machines do what we (as thinking entities) can do?" (Turing, 1950, p. 433).

Though ML and its related fields—from constructing neural networks to computer vision training—dominate current computational research, the probabilistic methods that define the fields existed long before their popularization. Stanley Ulam, inventor of the statistical sampling technique known as the Markov chain Monte Carlo method (MCMC), recalled how his method—stringing together randomly generated variables representing the present state to model how current changes affect future states—was the result of a boredom-induced round of hospital solitaire:

    The first thoughts and attempts I made to practice [the Monte Carlo method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? … I wondered whether a more practical method than 'abstract thinking' might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, and more generally how to change processes described by certain differential equations into an equivalent form interpretable as a succession of random operations (Eckhardt, 1987, 131).
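Ulam's proposal translates almost directly into code. The minimal Python sketch below substitutes a far simpler stand-in game for the intricate rules of Canfield solitaire (a shuffled 52-card deck counts as a "win" if no card lands in its original position), but it estimates a probability exactly as Ulam describes: repeat the random experiment many times and count the successful plays.

import random

def play_once(rng):
    """One random trial of the stand-in game: shuffle a 52-card deck and
    count the deal as a win if no card ends up in its starting position."""
    deck = list(range(52))
    rng.shuffle(deck)
    return all(card != slot for slot, card in enumerate(deck))

def estimate_win_probability(trials=100_000, seed=1946):
    """Monte Carlo estimate in Ulam's sense: repeat the random experiment
    many times and report the fraction of successful plays."""
    rng = random.Random(seed)
    wins = sum(play_once(rng) for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    # With enough trials the empirical frequency settles near the true value
    # (about 1/e, roughly 0.368, for this particular stand-in game).
    print(estimate_win_probability())

Only the simulated experiment changes from application to application; the counting loop is the method itself.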
Ulam's idea was enthusiastically adopted by John von Neumann for applications to neutron diffusion, with the name "Monte Carlo" suggested by Nicholas Metropolis. (Eckhardt, 1987, describes the early history of MCMC, and Hitchcock, 2003, constructs a brief history of the Metropolis algorithm.) Von Neumann et al. recognized that MCMC's strength was