
Action and Perception: Embodying Algorithms and the Extended Mind
Palle Dahlstedt, August 25, 2015
[email protected]
Final draft for book chapter published in A. McLean and R. Dean (Eds.): OUP Handbook of Algorithmic Music, Oxford University Press, 2018

An artist friend once asked me: "Why do they always play the musical saw in science-fiction movies?" He meant the ethereal sine waves from theremins or early synthesizers that often accompany a spaceship gliding through the void – seemingly without effort. That sound, which is similar to the sound of the musical saw, has perfect periodicity, no timbral dynamics, and is difficult to locate directionally. It is seemingly detached from the physical world, from the struggle against gravity and inertia that has shaped us, our movements, and the sounds that emanate from them. Sounds from acoustic instruments strongly imply causality and agency – someone is playing on something. But the space saw is a lonely, disembodied sound.

My friend's question made me think about the physicality of music and sound, and about the strong link between music-making and human effort. I became aware of my urge to "conduct" when listening to works-in-progress in my studio, and a tendency to prepare musically for sudden transitions in my music in a way that is reminiscent of physical movements. I clearly want to feel, and even anticipate, the dynamics, transitions and rhythms in my body, and to experience the music not just intellectually, but with mind and body together. I realized that what I am trying to do in my music and research is to bring that physicality and embodiment - that I know so well from my acoustic musicianship (as a pianist) - into my electronic music-making, and onto the stage.

So, there clearly is a connection between sound, mind and body, in both directions. But what about algorithmic music? Algorithms are abstract, formal procedures. Still, they are used by humans, to produce music that may be performed by and is appreciated by humans with bodies. How does algorithmic music relate to the physical and to the body? Can a musician be a part of an algorithm? And how do we think about algorithms? In this chapter I discuss this connection in the light of cognitive science, and in the light of my own artistic practice.

Introduction Most research about algorithmic music from a cognitive perspective concerns computational modeling of musical cognition, i.e., how we can form computer models of musical thought, perception and generation, and teach machines to perform musical tasks based on how humans do them. The other perspective, how humans think and work with algorithmic music, is equally important, but less often discussed. What are our cognitive processes like when we create musical algorithms, use them to compose, and perform side by side with them, or in direct interaction with them?

The cognitive perspective is not complete unless the whole human being is included, both the cognitive/immaterial and the bodily/physical. Human performance on/with/in algorithms necessarily involves embodied action and interaction, and some kind of internalization of the algorithms in the performer's mind. How does the cognitive relate to the physical here?

There is no single comprehensive take on the cognition of algorithmic music, nor on the embodiment of musical algorithms, so this chapter draws from a number of related fields, such as musical cognition, cognition and psychology of programming, embodied performance, neurological research, as well as from personal experience as an artist working in the field. The main topics of the chapter are:

• The relationship between the cognitive and the physical – the mind-body problem and the materiality of algorithms.

• How we think about algorithms – what mental devices do we have to think about algorithms, and to form mental models of them?

• Situated and embodied cognition – how the mind depends on context, and how action and perception are integrated in the human cognitive mechanisms.

• Agency and the listener – who is the sender, and how do we communicate what goes on in algorithmic music, and in the mind?

• Algorithm as a way to extend the mind of the composer/performer.

• Performing on, with and in algorithms – can a human performer be an active part of the algorithm?

Where appropriate, I will describe works that illustrate these ideas.

The mind-body problem How do we bridge the gap from symbolic computation to physical actions? This question pertains to all computation and cognition, and is a modern equivalent to the old mind-body problem which has kept philosophers busy since ancient times. The musical results of algorithms have to take physical form to be perceived by listeners and human co-players. The symbolic code has to be translated to physical sound vibrations through computers, speakers and actuators, as part of a performative context that at some point in the chain includes humans as performers, co-pilots, interactors, or listeners. The words (the executable code) have to become flesh (physical reality), to paraphrase the Gospel of John of the New Testament (1:14). There is a rich cultural history behind this transformation, which spans from the classical mind-body problem of philosophy (from Plato to Descartes, and onwards), through the idea of magic as performative words, and finally to executable code in machines. Florian Cramer (2005) has given a rich and elaborate account of this cultural history in his book Words Made Flesh, from the perspective of what he calls “imaginative computation”. He writes:

“From magic spells to contemporary computing, this speculative imagination has always been linked to practical— technical and artistic—experimentation with algorithms. The opposite is true as well. Speculative imagination is embedded in today’s software culture.” (Cramer 2005, 9)

In some cases, e.g., when composing batch-generated algorithmic pieces and in live coding performances (see chapter XX), the composer’s bodily involvement with the creation of the music is minimal (mostly typing code), but still, an embodied perspective on what constitutes a good result is crucial. And in a live setting, the bodily experience of the music certainly feeds back into the composer/performer’s actions. This is evident, for example, in the sub-field of live coding called algo-rave, i.e., live coding of dance music, where bodily perception and evaluation of the result is part of both the artists' creative process and the listeners' appreciation of the music.

In other cases, the bodily engagement in the creation of the music is more obvious, and absolutely crucial, e.g., when performing with new instruments that employ algorithms at various levels to generate the sounding results under user control. Such instruments are sometimes called hyper-instruments (Machover 1992), and can range from triggering pre-recorded segments to embellishing acoustically played phrases into complex textures.

Algorithms can also be used to more directly map control input (physical gestures) to synthesized or processed sound (synthesis algorithms). Then, the algorithm is not necessarily constructed to generate musical structure, but the focus is on creating a musically meaningful relationship between the performer’s physical gestures and the sound. One challenge is to not put too many layers of (temporal) processing between input and output, in order to retain the bodily presence and finger-tip control over the sound.
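As a rough illustration of this design concern, the following sketch (with invented controller and parameter names, not drawn from any particular instrument) contrasts a direct, memoryless gesture-to-parameter mapping with one that inserts a single temporal processing layer:

```python
# Minimal sketch (hypothetical names): a direct, memoryless mapping from a
# controller gesture to synthesis parameters, versus one with a temporal layer.

def direct_map(pressure, tilt):
    """Zero-order mapping: every change in the gesture is heard immediately."""
    return {
        "amplitude": pressure,               # effort -> loudness
        "cutoff_hz": 200.0 + 8000.0 * tilt,  # position -> brightness
    }

class SmoothedMap:
    """One temporal layer (a low-pass filter on the gesture). Even this small
    amount of processing loosens the felt causality between hand and sound."""
    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.state = 0.0

    def __call__(self, pressure, tilt):
        self.state = self.smoothing * self.state + (1.0 - self.smoothing) * pressure
        return {
            "amplitude": self.state,
            "cutoff_hz": 200.0 + 8000.0 * tilt,
        }

if __name__ == "__main__":
    smoothed = SmoothedMap()
    for p in (0.0, 1.0, 1.0, 0.0):           # a sudden press and release
        print(direct_map(p, 0.5), smoothed(p, 0.5))
```

Even the single smoothing layer in the second mapping delays the response to a sudden press, which hints at why each added layer of temporal processing weakens the sense of finger-tip control.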

Such an instrument can also consist of a generative algorithm that is controlled in realtime through algorithmic mapping. For example, in my piece Circle Suite (Dahlstedt 2014), the composer performs physically on a generative system as a musician, reacting to the music on a millisecond scale, employing sub-conscious processes, learned reaction patterns and physical gestural expressions to control the behavior of the algorithm. It is an attempt at combining gestural expressivity with algorithmic structural complexity.

Materiality of algorithms A cognitive and cultural perspective on what algorithms can be is given by Goffey (2008) in his chapter on algorithms in the anthology Software Studies – a Lexicon (Fuller 2008). He says that an algorithm is an abstraction that is independent of programming languages and implementations, and that its embodiment in a particular machine implementation is irrelevant. But algorithms can also have a materiality, which was already inherent in the work of the early computer scientists, Goffey suggests, supported by Rosen (1988). This is evident, for example, in the very physical implementation (although hypothetical at the time) of the Turing machine, as an embodiment of computation in a rather simple machine that we can grasp and understand. An algorithm becomes real, just as its mental counterparts (thoughts) do, through its consequences and actions. Also, just like the thoughts and musical intentions of a fellow musician are mental configurations, the resulting musical patterns are material, as propagating sound waves that we can perceive. So, we can relate to, play with, and react to music coming from algorithms in very much the same way as we relate to music coming from fellow musicians. Empathetically, we can follow, find the beat, find the spaces and complement the sound with our own sounds, in a game of musical dialogue.

But the process goes both ways, according to Goffey. An algorithm requires structured data to be operable, and the critical operation of translating such data might be seen as “an incorporeal transformation, a transformation that, by recoding things, actions, or processes as information, fundamentally changes their status.” (Goffey 2008, 18) An algorithm is a statement (in Foucault's meaning of the word), that operates, also indirectly, on the world around it.

Goffey concludes: "What is an algorithm if not the conceptual embodiment of instrumental rationality within real machines?" But since formal logics are incomplete and machines break down or are hacked, algorithms are not as abstract as we want them to be.

A related argument is given by Kitchin and Dodge (2011), who describe software as both a product of the world and a producer of the world, since software mediates so many aspects of modern life. One of the founders of the field of cognitive musicology, Otto Laske (1988), argues that computers, by their very existence, provide a link between symbolic processing and the physical. They bridge the gulf between the mental and the physical, but also between natural science and the humanities (in this case, music studies). The prediction of this merger of computer science and the humanities was quite prescient, since it has recently become prevalent under the name of Digital Humanities (Schreibman, Siemens, and Unsworth 2008).

The materiality of algorithms has been shown quite literally by Japanese composer Masahiro Miwa (Berry 2008). In a series of works based on an approach he calls reverse simulation music, he let a group of musicians perform simple algorithms manually. In the first piece of the series, Matarisama (2002), musicians are seated in a circle, each with their hands on the shoulders of the next, holding bells and castanets. In a circular sequence they perform a simple Boolean process, resulting in complex rhythmic patterns. Each musician only has to remember and perform a small part of the algorithm. The work was first prototyped in software, and then turned into a physical performance.
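The following toy sketch shows how a distributed Boolean process of this kind can produce rich rhythmic patterns from tiny local decisions. The rule used here (each player XORs their own state with their neighbour's) is an illustrative assumption, not Miwa's actual rule:

```python
# Hypothetical sketch of the "reverse simulation" idea: a ring of players each
# carries one bit of state and applies a simple Boolean rule in turn. The rule
# (XOR with the neighbour) is chosen purely for illustration of how local
# Boolean decisions yield complex global rhythmic patterns.

def simulate_ring(n_players=8, n_rounds=16):
    state = [0] * n_players
    state[0] = 1                                 # a single seed "tap"
    for _ in range(n_rounds):
        new_state = []
        for i in range(n_players):
            left = state[(i - 1) % n_players]    # what the neighbour just did
            new_state.append(left ^ state[i])    # each player's tiny Boolean task
        state = new_state
        # bell for 1, castanet for 0 - the audible result of the round
        print("".join("B" if s else "c" for s in state))

if __name__ == "__main__":
    simulate_ring()
```

No individual player needs to grasp the whole pattern; complexity emerges from the ring as a whole.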

Around the same time, I experimented with similar approaches. In one project, I translated a set of real-time algorithms from a public installation, originally implemented in DSP hardware, to verbal instructions for choir singers, The Time Document - Algorithmic Structures for Choir (2000). This required, e.g., inventing new ways of conducting with big circular movements to allow for gradually shifting phase patterns. As in Miwa’s piece, each singer only had to keep track of a small component of a larger algorithm, but together they produced considerable complexity.

Fig. 1. A photo from one of the Disklavier concerts, where my students’ algorithms take physical form in a player piano. The setup in the picture also includes microphones and speakers for acoustic feedback, which was part of some of the pieces.

Algorithms can also be made concrete using electro-mechanical means. With my students at the Academy of Music and Drama (University of Gothenburg), I have arranged a series of projects with a modern player piano (the Yamaha Disklavier), a real acoustic piano that can be controlled from a computer. The explicit purpose of the project is to render the students' algorithms and their musical results into physical action and movement of matter. The piano is always placed in the middle of the stage with all lids removed (Fig. 1). This project would be totally meaningless to perform with a digital piano, even though it would in theory sound the same. The physicality of the algorithms playing the piano has a strong impact on the students, and it sometimes inspires them to explore that physicality to the extreme, abusing the piano and causing it to malfunction, producing noises unintended by the manufacturer.

From algorithm to instrument In 1999 I developed a tool for exploring a huge synthesis parameter space using interactive evolutionary algorithms, called MutaSynth (Dahlstedt 2001a). It operates on any sound engine, your own or an existing one, with a large number of parameters, up to a few hundred. You start with a few random parameter sets, and if you like one of them, the algorithm generates a set of "children", offspring that are slight variations of the parent. You listen to the sound of these new parameter sets and choose the one you like best, and so on. You can also mate sounds with each other to produce new sounds through recombination. Through this simple approach you can explore a huge search space guided by ear, looking for new or interesting sounds or musical material. It can be applied to almost any sound-generating algorithm having a fixed number of input variables. Later, this technique was implemented as a sound design tool in the NORD line of synthesizers, under the name Patch Mutator (Dahlstedt 2007) (see chapter XXX (Bown & Eldridge) for a more thorough description of this approach to sound design).
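The core loop of this kind of interactive evolution can be sketched as follows. This is a simplified reconstruction, not the original MutaSynth implementation: parameter ranges, mutation details and the "choose by ear" step (a stub standing in for the human listener auditioning each child through a sound engine) are all assumptions.

```python
# A minimal sketch of interactive evolution over a synthesis parameter space,
# in the spirit of MutaSynth (names and mutation details are assumptions).

import random

def mutate(parent, amount=0.1):
    """Produce a child: a slight variation of the parent's parameters."""
    return [min(1.0, max(0.0, p + random.gauss(0.0, amount))) for p in parent]

def mate(a, b):
    """Recombination: each parameter is inherited from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def choose_by_ear(children):
    """Placeholder for the human in the loop: audition each child, pick one."""
    return random.choice(children)   # stand-in for the listener's judgement

def evolve(n_params=100, n_children=8, n_generations=20):
    parent = [random.random() for _ in range(n_params)]          # a random starting sound
    for _ in range(n_generations):
        children = [mutate(parent) for _ in range(n_children)]
        children.append(mate(parent, random.choice(children)))   # optional recombination
        parent = choose_by_ear(children)                         # guided purely by ear
    return parent

if __name__ == "__main__":
    print(evolve()[:5])   # first few parameters of the final sound
```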

Using such a tool quickly becomes a physical act, approaching a performance situation. Each decision instantly changes the output. The continuous sound encourages precise timing of operations, and the unfolding search trajectory instinctively engages the physical musician in me. I start to embody the algorithm, and interact with it not only intellectually, but as a musician. MutaSynth was designed as a sound design and composition tool, but quickly became something like an instrument, with the limitation that each action is a stepwise move in the parameter space, due to the discrete nature of evolutionary algorithms.

The next step was to find a way to translate these step-wise explorations into smooth trajectories in timbre space, which coincided with the start of my duo pantoMorf (with Per Anders Nilsson). Our explicit research goal was to invent electronic instruments that would allow for physical improvisatory playing in a free improvisation setting. To enforce physicality and embodied playing we adopted a few dogmas, inspired by acoustic instruments: If we lift our hands, the instrument goes quiet. Volume is proportional to physical effort. And every change in the sound corresponds to a physical gesture.

I developed an algorithm that allows for smooth exploration of a huge timbre space with a set of basic controllers (initially pressure sensors), mapped to a set of vectors in timbre space (Dahlstedt 2009). Through this mechanism, very complex timbral trajectories can be expressed, and if an interesting sound is found, the whole mapping mechanism can be moved to that point instantly. Algorithmically, it is still an interpolated genetic algorithm, but with musician-friendly control (Dahlstedt and Nilsson 2008). The instrument is called exPressure pad, and it was the first of a whole family of instruments based on the same ideas.
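A schematic of this mapping principle might look as follows. It is a sketch under assumptions: the dimensionality, the way vectors are generated, and the re-centering behaviour are simplified stand-ins for the mechanism described in Dahlstedt (2009).

```python
# Sketch of a vector-based mapping: each pressure sensor is tied to a direction
# in the synthesis parameter space, the sounding point is the origin plus the
# pressure-weighted sum of those vectors, and "re-centering" adopts the current
# point as the new origin so exploration can continue from an interesting sound.

import random

class VectorMapping:
    def __init__(self, n_params=50, n_sensors=8, reach=0.2):
        self.origin = [random.random() for _ in range(n_params)]
        self.vectors = [[random.uniform(-reach, reach) for _ in range(n_params)]
                        for _ in range(n_sensors)]

    def current_point(self, pressures):
        """Pressures (0..1) continuously displace the sound from the origin."""
        point = list(self.origin)
        for pressure, vector in zip(pressures, self.vectors):
            for i, v in enumerate(vector):
                point[i] += pressure * v
        return point

    def recenter(self, pressures):
        """Adopt the current point as the new origin (the real instrument could
        also generate a fresh set of vectors here)."""
        self.origin = self.current_point(pressures)

if __name__ == "__main__":
    m = VectorMapping()
    print(m.current_point([0.5] * 8)[:3])
    m.recenter([0.5] * 8)
```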

This search for a physical way of performing on electronics was inspired by two very different sources. First, my studies in baroque keyboard playing in my youth introduced me to the clavichord, a keyboard instrument where each finger has a direct connection to the sounding string, while maintaining the polyphony of the keyboard. Second, I wanted to achieve, in the digital domain, the same direct physicality as circuit-bent instruments – analogue circuits played through touch, allowing your body to become part of the circuit.

To bring this process one step closer to closure, I wanted to integrate this performance paradigm with the origin of my musicianship – the piano. I designed a version of the mapping algorithm that used key velocities to control a set of processing parameters for the acoustic piano sound. This became the Foldings instrument (Dahlstedt 2015), which I have since used in a large number of performances, solo and in ensemble playing. It has also been used by pianist John Tilbury.

When I perform on any of these instruments, I execute an intricate search algorithm through my physical playing, in an embodied exploration which at the same time is music-making. The trajectory through search space unfolds in front of the audience, and becomes part of the musical narrative.

Music and cognitive science Cognitive science emerged as an interdisciplinary field of enquiry in the middle of the 1950's, out of computer science, cybernetics and psychology. The actual term was coined in the early 1970's. It is concerned with the study of the mind and its processes, in particular how information is represented and processed in the brain and in machines, but also with behavior that relates to such processes. Music was one of the first art forms to become a subject of computation, together with poetry (with Hiller and Isaacson’s (1958) Illiac Suite and Strachey’s (1954) Love Letters as very early examples). Both text and event-based music are easy to represent in compact symbolic form, suitable for storage and processing in early computers. Such symbolic processing was not new, but it had previously been carried out by hand (e.g., in serial composition or cut-up poetry). One might hypothesize that hiding the actual execution of these rather simple algorithms inside a computer made them appear more like cognitive processes than the mechanical processes they were. This may have triggered an interest in both understanding and emulating musical cognitive processes. In one of the founding papers of cognitive musicology, Otto E. Laske (1975) described something akin to simple mental models (schemata) for musical behavior, including higher-level control structures. He discussed representations of musical knowledge, but also how such knowledge must be linked to musical behavior and to social context.

Laske also argued that musical competence is paradoxical, since it serves as an intermediary between logical and empirical knowledge. But it is irreducible to either, since it is symbolic, while still bound to spatio-temporal contents. Also, musical knowledge differs from linguistic competence. It deals not with the properties of sound objects, but instead with their contextual function as signifiers of something. While being symbolic and based on logical reasoning, music deals with continuous (non-discrete) objects. It is about the design of artifacts, not (only) about communicative skills. Hence, it is fundamentally different from linguistic competence. According to Laske, musical competence is more akin to poetic competence.

Also, Laske suggests that music is "intercompetential", since it is a composite knowledge form that relies on adapting extraneous knowledge forms to its own functioning. He also argues that music is pre-linguistic, being a semiotic function to acquire and manipulate symbolic representations of its external world.

As we shall see, Laske was already in 1975 thinking in terms that later would become established in the field of cognitive science, and his thinking about social context was way ahead of its time.

Thinking about algorithms Early cognitive science was focused on the idea of the mind as a computer, centered around a processing unit capable of symbolic processing, with associated memory (Pylyshyn 1984). According to this view, sometimes called cognitivism, our thoughts are like programs. This is closely related to good old-fashioned Artificial Intelligence. It is a functional theory, where the actual neural implementation is not studied. Within this view, dealing with algorithms in mental processes is not so strange, since algorithms are the very matter of thought.

Another perspective that for a long time lived in the shadow of cognitivism was connectionism, which focused more on parallelism and the neural structure of the brain. A third perspective is interactionism, concentrating on agency and interaction between thinking agents.

Early computer music developed in parallel to the development of early computer science and artificial intelligence. At this time the computational/cognitivist perspective was predominant, and computers were slow and made for batch processing of procedural programs. This had practical and aesthetic implications for the computer music of the time.

Today, thanks to the development of sensor systems, cheap and accessible physical and gestural interfaces, and faster computers, we can run complex interactive realtime systems on stage. At the same time, the connectionist and interactionist approaches to cognitive science have developed immensely. This certainly has had aesthetic implications. For this reason, and because some of the older theories still make sense, this chapter will include both old and new theories of human cognition.

A good overview of how one can think about algorithms is given by Ormerod (1990) in a text about human cognition and programming, written from a cognitivist viewpoint. In this view, cognitive processes are determined by their function, not by the underlying hardware, which later became a source of controversy (Searle 1980), since the brain, with its parallelism and analog design, is very different from a serial computer based on the von Neumann model. Ormerod mentions connectionism, but deems it not so relevant for cognitive research into programming, since it focuses on the hardware of the mind (the brain) and not on its processes, and programming languages are serial and procedural. However, this is based on the assumption that our thinking about programming must be of the same type as programming itself. But nothing says that the creation and design of programs, which is a high-level activity (at least higher than executing the programs), must be based on the same procedural paradigm. It might be just the opposite: creative processes may instead require a different type of process, perhaps situated, parallel and non-deterministic.

In all cognitive theories, representations of knowledge and processes are crucial, and this has been one of the core problems of cognitive science. Many early theories focus on symbolic representations, but Ormerod et al. (1986) showed that different mental representations were used in different kinds of programming tasks, and that this had an effect on performance. In his dual coding theory, Paivio (1971) suggested that knowledge is represented in both symbolic/linguistic form and in visual form (or some other analog perceptually based format). Both of these are needed to capture essential properties, and they are then used together, and can be reciprocally triggered.

In Ormerod's analysis of the cognition involved in programming tasks (Ormerod 1990), he concentrated on the concepts of schemata and production rules for the representation of knowledge and processes. A schema is a set of propositions organized by their semantic contents, and it may be evoked or triggered by perceptual input (Bartlett 1932). The schema then provides a relevant body of knowledge, which can guide further actions. A schema is used to represent declarative knowledge.

A production rule is a pair of propositions of the form condition–action: the first is the desired goal, and the second is a set of actions required to reach that goal. A production rule may be triggered by perceptual input, by information from long-term memory, or by the results of another rule. Essentially, large sets of stored schemata and production rules form an extensive web of knowledge and associated actions, forming the basis for carrying out complex programming tasks.
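As a toy illustration of this idea (the rules themselves are invented for the example, not drawn from Ormerod), a production system can be reduced to condition-action pairs fired against a working state until no rule applies:

```python
# A toy production system: a rule fires when its condition matches the current
# state (perceptual input or the result of another rule), and its action
# updates the state. The musical content of the rules is invented.

def make_rules():
    return [
        (lambda s: "phrase_ended" in s,        lambda s: s | {"goal: answer phrase"}),
        (lambda s: "goal: answer phrase" in s, lambda s: s | {"play response"}),
    ]

def run(state, rules):
    changed = True
    while changed:                        # keep firing until no rule changes anything
        changed = False
        for condition, action in rules:
            new_state = action(state) if condition(state) else state
            if new_state != state:
                state, changed = new_state, True
    return state

if __name__ == "__main__":
    print(run({"phrase_ended"}, make_rules()))
```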

Another important theory is Shiffrin and Schneider's (1977) theory of controlled and automated processes. Controlled processes require attentional resources and are limited in capacity, while automated processes do not require attentional resources and are much more efficient. On the other hand, they are not modifiable. When a controlled process is repeated many times, it can gradually become an automated process, increasing efficiency but reducing flexibility, for example when learning to play an instrument. In the beginning you have to think about every movement of every finger to produce a specific note sequence, but after extensive training, the task becomes automated and you can concentrate on the high-level control processes (Pressing 1988). This also applies to programming and algorithm design: a skilled programmer can concentrate on the higher-level features of the program, while the repetitive details are handled by automated processes. However, since automated processes are pre-conscious, they may be inappropriately triggered, which may be a cause of error. The process may then have to be performed slowly, as a controlled process again, to make it possible to find the mistake, something that many programmers will probably recognize.

Mental imagery and mental models As a programmer, I know well that I can perform computations, and simulations of computations of limited complexity, in my mind, to find out how to solve a particular programming problem. The result may sometimes be wrong (or we would have no bugs) but it is often worth the effort. But it is hard to draw any further general conclusions from such an introspective observation. How does it work, and what are we capable of in terms of mental processing, prediction and simulation? To discuss mental models in the context of algorithmic music, we must tackle this, since these are the kinds of mental models a composer needs in order to work consciously with programming generative algorithms and with algorithmic tools, and these are the models that an advanced listener may form in her mind from repeated listening.

Let us start with mental imagery. A mental image is a configuration of perceptual-like features, in analogy to a real-world image. It can be a memory of something previously perceived, or a construction of known elements. According to anecdotal evidence from many artists and scientists, visual imagery and visual thinking play a large role in their work. In a rather thorough review of empirical research into creative and generative use of mental imagery, Finke, Ward and Smith (1992) establish the basic characteristics of such images, and they also tell of a number of experiments in combinatorial creative tasks performed through mental images and mental models.

There seems to be a direct neural link between mental images and the actual perceptual neural circuits that deal with real perception. In other words, mental images can contain more than just verbal descriptions and propositions, and it has been shown that mental imagery of music is very often multi-modal (Bailes 2007). It takes longer to form a mental image of a complex entity than a simple one, and the time seems directly related to the number of components and their relationships (Kosslyn et al. 1983).

Still following Finke et al., we are also able to perform various transformations on mental images, e.g., movement and rotation. Such transformations are carried out continuously and holistically, with preserved relations between parts. So, we can draw the conclusion that there is a transformational equivalence between mental images and physical configurations, and this makes it possible to anticipate changes and to predict future states of physical systems and processes.

Since computation can often be thought of as spatial configurations of various entities (with, e.g., variables as slots containing figures or different amounts of something, loops as repeated actions on such objects, etc.), we can assume that we can in the same way simulate and predict future states of computational processes, within certain complexity constraints.

More interestingly, we are able to mentally synthesize and transform images so that they show unexpected properties, such as novelty, ambiguity and emergence. For example, ambiguous images can be thought of as organized in several different ways, and emergent properties are found when the parts of an image are configured such that additional, unexpected features result, which were not intentionally created. Finke et al. review a series of experiments on mental image synthesis based on combinatory creative tasks, involving selection and transformation of graphical primitives, which are put together, transformed and evaluated as mental images.

The above experiments concentrated on visual mental images, but since a departure point was that mental images can contain different kinds of perceptual-like properties, similar mental processes, including transformation and synthesis of new configurations and simulations of simple physical processes, are most likely equally possible with sonic and musical structures.

Mental images represent perceptual configurations. But how do we process symbols, relations, images, etc. in our mind? Understanding this is crucial for understanding programming and working with algorithms. One theoretical construct for how we do this is the idea of a mental model. The term was first used by Craik (1943) and its current meaning was established by Johnson-Laird (1983) and Gentner & Stevens (1983).

While a mental image primarily consists of a configuration of perceptual entities, a mental model is a more elaborate, dynamic, and active construction used to make predictions or generate expectations. A mental model can be thought of as a simplified mental representation of a problem space, and is an analog to the real-world representation of a problem – it has the same functional and structural nature as the system it models. It can contain propositional knowledge, relations and sets of possible operations. Mental models are important in, e.g., creative exploration, predicting results of hypothetical solutions, examining recombinations of elements, and in considering extremes of different kinds of situations. One important property of mental models is that they provide a way to represent interactions between components from different knowledge domains. They can be tested and modified, and hence form an essential tool for learning. Through what is known as analogical reasoning (Gentner 1989), we can translate the models of computational processes (algorithms) to sound and musical events. And since imagination and creativity are structured activities influenced by existing knowledge frameworks (Finke, Ward, and Smith 1992, 114-115), learned mental models, for example in the form of internalization of new tools or theoretical concepts in the field of algorithmic music, influence how we think about new algorithms, and about new music (Dahlstedt 2012). Hence, they have a profound aesthetic influence on the music we compose. Such mechanisms are at work in our minds to make it possible to work with algorithmic music.

Situated and embodied cognition Mental models as described above do not contain contextual information. This has been criticized, and already Laske (1975) recognized that musical knowledge is dependent on social context to be meaningful. To remedy this, many situated approaches to cognition have been developed. One idea is that processes that involve tools and environment are not located in our mind but dispersed between the different participating interacting entities.

For example, Greeno (1989) describes how new knowledge is generated from mental models extended with situated information. A primary problem is the insulation of symbolic knowledge. In many problem domains, symbolic notations are used to represent real objects and structural relationships, for example in math and physics. Students who do not use semantic information in their mental models risk missing important relationships that might be necessary for solving the problems. The models are less complete, and the generated knowledge is less likely to be applicable to real situations, since the students are unable to infer structural relationships in the real world from them. Since music in general, and algorithmic music in particular, also uses symbolic notations of different kinds, these results seem highly relevant, e.g., when translating algorithmically generated structures to musical structures, or applying algorithms to solve real-world musical problems. If the composer does not know the physical behavior of sonic material and how our aural perception works, she risks generating musical structures that do not work as intended.

A more recent (and radical) take on embodied cognition is that external objects in the immediate environment not only are part of cognitive processes – they form an extended part of our mind (Clark and Chalmers 1998, Chemero 2011). For electronic music composers, highly dependent on external tools, this is not as strange as it may initially seem. Without the tools, or with other tools, our cognitive processes would be completely different.

Action and perception Today, the focus is on cognitive models that connect perceptual information with actions, and a number of researchers do not want to distinguish between perception and action. They argue that the two are inseparable and are represented together in the brain. Some also argue that we actively perceive through action. We also perceive our actions and their results, in a continuous feedback loop.

One essential discovery that triggered this synthesis was the discovery of mirror neurons (Rizzolatti et al. 1996). These are neural circuits, forming a part of the motor neural system, which are active in similar ways both when we perform an action and when we observe somebody else performing the same action. Their purpose is probably to enable us to empathetically perceive the intentions of others, and enable prediction of their movement (Keysers 2011). From an evolutionary standpoint, these were necessary abilities for survival in the wild, for hunting and to avoid predators. The system also enables us to learn motor patterns by imitation.

It should be noted that the existence of specific mirror neurons in humans is hard to demonstrate directly, due to the kinds of experiments needed, but there is indirect evidence of mirror functionality in the form of activity in certain brain regions being triggered equally by action and by observation of action. See Turella et al. (2009) for a review of the research in the area, and, e.g., Hari et al. (2014) for some recent results.

This discovery resonates well with the common coding theory of action and perception (Van Der Wel, Sebanz, and Knoblich 2013). According to this, actions and perceptions are stored together in the brain in a kind of common neural “code”. This is a functional theory, which does not concern the neural implementation of the described phenomena. There are a number of interesting results that are relevant for the study of music-making in general, and of interactive algorithmic music in particular, where a musician has to learn complicated reaction patterns. There are also implications for the notion of agency, which is a controversial concept in algorithmic music – who is playing that sound?

The main properties of action-perception coupling (according to Van Der Wel et al.) are: Mirror neurons only fire when observing goal-directed actions, and action observation induces action simulation. Predictive or internal models of one's own actions can be used to simulate someone else's actions. Action simulation is better with actions one is able to perform, but impossible with unknown actions. It is possible to improve motor expertise both with visual training and motor training, which is a strong indication that they share an underlying representation. One common property shared by both action and perception is timing information, to which we are highly sensitive. Perception of movement follows human motor system constraints, so we are conditioned by our bodies also when observing others (or possibly machines).

This focus on synchrony resonates well with the composer Paulo Chagas's descriptions of embodiment in ensemble playing:

"Musical embodiment is a temporal experience that requires the synchronization of temporal objects and events. In traditional musical practices, such as the Yoruba drum ensembles or Western symphony orchestras, the presence of performers and listeners who physically share the same time and space provides the framework for the synchronization. This mode of embodiment creates the unique ‘aura’ of the work of music, which, according to Benjamin (1977), has been eliminated by mechanical reproduction." (Chagas 2006)

Embodiment relies on more than synchrony, and synchrony should probably be regarded as a necessary rather than a sufficient condition. Chagas connects his focus on synchrony to John Cage's ideas that digital music should re-establish temporality by coupling to other media, and he seems to doubt the ability of electronic music to convey a sense of simultaneity and affect, saying that “electronic and digital signals are opaque” and that they “break the transparency of musical flow”. This rather dystopian view may apply to some old-fashioned ways of making and performing computer music, but as we have seen, there certainly are approaches that are able to achieve both synchronicity and flow. The technology is certainly not stopping you anymore.

According to a study with a more neurological perspective (Pulvermüller et al. 2014), the mirror neuron system is not enough to explain the strong coupling between action and perception. Instead, the idea of Action Perception Circuits (APC) is proposed, or “learned mirror circuits”: interlinked neuronal action-perception representations, mapping learned motor patterns to perceived actions (e.g., seen or heard). APCs offer mechanisms for repetition and simulation, and when an APC is triggered by the perception of someone else performing a similar or related action, there is a kind of resonance (Zwaan and Taylor 2006), which is particularly strong when empathy and the sharing of feelings are involved (Gallese, Keysers, and Rizzolatti 2004). When action repertoires are similar between individuals, perhaps due to intense shared practice, they connect similar motor and perceptual schemas. These neuronal congruencies allow for very refined shared interactions, for example in ensemble music performance (Bangert et al. 2006, Zatorre, Chen, and Penhune 2007).

Agency and the listener The concept of agency (who caused a particular action or event) is regarded as quite important in music. As an improviser, I respond to a particular musician's actions, not just to any sound. In electronic music, where the physical source of the sound is often an anonymous loudspeaker, this is crucial, and Pierre Schaeffer (1966), Michel Chion (1983) and Denis Smalley (1992) have written extensively about acousmatic or reduced listening, where a listener tries to focus on the abstract sound object and not on its original cause or source. This is not easy.

Following Van der Wel et al. (2013), one proposed theory for the experience of agency (of one's own actions) is based on perceived mental causal relationships (Wegner 2002), which may lead to an illusory perception of agency under some conditions. Based on a series of studies, Van der Wel et al. instead argue that shared sensorimotor cues can be used to determine agency, with some added benefits. When you perform an action, the motor commands match the sensory feedback from the movement. Here, too, synchrony of time cues is crucial. Furthermore, when performing an action, we predict the sensory consequences of the action through internal models (Kawato and Wolpert 1998). Finally, an advantage of this theory is that it can also be used to explain detection of agency in others' actions, thanks to the common representation of action and perception based on the mirror neuron system.

Perception of agency is also important for embodied experience of music, and since the mirror neuron system also reacts to the sounds of actions (Keysers 2011), one might speculate that, when sensory information correlates with (perceived) motor information, pseudo-agency can be experienced from pseudo-physical sonic gestures in music. This may be one explanation for the strong urge to dance to music: the dancer appropriates the music through an acquired sense of agency from synchronized movement, and thus achieves a heightened sense of musical embodiment.

The altered role of the listener in electronic music, without visible cause or source, has already been mentioned above. A similar question is the perception of agency of algorithmically generated musical structures, and how much of the underlying algorithm the audience should be able to decipher. According to Collins (2003), this question of the acousmatic sound source has been transplanted to laptop music without a solution. This is a step (or rather no step) in the wrong direction, since acousmatic compositions are often human-composed collages or sequences of sounds, while laptop music relies to a high degree on algorithms and processes, the inner workings of which are harder to grasp for the unknowing ear.

This is important in relation to the various mental models discussed in the previous sections. If the listener is supposed to form a mental model of the underlying computational processes, some information in terms of program notes or explanation may be needed, and possibly also previous knowledge. If perception based on inherent physical models of movement is desired, it is possible to aid the listener by using sonic gestural content that mimics such gestures; visual cues and displays can also help here, as long as they are causally connected to, or in synchrony with, the underlying algorithms.

Collins compares an ordinary laptop performer to somebody who is checking her email, and McLean (cited in Collins 2003) has proposed, from a live coding viewpoint, to project the screens of the performers, both to reveal those who do not do much (and maybe actually check their email), and to show the tools and actions used, which can be perceived as beautiful even if the exact meaning of the actions is not known by the listener. After all, a non-musician can empathetically appreciate the actions and efforts performed by a musician, and connect them to audible results, thanks to the mirror neuron system discussed previously, and thanks to synchronicity between visual and aural perceptual cues, or zero-order transcoding from physical gesture to sound.

However, this gestural synchronicity is what differentiates gesture from algorithm. An algorithm potentially introduces layers of processing, which obscure synchronicity and causality. This dichotomy between gesture and algorithm is related to Boden and Edmonds’s (2009) distinction between interaction, actions with direct effect on the output, and influence, which only affects the internal states of the generative system, with potential long-term effects on the output. Judging from my own works, these are not always so well-defined, since mappings from input to output can be of arbitrary complexity and consist of a combination of real-time interaction and long-term influence. Also, the mapping itself can change over time. The emphasis on effort has been defended by Joel Ryan (1992) and myself (Dahlstedt 2009), among others, as a way to aid listeners who do not know the instruments and algorithms, but also to preserve the physical performative role of the musician, which is rewarding in itself (for the musician).
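To make the interaction/influence distinction more concrete, here is a schematic sketch, under my own reading of Boden and Edmonds's terms and with all musical details invented: interaction is heard directly in the output, while influence only changes the internal state of a generative process, surfacing later.

```python
# Schematic contrast between interaction (direct effect on output) and
# influence (only the internal state of the generative process changes).

import random

class Generator:
    def __init__(self):
        self.density = 0.5          # internal state: how busy the output is

    def influence(self, amount):
        """Influence: no immediate sound, only a changed disposition."""
        self.density = min(1.0, max(0.0, self.density + amount))

    def interact(self, pitch):
        """Interaction: the performer's input is heard directly in the output."""
        return [pitch] + self.step()

    def step(self):
        """One tick of the autonomous process, shaped by the internal state."""
        return [random.randint(48, 72)] if random.random() < self.density else []

if __name__ == "__main__":
    g = Generator()
    g.influence(+0.3)               # nothing audible happens yet
    print([g.step() for _ in range(4)], g.interact(60))
```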

There are also those who think that a visual performance component is unnecessary, and that modern listeners need to get used to the idea of focusing on sound rather than action (Dean 2003). A similar viewpoint is held by Stuart (2003), who introduced the term aural performativity as a substitute for the “visual spectacle and physical gesture of musical performance”. Dean's and Stuart's stance is not a disregard of traditional musical performance practices, but rather a celebration of the new disembodied and aphysical possibilities that algorithmic music provides, which is further elaborated by d'Escriván (2006) around the idea of the effortlessness of computer music. But it does put an increased burden on listeners to re-educate themselves, or maybe this will solve itself as new generations grow up, perfectly comfortable with the new paradigm.

A stance somewhere in between the advocates of effort-based performance and those who want us to adopt a new paradigm of electronic and algorithmic immateriality was held by David Wessel. His later experimental instruments (Wessel et al. 2007) concentrated on “intimacy” (Wessel, Wright, and Schott 2002) with the sound, in which minimal finger gestures and careful but rather direct mappings provided exactly this kind of intimate sonic interaction, producing rich sonic landscapes from minimal movement. And it was free from the connection to effort. Judging from performances with Wessel and my own duo pantoMorf, his instrument lent itself well to the painting of broad generative landscapes as well as to intimate control of micro-gestures, as contrasted with the bodily, physical and effort-based instruments that we performed on (Dahlstedt 2009). It was also clear that the different capabilities and design approaches of our instruments made us assume different musical roles. Wessel controlled a virtual ensemble of detailed sound structures, each with only a few degrees of freedom, while we performed as solo musicians, controlling many parameters of a single sonic structure.

Algorithm as augmented mind Formal methods, i.e., algorithms, have been used by artists since Antiquity (Dahlstedt 2004). One purpose for which they have been used is to go beyond the known, and to help generate something that cannot be directly envisioned, given our limited predictive capacity. Our mental models are not perfect, and it is easy for a composer to devise a few lines of code without being able to predict exactly what comes out. Even simple algorithms stretch beyond our cognitive resources, either because of lack of precision, lack of speed, or limited working memory. So, the only way to predict the result is to execute the algorithm.
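A concrete (invented) example of this: the following few lines define a complete generative rule, a logistic map folded onto pitches, yet the note sequence it produces is hard to anticipate mentally more than a step or two ahead. The only practical way to find out what it sounds like is to run it.

```python
# A trivially short generative rule whose output exceeds mental simulation:
# one line of arithmetic per note, but the sequence is effectively unpredictable.

def logistic_melody(x=0.21, r=3.97, length=16):
    pitches = []
    for _ in range(length):
        x = r * x * (1.0 - x)               # one line of "simple" arithmetic
        pitches.append(48 + int(x * 24))    # fold the value into two octaves
    return pitches

if __name__ == "__main__":
    print(logistic_melody())
```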

This can be further elaborated through a spatial model of the creative process (Dahlstedt 2012). If we regard a theoretical space of all possible results in the chosen medium, any tool defines pathways through this space, connecting points that differ by just one application of the tool. At any point the tool can be substituted for another, and the trajectory will follow other pathways, defined by this new tool. Hence, learning a new tool allows the artist to reach new parts of the space, or to navigate with greater precision. Forming a mental model of the tool through study and training helps in predicting results and choosing which path to take, but when using complex algorithms as tools, the steps are large and our predictive capacity is not sufficient. Hence, the algorithmic tool can be used to transcend the limits of the known part of this space.

Because of this, the material generated from an algorithm can feel alien at first, but if the composer spends enough time to assimilate the material and form mental models of the inner logic of the generated music, the new material can gradually become part of the composer’s way of thinking, and she might be able to compose in the style of the algorithm, by ear (Dahlstedt 2001b). Then, the mind has not only become temporarily augmented, but acquired new permanent abilities thanks to the use of algorithms.

The invention of writing fundamentally changed human culture, becoming an augmentation of the human mind, aiding it with memory and knowledge. Written notation, in the same way, fundamentally changed the way composers can work. Since notation allows the composer to forget without loss, she can work in a non-linear fashion, concentrating on one small part of the work at a time. We can think of algorithms as having the same effect, but with regard to process. In a way similar to the previously mentioned automated mental processes (Shiffrin and Schneider 1977), we can start an algorithm and let go of it, like spawning a new thread in a computer program. The algorithm will keep going, and we can afford to forget, instead focusing on the next thread of the process. This way of working is especially evident in the genre of live coding, where the computer becomes a realtime extension of the performer's brain, allowing for more complex and exact processes than she can perform with her brain and body.

Thus, algorithms can, in several different ways, help create an augmented mind.

From a situated cognition perspective, algorithms, when considered as tools, can also be considered carriers of intelligent behavior (Gregory 1981), or in Vygotsky's terms, mediators of culture (Vygotsky 1978). We are shaped by the tools given to us by our culture, and we contribute by developing new tools, to be used by forthcoming generations.

Similar views are expressed by Brown (2006) and Reybrouck (2006). They view music as embodied action, and argue that when performing with algorithms, the performer is part of a cybernetic control system consisting of a feedback loop of perception, cognition and action. New instruments not only augment our abilities to produce sound, but also our abilities to perceive it, since the body can be regarded as both subject and object, and perception is an action following patterns shaped by the available tools. Following this reasoning, we should continue to develop more advanced tools, tools for sense-making and musical knowledge acquisition.

Since today's young grow up with a rich diversity of music apps in new formats, from programmable generative systems to gestural instruments, they are likely to form different mental models of compositional tools and of the process of music-making. No tool is free of stylistic implications, and they navigate the space of the possible along paths defined by the tools they know. If cognition is embodied and situated, and the tools form part of their extended mind, they will create very different algorithmic music.

Performing on, with and in algorithms Algorithms can be used in many different ways, and it is hard to generalize, since an algorithm can be almost any formally described process. Apart from Masahiro Miwa’s algorithmic performances, where the algorithm is literally performed by people, as described earlier, there are three fundamentally different configurations in a live setting. You can play on, with or in an algorithm. I will describe the first two, and then elaborate a little more on the latter, since it constitutes a special situation with regard to action, perception and embodiment, and it has been an important part of my own work.

If you, through some interface, interact with an algorithm, or control its parameters in some way, one can say that you perform on the algorithm. This is very similar to performing on the aforementioned hyper-instruments, or augmented instruments, and depending on the complexity of the underlying algorithm, it may include autonomous processes that evoke the feeling of extra musicians. In essence, you perform with these parts of the algorithm. They are not affected by your playing, but you have to learn their inner logic, i.e., form a mental model of them, or develop skills in reacting to them, i.e., action-perception patterns that embody the desired (or necessary) interaction. To summarize, there is a gliding scale from on to with, depending on the level and nature of interaction between human and algorithm.

In a more tight-knit situation, the musician is part of a real-time algorithmic system, where every interaction has audible results and affects other parts of the system, algorithmic or human. A musician here enters into such a close synergetic relationship with an algorithm (in a performance setting or in a studio situation) that it is perceived as something you step into and become a part of, just as you do in a free improvisation setting. If performing on an algorithm involves sufficiently strong feedback connections between algorithm and human, in both directions, you become a part of a system, and essentially perform in the algorithm. Or rather, you become part of a larger composite algorithmic system that involves the computer algorithm, yourself (and possibly other musicians), your internal cognitive algorithms in the form of musical preferences and behavior patterns, and the links between all these.

Such realtime algorithms do not have a beginning or an end, and are better regarded as systems you step into and form a part of, as one node in a web of interactions. For lack of a better term, I call this systemic improvisation, and designing such systems and performing with them has formed a major part of my artistic output over the last eight years. I will give a few examples of such works.

In Dynamic Triads (2008), three musicians are interconnected in a triangle on stage (Fig. 2). If player A plays a sound, it has three different effects: it is heard in the room, it is stored in a small circular buffer (to be used by player C), and it brings out sounds from the buffer of player B. What happens is that as soon as you play something, it is merged with a sound from what your neighbor recently played, and since the dynamic contour of this sound is completely matched to your own sound, they also merge perceptually, and appear as one. You play your neighbor, but you also provide sound to your other neighbor at the same time. It takes some time to get used to this unusual computer-mediated interaction model, usually a couple of days of practice. But musicians enjoy playing this system, and it has been performed in many countries with vastly different sets of musicians.

Fig. 2. The connections in Dynamic Triads (2008). Each musician’s playing is heard in the room, feeds into a circular buffer, and simultaneously pulls out sound from the neighbor's buffer.

In Brush Stroke Conversation (2011), two to four players are connected to a central computer (Fig. 3). They receive aural or visual cues, either through individual headphones or on screens. A cue gives them a temporal intensity contour, which they are supposed to interpret into a phrase. With two players, the contour shown to player A is generated from the previous phrase of player B, and the phrase then played by A generates a new cue contour for player B.

Fig. 3. A screenshot from Brush Stroke Conversation (2011). Each musician follows one of the graphs and interprets the brush strokes as musical phrases, while new brush strokes are generated from his playing.

The piece is sometimes performed with aural cues instead, in the form of time-compressed filtered noise gestures. In these works, there is no initial material, no prescribed timeline, only a characteristic system with a given interaction model. You enter the algorithm, and you have to survive in there, by learning new interaction patterns. The result is often described as “I have never heard these musicians play in this way”, which was part of my goal.
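The routing of the first of these systems can be sketched schematically as follows. This is a simplification under assumptions: audio is reduced to named sound tokens and a single amplitude value, where the real piece works on audio buffers and matches full dynamic contours.

```python
# Schematic of the Dynamic Triads routing: each player's sound is heard, stored
# for one neighbour, and pulls contour-matched sound out of the other neighbour.

from collections import deque

class Player:
    def __init__(self, name, buffer_len=8):
        self.name = name
        self.buffer = deque(maxlen=buffer_len)    # a short circular memory

    def play(self, sound, amplitude, pulls_from):
        """Playing does three things: it is heard, it is stored for one
        neighbour, and it pulls matched sound out of the other neighbour."""
        self.buffer.append(sound)                              # stored for the next neighbour
        borrowed = pulls_from.buffer[-1] if pulls_from.buffer else None
        heard = [(sound, amplitude)]
        if borrowed is not None:
            heard.append((borrowed, amplitude))                # contour-matched merge
        return heard

if __name__ == "__main__":
    a, b, c = Player("A"), Player("B"), Player("C")
    b.play("low drone", 0.4, pulls_from=c)
    print(a.play("short pluck", 0.9, pulls_from=b))
```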

A key feature is that the musicians are part of the algorithm. If they do not play, nothing happens. If they misinterpret a cue or an instruction, that is what counts. The actual sound is what goes into the system. And humans fail, often in characteristic ways, so these works rely on what I call the meaningful mistake.

This is especially clear in the work La Chasse Évitante (2008) for two musicians (Fig. 4), where each tries to follow (by ear) a series of cue notes played through a headphone (inaudible to the audience). They probably fail, because the cue notes move fast. The cue notes for player A are calculated from what player B plays, by a simple arithmetic formula (which can be substituted), and vice versa. The two players form essential links in a figure-eight feedback circuit. Their characteristic (in)ability to accurately follow the cue notes creates the musical content of the piece.
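The figure-eight feedback loop can be sketched as follows. The arithmetic formula and the modelling of imperfect following are placeholders of my own, since the piece leaves the formula substitutable and the "failures" come from real ears and hands.

```python
# Sketch of the figure-eight feedback in La Chasse Evitante: each player's cue
# is derived from the other player's last note, and followed imperfectly.

import random

def cue_from(pitch):
    """Placeholder formula: derive the next cue from the other player's note."""
    return 36 + (pitch * 7 + 5) % 36

def follow(cue):
    """A player tries to follow the cue by ear, and characteristically misses."""
    return cue + random.choice([-2, -1, 0, 0, 1, 2])

if __name__ == "__main__":
    played_a, played_b = 60, 60
    for _ in range(8):
        played_a = follow(cue_from(played_b))   # A's cue is computed from B
        played_b = follow(cue_from(played_a))   # and B's cue from A
        print(played_a, played_b)
```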

Fig. 4. A schematic of the interconnections in La Chasse Évitante (2008). Red lines indicate connections to/from the processor, while yellow lines indicate acoustic sound.

Performing in such systems is like being part of an ensemble of musicians whose reaction patterns you do not fully know. Because of the complexity of the situation, and the high degree of interconnectedness in the system, it is difficult to consciously calculate an appropriate musical response – the situation is unpredictable in the mathematical sense. Instead you have to practice with the system and form a (however incomplete) mental model of its behavior. Alternatively, through practice you acquire a set of action-perception complexes characterizing the situation, allowing you to enter a flow-like state where you "become" part of the music, of the system. In this state, reaction time is minimal and you are maximally attentive, which facilitates performance. One cognitive theory that fits playing with such a system is Rosen's theory of anticipatory systems (Rosen 1985), in which you do not react to what has happened, but to what you think will happen in the near future. The guess may be wrong, but this does not matter – the short-term predictions are still used to guide action at every instant. Even mistakes become meaningful. This is essentially a feedback system in which the feedback emanates from a predictive mental model of the system you interact with, forming a hybrid feedback-feedforward system. This kind of work also relates closely to Roy Ascott's view of artworks as networks or systems (Ascott and Shanken 2003).
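Rosen's idea can be caricatured in a few lines of code: a minimal sketch, under my own simplifying assumptions, of a hybrid feedback-feedforward loop in which the performer acts on a prediction of the system's next output rather than on its current one, and uses the prediction error to revise the internal model. The names and the simple exponential update are illustrative, not a claim about how any actual performer or system works.

```python
class AnticipatoryPerformer:
    """Toy model of a performer acting on predictions (feedforward)
    and revising an internal model from prediction errors (feedback)."""

    def __init__(self, learning_rate=0.2):
        self.predicted = 0.0             # crude internal model of the system's next output
        self.learning_rate = learning_rate

    def act(self):
        # Feedforward: respond to what we *think* the system will do next.
        return -0.5 * self.predicted     # placeholder musical "response"

    def observe(self, actual_output):
        # Feedback: nudge the internal model towards what actually happened.
        self.predicted += self.learning_rate * (actual_output - self.predicted)

# One cycle of the loop (the surrounding system is left abstract here):
# performer = AnticipatoryPerformer()
# action = performer.act()             # play, based on the predicted near future
# output = algorithmic_system(action)  # the system's audible reply (not defined here)
# performer.observe(output)            # the mistake, if any, updates the model
```

Even when the prediction is wrong, the loop keeps running; the error itself becomes material, which is one way to read the meaningful mistake described above.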

Music as intellectual effort
As a final perspective on the idea of perceived effort in musical performance, we will consider the perception of intellectual effort in a performance or composition. A traditional composer may try to create more and more intricate structures, with maximal mental effort pushing the limits of her own mind, and of what can be played. But such intricacies gain their meaning in relation to the musician's strife and effort to realize them, and partly in relation to what the composer is able to compose. Yet since composition is a non-linear process, there is no upper bound for the complexities that can be composed, or for the density of the result. The intellect, when decoupled from time and space, can prescribe anything, and without the physicality of the body, it has no meaning in terms of difficulty or effort, except possibly under the constraint of limited time.

This is completely different in the act of improvisation. Here, the mind acts in real time, and the intellectual effort is directly related to the limits of the human mind, because of the constraint of linear time. The human body and mind define the time scale and the scope of gestures, phrases and structures; they are the measure of everything in the music. The music gets its meaning in relation to these physical and mental constraints. If the constraints did not exist, the effort would be meaningless, since complexity, too, is measured in relation to what a human can perform. A machine, in contrast, can generate and perform almost anything, at any speed.

So, what happens when we listen to an algorithmic composition or performance? We may try to experience it physically, through the body, with action-perception circuits triggered by sonic hints of action. We may also reflect intellectually, trying consciously or unconsciously to form mental models of the perceived processes. Maybe the combination of these two, based on the mechanisms described earlier in this chapter, is what it means to experience the music aesthetically.

But we can also experience the music as human-to-human communication. Algorithms are composed, performed and perceived. Maybe the mirror neurons also react to mediated effort – the perceived intellectual effort inherent in complex musical structures or behind elaborate studio work. We empathetically appreciate the time and skill it must have taken to realize the piece, here again under the constraint of time. Mental models and cognitive constraints can form a basis for such a link: when forming a mental model is difficult, the observed system is perceived as complex, i.e., it represents a considerable effort.

Aesthetically, this can be thought of as a cognitive and perceptual tension between human and computer – between what our mind and body can create with their limited resources, and the computational sublime (McCormack and Dorin 2001), the algorithmically generated emergence that we cannot fathom. In algorithmic music, and in algorithmic performance in particular, both these perspectives have a place. A place where mind, body and machine make music together.

References
Ascott, R., and E. A. Shanken. 2003. Telematic embrace: visionary theories of art, technology, and consciousness. Berkeley and Los Angeles, CA: University of California Press.
Bailes, F. 2007. "The prevalence and nature of imagined music in the everyday lives of music students." Psychology of Music 35 (4):555-570.
Bangert, M., T. Peschel, G. Schlaug, M. Rotte, D. Drescher, H. Hinrichs, H.-J. Heinze, and E. Altenmüller. 2006. "Shared networks for auditory and motor processing in professional pianists: evidence from fMRI conjunction." Neuroimage 30 (3):917-926.
Bartlett, F. C. 1932. Remembering: A study in experimental and social psychology. Cambridge, UK: Cambridge University Press.
Berry, D. M. 2008. "A contribution towards a grammar of code." The Fibreculture Journal 13.
Boden, M. A., and E. A. Edmonds. 2009. "What is generative art?" Digital Creativity 20 (1-2):21-46.
Brown, N. 2006. "The flux between sounding and sound: Towards a relational understanding of music as embodied action." Contemporary Music Review 25 (1-2):37-46.
Chagas, P. C. 2006. "The blindness paradigm: The visibility and invisibility of the body." Contemporary Music Review 25 (1-2):119-130.
Chemero, A. 2011. Radical embodied cognitive science. Cambridge, MA: MIT Press.
Chion, M. 1983. Guide des objets sonores. Paris, France: Buchet/Chastel/INA-GRM.
Clark, A., and D. Chalmers. 1998. "The extended mind." Analysis 58 (1):7-19.
Collins, N. 2003. "Generative music and laptop performance." Contemporary Music Review 22 (4):67-79.
Craik, K. 1943. The nature of explanation. Cambridge, UK: Cambridge University Press.
Cramer, F. 2005. Words Made Flesh: Code, Culture, Imagination. Rotterdam: Piet Zwart Institute.
d'Escriván, J. 2006. "To sing the body electric: Instruments and effort in the performance of electronic music." Contemporary Music Review 25 (1):183-191.
Dahlstedt, P. 2001a. "Creating and Exploring Huge Parameter Spaces: Interactive Evolution as a Tool for Sound Generation." Proceedings of the 2001 International Computer Music Conference, Habana, Cuba.
Dahlstedt, P. 2001b. "A MutaSynth in Parameter Space: Interactive Composition Through Evolution." Organised Sound 6 (2):121-124.
Dahlstedt, P. 2004. "Sounds Unheard of: Evolutionary algorithms as creative tools for the contemporary composer." Doctoral thesis, Chalmers University of Technology.
Dahlstedt, P. 2007. "Evolution in Creative Sound Design." In Evolutionary Computer Music, edited by E. R. Miranda and J. A. Biles, 79-99. London, UK: Springer-Verlag.
Dahlstedt, P. 2009. "Dynamic Mapping Strategies for Expressive Synthesis Performance and Improvisation." LNCS 5493:227-242.
Dahlstedt, P. 2012. "Between Material and Ideas: A Process-Based Spatial Model of Artistic Creativity." In Computers and Creativity, edited by J. McCormack and M. d'Inverno, 205-233. Heidelberg/New York/Dordrecht/London: Springer.
Dahlstedt, P. 2014. "Circle Squared and Circle Keys - Performing on and with an Unstable Live Algorithm for the Disklavier." Proceedings of the International Conference on New Interfaces for Musical Expression, London, United Kingdom.
Dahlstedt, P. 2015. "Mapping Strategies and Sound Engine Design for an Augmented Hybrid Piano." Proceedings of the International Conference on New Interfaces for Musical Expression, Baton Rouge, LA.
Dahlstedt, P., and P. A. Nilsson. 2008. "Free Flight in Parameter Space: A Dynamic Mapping Strategy for Expressive Free Impro." LNCS 4974:479-484.
Dean, R. 2003. Hyperimprovisation: computer-interactive sound improvisation. Middleton, WI: AR Editions, Inc.
Finke, R. A., T. B. Ward, and S. M. Smith. 1992. Creative cognition: Theory, research, and applications. Cambridge, MA: MIT Press.
Fuller, M. 2008. Software studies: A lexicon. Cambridge, MA: MIT Press.
Gallese, V., C. Keysers, and G. Rizzolatti. 2004. "A unifying view of the basis of social cognition." Trends in Cognitive Sciences 8 (9):396-403.
Gentner, D. 1989. "The mechanisms of analogical learning." In Similarity and analogical reasoning, edited by S. Vosniadou and A. Ortony, 199-241. New York, NY: Cambridge University Press.
Gentner, D., and A. L. Stevens. 1983. Mental models. Hillsdale, NJ: Erlbaum.
Goffey, A. 2008. "Algorithm." In Software Studies: A Lexicon, edited by M. Fuller. Cambridge, MA: MIT Press.
Greeno, J. G. 1989. "Situations, mental models, and generative knowledge." In Complex Information Processing: The Impact of Herbert A. Simon, edited by D. Klahr and K. Kotovsky, 285-318. Hillsdale, NJ: Erlbaum.
Gregory, R. L. 1981. Mind in Science. London, UK: Weidenfeld and Nicolson.
Hari, R., M. Bourguignon, H. Piitulainen, E. Smeds, X. De Tiege, and V. Jousmäki. 2014. "Human primary motor cortex is both activated and stabilized during observation of other person's phasic motor actions." Philosophical Transactions of the Royal Society of London B: Biological Sciences 369.
Hiller Jr, L. A., and L. M. Isaacson. 1958. "Musical composition with a high speed digital computer." Journal of the Audio Engineering Society 6 (3):154-160.
Johnson-Laird, P. N. 1983. Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.
Kawato, M., and D. Wolpert. 1998. "Internal models for motor control." In Sensory Guidance of Movement, 291-307. Novartis.
Keysers, C. 2011. The empathic brain: How the discovery of mirror neurons changes our understanding of human nature. Social Brain Press.
Kitchin, R., and M. Dodge. 2011. Code/space: Software and everyday life. Cambridge, MA: MIT Press.
Kosslyn, S. M., B. J. Reiser, M. J. Farah, and S. L. Fliegel. 1983. "Generating visual images: units and relations." Journal of Experimental Psychology: General 112 (2):278-303.
Laske, O. E. 1975. "Toward a theory of musical cognition." Journal of New Music Research 4 (2):147-208.
Laske, O. E. 1988. "Introduction to cognitive musicology." Computer Music Journal 12 (1):43-57.
Machover, T. 1992. Hyperinstruments: A Progress Report, 1987-1991. Cambridge, MA: MIT Media Laboratory.
McCormack, J., and A. Dorin. 2001. "Art, emergence and the computational sublime." Proceedings of Second Iteration: A Conference on Generative Systems in the Electronic Arts. Melbourne: CEMA.
Ormerod, T. 1990. "Human cognition and programming." In Psychology of Programming, edited by J.-M. Hoc, T. R. G. Green, R. Samurçay and D. J. Gilmore, 63-82. London, UK: Academic Press.
Ormerod, T. C., K. I. Manktelow, E. H. Robson, and A. P. Steward. 1986. "Content and representation effects with reasoning tasks in PROLOG form." Behaviour & Information Technology 5 (2):157-168.
Paivio, A. 1971. Imagery and verbal processes. New York, NY: Holt, Rinehart and Winston.
Pressing, J. 1988. "Improvisation: methods and models." In Generative processes in music, edited by J. A. Sloboda, 129-178. Oxford, UK: Clarendon Press/Oxford University Press.
Pulvermüller, F., R. L. Moseley, N. Egorova, Z. Shebani, and V. Boulenger. 2014. "Motor cognition–motor semantics: Action perception theory of cognition and communication." Neuropsychologia 55:71-84.
Pylyshyn, Z. W. 1984. Computation and cognition. Cambridge, UK: Cambridge University Press.
Reybrouck, M. 2006. "Music cognition and the bodily approach: Musical instruments as tools for musical semantics." Contemporary Music Review 25 (1-2):59-68.
Rizzolatti, G., L. Fadiga, V. Gallese, and L. Fogassi. 1996. "Premotor cortex and the recognition of motor actions." Cognitive Brain Research 3 (2):131-141.
Rosen, R. 1985. Anticipatory Systems: Philosophical, Mathematical and Methodological Foundations. Oxford, UK: Pergamon Press.
Rosen, R. 1988. "Effective processes and natural law." In The Universal Turing Machine: A Half-Century Survey, edited by R. Herken, 485-498. Oxford, UK: Oxford University Press.
Ryan, J. 1992. "Effort and Expression." Proceedings of the 1992 International Computer Music Conference, San Jose, California, USA.
Schaeffer, P. 1966. Traité des objets musicaux. Paris, France: Seuil.
Schreibman, S., R. Siemens, and J. Unsworth. 2008. A Companion to Digital Humanities. John Wiley & Sons.
Searle, J. R. 1980. "Minds, brains, and programs." Behavioral and Brain Sciences 3 (3):417-424.
Shiffrin, R. M., and W. Schneider. 1977. "Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory." Psychological Review 84 (2):127-190.
Smalley, D. 1992. "The listening imagination: listening in the electroacoustic era." In Companion to Contemporary Musical Thought, Vol. 1, edited by J. Paynter, T. Howell, R. Orton and P. Seymour, 514-554. London, UK: Routledge.
Strachey, C. 1954. "The “thinking” machine." Encounter 3 (4):25-31.
Stuart, C. 2003. "The object of performance: Aural performativity in contemporary laptop music." Contemporary Music Review 22 (4):59-65.
Turella, L., A. C. Pierno, F. Tubaldi, and U. Castiello. 2009. "Mirror neurons in humans: consisting or confounding evidence?" Brain and Language 108 (1):10-21.
Van der Wel, R. P. R. D., N. Sebanz, and G. Knoblich. 2013. "Action perception from a common coding perspective." In People watching: Social, perceptual, and neurophysiological studies of body perception, edited by K. Johnson and M. Shiffrar, 101-120. Oxford, UK: Oxford University Press.
Wegner, D. M. 2002. The illusion of conscious will. Cambridge, MA: MIT Press.
Wessel, D., R. Avizienis, A. Freed, and M. Wright. 2007. "A Force Sensitive Multi-touch Array Supporting Multiple 2-D Control Structures." Proceedings of the International Conference on New Interfaces for Musical Expression, New York, NY, USA.
Wessel, D., M. Wright, and J. Schott. 2002. "Intimate Musical Control of Computers with a Variety of Controllers and Gesture Mapping Metaphors." Proceedings of the International Conference on New Interfaces for Musical Expression, Dublin, Ireland.
Vygotsky, L. S. 1978. Mind in Society. Cambridge, MA: Harvard University Press.
Zatorre, R. J., J. L. Chen, and V. B. Penhune. 2007. "When the brain plays music: auditory–motor interactions in music perception and production." Nature Reviews Neuroscience 8 (7):547-558.
Zwaan, R. A., and L. J. Taylor. 2006. "Seeing, acting, understanding: motor resonance in language comprehension." Journal of Experimental Psychology: General 135 (1).