
INVESTIGATING THE POSSIBILITY OF BIAS AGAINST AI-COMPUTER-COMPOSED MUSIC

Bachelor Degree Project in Media Arts, Aesthetics and Narration

30 ECTS Spring term 2020

Anderson Lima & Andreas Blixt
Supervisor: Markus Berntsson
Examiner: Lars Bröndum

Abstract

This study explores how respondents perceive human-composed music and AI-computer-composed music. The aim was to find out whether there is a negative bias against AI-computer-composed music. The research questions are: 1. How is AI-computer-composed music perceived compared to human-composed music? 2. Are there prejudices towards AI-computer-composed music? If yes, what are the prejudices? Four participants took part in a qualitative experiment and a semi-structured interview. Two music pieces were used as artifacts: one was human-composed, and the AI-computer AIVA composed the other. Although the researchers never revealed to the participants whether their favorite song was the AI-computer-composed or the human-composed one, all the participants strongly believed that their favorite song was human-composed, indicating a bias towards human-composed music.

The results also showed that the two music pieces were not perceived to have the same characteristics or to evoke the same emotions; furthermore, there was some skepticism as to whether an AI-computer-composed song could evoke the same emotions as a human-composed song. However, none of the respondents explicitly expressed negativity towards AI-computer-composed music.

Keywords: Music, AI, AI-computer, bias, human-composed, computer-composed

Table of Contents

1 Introduction
2 Background
2.1 The principles of computer-based music
2.2 The historical background of computer music
2.2.1 Computer-based algorithmic composition
2.3 One approach to music intelligence
2.4 Cope's views on the concept of recombinancy
2.5 Different points of view regarding computer-composed music
2.6 Algorithmic music composition
2.6.1 Stochastic approaches
2.6.2 Rule-based approaches
2.6.3 Artificial intelligence (AI)
2.6.4 Understanding artificial intelligence
2.6.5 Fundamentals of AI
2.6.6 AIVA - Artificial Intelligence Virtual Artist
2.7 Art and evaluation
2.7.1 Some different points of view regarding AI-music
3 Problem
3.1 Method
3.2 Participants
3.3 Ethics
4 Implementation
4.1 Artifact
4.2 Pilot Study
4.2.1 Solving the problem
5 The study
5.1 Analysis: How is AI-computer-composed music perceived compared to human-composed music?
5.2 Analysis: Are there prejudices towards AI-computer-composed music? If yes, what are the prejudices?
5.3 Conclusions
6 Concluding Remarks
6.1 Summary
6.2 Discussion
6.3 Ethics
6.4 Societal use
6.5 Future Work
7 References

1 Introduction

Musicians and composers have used computer programs to create music since the early 1950s. Today, technological advancements in algorithmic computer music allow music to be created without any human help, apart from starting the computer program. This type of autonomous music composer, also known as an artificial intelligence (AI) computer composer, has revolutionized how music can be created and has intensified the discussion around music as an exclusive product of human creativity and how humans perceive it in general.

The AI-computer-composer not only composes autonomously; its compositions are often judged to be as enjoyable as, or at times even better than, compositions created by professional human composers, which raises questions about how AI-computer-composed music relates to human-composed music. Currently, we have limited knowledge of how the two types of music are perceived by music consumers in general. Furthermore, we do not know whether the perception of the music depends on the music itself or whether there are prejudices and a bias against AI-computer-composed music, which this research, at least in part, aimed to investigate. This research aimed to answer the following questions: 1. How is AI-computer-composed music perceived compared to human-composed music? 2. Are there prejudices towards AI-computer-composed music? If yes, what are the prejudices? To answer these two questions, we developed an experiment where the participants listened to two pieces of music and answered questions during an interview.

Artificial intelligence (AI) is broadly defined as any technique that enables computers to mimic human behavior and replicate or outperform human decision-making in order to solve complex tasks autonomously or with minimal human intervention (Russell & Norvig, 2021). AI music composition programs have become significantly popular among music professionals and music consumers; therefore, AIVA, the Artificial Intelligence Virtual Artist (AIVA, 2020), served as the composer of one of the songs used as an artifact in this study.


2 Background

2.1 The principles of computer-based music

The possibility of thinking of music in terms of organized processes that follow a set of rules, also known as algorithmically composed music, can be traced in music history back to the ancient Greeks: the belief that the harmony of sounds in music is directly related to the laws of nature, as found in some of the writings of Pythagoras, Ptolemy, and Plato (Grout & Palisca, 1996). Burns (1994) defines algorithmic composition as a rule-based system, a system that follows a set of rules in order to exist. An example of an algorithmic system in music composition can be observed in serial music.

Serial music is an algorithmic music composition method first seen in Josef Matthias Hauer's 1919 publication "Law of the twelve tones" (Covach, 1990) and in Arnold Schoenberg's 1923 version of the law of the twelve tones (Covach, 1994). Schoenberg's composition method has the following basic rules: attribute the same importance to each of the 12 notes of the equal-tempered scale (C, C#, D, D#, E, F, F#, G, G#, A, A#, B), making sure that each note is played equally often, thus eliminating the pull towards any specific key (Whittall, 2008).

Schoenberg's terms explain the general idea behind the 12-tone system. First, the Theme, an assembly of all 12 tones, reads from left to right. Second, the Theme's Inversion, or Mirroring: if the theme goes up a perfect fifth, its Inversion goes down a perfect fifth. Third, the Retrograde, which literally means the theme played backward, from right to left. Finally, there is the Retrograde Inversion, which is the Retrograde version inverted. An easy way to organize the 12-tone method is the 12-tone matrix, where the Prime, the original melody, is written from left to right (P). The Inversion reads from top to bottom (I), the Retrograde backward (R), and the Retrograde Inversion from bottom to top (RI) (Whittall, 2008).
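To make the matrix construction concrete, the following sketch (our illustration, not part of the study's material) builds the 12x12 matrix from a prime row in Python; the prime row here is randomly chosen and purely hypothetical.

```python
import random

# Pitch-class names for the equal-tempered scale mentioned above.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def twelve_tone_matrix(prime):
    """Build the 12x12 matrix: the first row is the prime row (P), the first
    column its inversion (I); rows read right to left give R, and columns
    read bottom to top give RI."""
    # The inversion mirrors every interval around the first note of the row.
    inversion = [(2 * prime[0] - p) % 12 for p in prime]
    # Row i is the prime row transposed to start on the i-th inversion note.
    return [[(prime[j] + inversion[i] - prime[0]) % 12 for j in range(12)]
            for i in range(12)]

row = random.sample(range(12), 12)      # a hypothetical prime row
for line in twelve_tone_matrix(row):
    print(" ".join(f"{NOTE_NAMES[p]:>2}" for p in line))
```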

A more recent example of algorithmic music composition without the use of computers can be found in John Cage's composition "Reunion," in which he applied the principle of randomness by deriving the musical sequences from the moves of a chess match played on a photo-receptor-equipped chessboard. "The players' moves trigger sounds, and thus the piece is different each time it is performed" (Alpern, 1995, p. 2). With the help of technological advancements such as the computer, algorithmic compositions can today be written much faster and more effectively (Thornely, 2012).

2.2 The historical background of computer music

Max Mathews was an electrical engineer trained at the California Institute of Technology and the Massachusetts Institute of Technology and a researcher at Bell Laboratories. Mathews is among the first who created programs that could play music on a computer. "Mathews, seen by many as the forefather of [computer music], developed a program known as Music 1 (1957), which created the first few notes of synthesized music from a computer" (Hope & Ryan, 2014, p. 110). The first piece produced with Mathews's program was "In a Silver Scale," composed by Newman Guttman. Despite Mathews's reputation as the forefather of computer music, he acknowledged that he was not the only one working on this topic at the time (Mathews, 1969).


An example that pre-dates Mathews's computer music can be observed in the project of Australia's Council for Scientific and Industrial Research (CSIR) to build a fully electronic digital computer, for which Geoff Hill assisted with the logical design of the machine. Hill programmed the CSIR Mark 1 to perform popular tunes, such as a Chopin march and an aria from Handel's Messiah, as well as a tune composed specifically for the machine. The CSIRAC synthesized sound by sending digital pulses from a computer register to a connected loudspeaker. Several references date the first music on the CSIR Mark 1 to late 1950 or early 1951 (Doornbusch, 2005, p. 24).

Wayne Bateman wrote a book called "Introduction to Computer Music," in which he describes the evolution of electronic music composition and discusses computer music in terms of its theoretical possibilities. He advised that anyone interested in using the computer as a music tool would need not just a deep understanding of music theory and computer programming but also a fundamental understanding of how sound is produced and transmitted, so that a computer can be used to replicate or even synthesize new sounds (Bateman, 1980, p. 5). Bateman referred to the composers of computer music as "pioneers" because the ability to create original sounds purely from scratch was not available to past generations (Bateman, 1980, p. 4).

2.2.1 Computer-based algorithmic composition

Although Mathews is considered by many as the father of computer music, Bateman affirms in "Introduction to Computer Music" that computer-composed or algorithmically composed music was primarily pioneered by Lejaren Hiller and Leonard Isaacson at the University of Illinois (Bateman, 1980). Their work, described in "Experimental Music," used the Illiac high-speed digital computer and resulted in 1956 in the first substantial piece of music composed by a computer, the "Illiac Suite for String Quartet." Hiller and Isaacson had the computer generate "raw musical materials" according to various rules, then modified these materials according to specific functions, so that they could later select the best results from these modifications.

The Illiac Suite contains one of the first examples of random processes in computer music composition. The general idea is to accept or reject randomly composed pitches and rhythms using screening rules. The suite also includes probability distributions and Markov chain processes (Alpern, 1995). The three steps found in Hiller and Isaacson's system, generate, modify, and select, inspired the implementation of one of the first computer systems for automated composition, MUSICOMP, written by Hiller and Robert Baker in the 1950s. MUSICOMP later also served as the foundation for the realization of the Computer Cantata: "Since [MUSICOMP] was written as a library of subroutines, it made the process of writing composition programs much easier, as the programmer/composer could use the routines within a larger program that suited his or her own style" (Alpern, 1995, p. 3).

This process of building small, well-defined compositional functions known as "subroutines" and assembling them has proven efficient and popular; it is still used in many algorithmic composition systems today (Alpern, 1995).
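The generate-modify-select process and the subroutine style described above can be illustrated with a minimal generate-and-test sketch. The Python example below is our own illustration, not Hiller and Isaacson's actual code, and its two screening rules are hypothetical stand-ins for the kinds of rules such systems encode.

```python
import random

def generate(length=8):
    """Generate raw material: random pitches from a C major scale."""
    scale = [0, 2, 4, 5, 7, 9, 11]                  # C major pitch classes
    return [60 + random.choice(scale) for _ in range(length)]  # around middle C

def rule_no_large_leaps(melody, max_leap=7):
    """Screening rule: reject any leap larger than a perfect fifth."""
    return all(abs(a - b) <= max_leap for a, b in zip(melody, melody[1:]))

def rule_ends_on_tonic(melody):
    """Screening rule: accept only melodies that end on C."""
    return melody[-1] % 12 == 0

RULES = [rule_no_large_leaps, rule_ends_on_tonic]   # a library of subroutines

def compose(tries=10000):
    """Generate candidates and select the first that passes every rule."""
    for _ in range(tries):
        candidate = generate()
        if all(rule(candidate) for rule in RULES):
            return candidate
    return None

print(compose())
```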


2.3 One approach to music intelligence

David Cope has conducted a series of works in algorithmic composition whose basic idea comes from Mozart's dice game. Mozart's compositional dice game randomly combines fragments of musical pieces according to what the dice determine (Bateman, 1980, p. 223). Cope's primary goal was to extract short musical passages from existing works by pattern-matching techniques; the extracted passages are then replicated and combined to create a good melody.
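The dice-game principle can be sketched in a few lines. The fragment table below is a hypothetical placeholder (Mozart's published game indexed tables of pre-composed measures, one table per bar position); the point is only that chance selects among pre-written material.

```python
import random

# One pre-composed fragment per possible roll of two dice (2-12); in the
# real game each bar position had its own table of numbered measures.
FRAGMENTS = {roll: f"measure_{roll:02d}" for roll in range(2, 13)}

def dice_game(bars=8):
    """Roll two dice for each bar and look up the corresponding fragment."""
    return [FRAGMENTS[random.randint(1, 6) + random.randint(1, 6)]
            for _ in range(bars)]

print(dice_game())
```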

In the journal article "One approach to music intelligence," David Cope (1999) describes his work from 1981, Experiments in Musical Intelligence. His initial idea was to develop a computer-based program that could understand and interpret his personal musical preferences. With the help of predefined music rules and his concept of recombinancy, the computer program would produce new music of the same quality and character. Cope describes that, after some frustrating failures, the program finally produced music that was simplistic and rudimentary but theoretically correct according to the set of rules first stipulated (Cope, 1999).

2.4 Cope's views on the concept of recombinancy

Basically, recombinancy produces new music by recombining extant music into new logical successions (Cope, 1999). Cope explains that recombinancy appears everywhere as a natural evolutionary and creative process. To illustrate this, he points out that all the great books in the English language are constructed primarily from recombinations of the 26 letters of the alphabet, and that most of the great works of Western music exist as recombinations of the 12 pitches of the equal-tempered scale and their octave equivalents. Cope states that recombination does not require new letters or notes to be invented; the authenticity of the final product lies in the elegance and subtlety of the recombination. Simply fragmenting a musical work into smaller parts and randomly reassigning these fragments to new positions will, most of the time if not always, produce only nonsense. "Recombination requires extensive musical analysis and very careful recombination to be effective at even an elemental level" (Cope, 1999, p. 21).

As Cope continued refining his coding strategies, he also started recombining fragments of already known music pieces. Using different parts of a few Bach chorales, the computer program would deliver a whole new composition respecting Bach's musical rules and characteristic sound. In his continued Experiments in Musical Intelligence (EMI), Cope (1999) arrived at a new approach he called "inheriting the rules," which granted the computer program the possibility of composing in a variety of musical styles, such as those of Chopin, Palestrina, Stravinsky, Beethoven, and many others, including his own style (Cope, 1999).

My new version of Experiments in Musical Intelligence began, therefore, by separating Bach chorales into beats and saving these beats as objects. This program also stored the name of the pitch to which each voice moved in the subsequent beat, one of that work's innate instructions. I further had the program collect these beats in lexicons, groups delineated by the pitch and register of their entering voices (for example, C1-G1-C2-C3, with the numbers representing the octaves in which the pitches appear). In order to compose, the program simply chose the first beat of any chorale in its database, examined this beat's destination notes, and selected one of the stored beats with those same first notes, assuming enough chorales had been stored to make more than one choice possible.


Each new choice created the potential for different offbeat motions and different following chords while maintaining the integrity of Bach's original voice-leading rules (Cope, 1999, p. 21).
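The passage quoted above can be given a schematic reading in code. The sketch below is our simplification, with a tiny hand-made "corpus" standing in for a database of Bach chorales; Cope's actual program is far more elaborate. Each beat stores the chord it sounds and the pitches the voices move to next, and composition chains beats whose opening notes match the previous beat's destination notes.

```python
import random
from collections import defaultdict

# Hypothetical (chord, destination) pairs; pitches are MIDI numbers (S, A, T, B).
CORPUS = [
    ((72, 67, 64, 48), (71, 67, 62, 55)),
    ((71, 67, 62, 55), (72, 69, 64, 57)),
    ((72, 69, 64, 57), (72, 67, 64, 48)),
    ((71, 67, 62, 55), (69, 65, 60, 53)),
    ((69, 65, 60, 53), (72, 67, 64, 48)),
]

# Lexicon: group stored beats by the chord on which they enter.
lexicon = defaultdict(list)
for chord, destination in CORPUS:
    lexicon[chord].append(destination)

def recombine(start, length=8):
    """Chain beats so each new chord matches the previous beat's destination
    notes, preserving the corpus's original voice leading."""
    phrase, chord = [start], start
    for _ in range(length - 1):
        options = lexicon.get(chord)
        if not options:
            break                       # dead end: no stored continuation
        chord = random.choice(options)
        phrase.append(chord)
    return phrase

for chord in recombine((72, 67, 64, 48)):
    print(chord)
```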

2.5 Different points of view regarding computer-composed music

Cope (1999) tells us that his computer-composed music has delighted, angered, provoked, and terrified those who have heard it, and he believes that future composers and audiences will probably react in the same way. Although some people have shown reluctance to accept Cope's findings with his computer algorithms, Cope believes that his computer-composed music is as human as any music created from so-called personal human inspiration. "The music our algorithms compose is just as much ours as the music created by the greatest of our personal human inspirations" (Cope, 1999, p. 25).

"The computer has become an instrument for the composer and not the other way around" (Thornely, 2012, p.2). According to Thornely, the composer uses the computer only to organize her/his musical ideas and speed up the composition's writing process, thus affirming that the computer has no participation in music composition's creative process. Thornely's views around computer music are generally positive, despite his reluctance to acknowledge the possibility of computers composing without human aid. Thornely presents arguments on why he thinks the development of computer-composed music is proper and addresses the evolution of musical writing from the days of Beethoven until today's computer orchestration.

Thornely (2012) takes as an example Beethoven, who wrote his music a century and a half before any computer existed and would transcribe his musical ideas and melodies onto paper. This process could take days, and only after he had every note on paper could he ask an orchestra to play the piece so that he could listen and rework it. Thornely (2012) claims that today's composers can do the same job much more efficiently with a computer's help. Thornely refers to the computer as an essential part of modern music composing: "all these tools and programs created exclusively to help ideas become music instantaneously" (Thornely, 2012, p. 3).

Today, a whole music piece can be written with a keyboard and a mouse, with exceptional speed and excellent results: "The development of recording technology, computer software applications, the portable music, and the internet, have created a world where a song can go from a composer's head to being available to an audience of millions, within hours" (Thornely, 2012, p. 5). Thornely (2012) argues that computer programs can only aid humans during the compositional process, and he disagrees with David Cope's views regarding the roles a computer program may assume. Cope rejects the idea that only human composers and human producers participate in the creative process behind a musical idea: "When I am composing all day, I am programming. When I am programming, I am composing" (Cheng, 2009). "We have composers that are human, and we have composers that are not human" (Moss, 2015). With these quotes, Cope affirms that a computer can have the same amount of creative involvement in the music composition process as a human composer. Cope's perspective also differs from Alan Douglas's much older viewpoint regarding how much autonomy a computer program should have in the process of music creation, as seen in Douglas's text from 1973: "Science referring to computer technology and art do not speak the same language, and it would be disastrous if music became completely automated" (Douglas, 1973, p. 6).


2.6 Algorithmic music composition

This section presents three approaches to algorithmic music composition: stochastic or machine learning approaches, rule-based approaches, and artificial intelligence (AI). We also introduce AIVA, the AI music composer that created one of the experiment's artifacts in this study.

2.6.1 Stochastic approaches

Stochastic approaches involve randomness and can be as simple as generating a random series of notes, as seen in Mozart's dice music (Bateman, 1980, p. 223) and John Cage's work Reunion (Alpern, 1995, p. 2). However, an enormous amount of conceptual complexity can also be added to the computer's computations with statistical theory and Markov chains. Markov chains, a simple machine learning technique, can be used to teach a computer to create compositions that resemble human-composed music (Frankel-Goldwater, 2005).
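As a minimal illustration of the Markov chain idea (ours, not taken from any of the systems cited here), a first-order chain can be trained on a toy melody and then sampled; a real system would learn its transition table from a large corpus.

```python
import random
from collections import defaultdict

def train(melody):
    """Learn first-order transitions: which pitches follow which."""
    transitions = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, length=16):
    """Random walk through the learned transition table."""
    out, pitch = [start], start
    for _ in range(length - 1):
        followers = transitions.get(pitch)
        if not followers:
            break
        pitch = random.choice(followers)
        out.append(pitch)
    return out

corpus_melody = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60]  # toy data
model = train(corpus_melody)
print(generate(model, start=60))
```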

In the study "On Modelling Harmony with Constraint Programming for Algorithmic Composition Including a Model of Schoenberg's Theory of Harmony" (Anders, 2021) uses harmony to describe a difficult problem to solve in the field of algorithmic composition. Harmony theories can be quite complex, as evidenced by the sheer size of most harmony textbooks. Furthermore, different theories address different musical styles, such as classical music (Schoenberg, 1983; Piston, 1950), jazz (Levine, 1995), contemporary classical music in extended tonality (Persichetti, 1961), and microtonal music (Doty, 2002). Because of this complexity, machine learning techniques are ideal for modeling harmony. Hild, Feulner, and Menzel (1992), for example, present a neural network that can generate a four-part choral in the style of Johann Sebastian Bach, given a melody. This task is completed in stages, beginning with a harmonic skeleton (bass plus harmonic functions), progressing to a chord skeleton (four-part voicings), and finally adding ornamenting quavers. Boulanger- Lewandowski, Bengio, and Vincent (2012) propose a widely cited recursive neural network (with a variety of network architectures) that learns to compose polyphonic music from varying complexity corpora (folk tunes, chorals, piano music, and orchestral music). The model generates musical sequences that demonstrate acquired harmony knowledge on how to shape melodic lines, but it lacks long-term structure. Eigenfeldt and Pasquier (2010) present a real-time Markov chain-based method for generating harmonic progressions. The system is taught using jazz standards from the Real Book (Anders, 2021).

One of the most recent and successful results in evolutionary computer music employs an evo-devo strategy. Iamus is a computer cluster that combines bioinspired techniques: compositions evolve in a setting governed by formal constraints and aesthetic principles (Díaz-Jerez, 2011). Compositions emerge from genomic encodings in a manner similar to embryological development (hence "evo-devo"), resulting in high structural complexity at a low computational cost. Each composition results from an evolutionary process in which only the instruments involved and the preferred duration are specified and included in the fitness function. Iamus is capable of writing professional contemporary classical music scores, and it released its debut album in September 2012 (Ball, 2012; Coghlan, 2012), featuring ten works performed by world-class musicians (including the London Symphony Orchestra for the orchestral piece). Melomics, the technology that powers this computer composer, is also mastering other genres and transferring the results to industry (Sánchez-Quintana et al., 2013). Melomics offers music as a real commodity (priced by the size of its MIDI representation) for the first time, where ownership of a piece is directly transferred to the buyer (Fernández & Vico, 2013).
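The underlying evolutionary loop, candidates scored by a fitness function and bred over generations, can be sketched generically. The toy genetic algorithm below is our illustration only: its fitness function (rewarding stepwise motion and a closing tonic) is hypothetical, and Melomics's evo-devo genomic encoding is far more sophisticated.

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]            # C major, one octave

def fitness(melody):
    """Toy aesthetic: reward stepwise motion and ending on the tonic."""
    smooth = sum(1 for a, b in zip(melody, melody[1:]) if abs(a - b) <= 2)
    cadence = 2 if melody[-1] == 60 else 0
    return smooth + cadence

def mutate(melody, rate=0.2):
    """Randomly replace some notes, the source of variation."""
    return [random.choice(SCALE) if random.random() < rate else p
            for p in melody]

def evolve(pop_size=50, length=8, generations=100):
    population = [[random.choice(SCALE) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]      # selection
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

print(evolve())
```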


2.6.2 Rule-based approaches

Burns (1997) describes the rule-based approach to algorithmic composition as a process centered around a series of tests, or rules, through which the program progresses; these steps are designed in such a way that the product of each step leads to the next (Burns, 1997). Such rule-based processes can be observed in Kemal Ebcioglu's automated system CHORAL, which generates four-part chorales in the style of J. S. Bach according to over 350 rules (Burns, 1997). Another example of a rule-based program is David Cope's system Experiments in Musical Intelligence (EMI). Similar to Ebcioglu's CHORAL, EMI is based upon an extensive database of style descriptions and rules of different compositional strategies. However, EMI can also create its own grammar and database of rules, which the computer itself deduces from scores of a specific composer's work given as input. EMI has automatically composed music that successfully evokes Bach, Mozart, Bartók, Brahms, Joplin, and many others (Cope, 1999).

A rule-based approach enables composers to define their own harmonic language (e.g., through the use of nontraditional or microtonal scales), even if this language is quite complex. Additionally, a rule-based model is human-comprehensible, which is not always the case with machine learning models (Anders, 2021).
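To give a flavor of what one explicit rule looks like in code, here is a sketch (ours, not taken from CHORAL or EMI) of a classic voice-leading test: detecting parallel perfect fifths between two voices. A system like CHORAL encodes hundreds of such tests.

```python
def parallel_fifths(upper, lower):
    """Return the positions where two voices move in parallel perfect fifths,
    a classic prohibition that a rule-based chorale system would encode."""
    violations = []
    for i in range(len(upper) - 1):
        now = (upper[i] - lower[i]) % 12
        nxt = (upper[i + 1] - lower[i + 1]) % 12
        moved = upper[i] != upper[i + 1]             # ignore repeated chords
        if now == 7 and nxt == 7 and moved:          # 7 semitones = perfect fifth
            violations.append(i)
    return violations

# Hypothetical two-voice fragment (MIDI pitches): G->A over C->D is parallel.
print(parallel_fifths([67, 69, 67], [60, 62, 64]))   # -> [0]
```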

2.6.3 Artificial intelligence (AI)

Knowledge-based or artificial intelligence (AI) systems are those in which computer algorithms can learn which solutions are acceptable by performing a series of comparisons with previously stored data. Because it is intended to mimic how a human expert would handle a problem, this type of programming is frequently referred to as an expert system. The central assumption is that we learn from our mistakes and make better decisions in the future (Burns, 1997). Practically speaking, if the task were to create a jazz riff generator, the AI program would be asked to analyze Charlie Parker's style, paying particular attention to his use of intervals, phrase lengths, and melodic contours. Parker's style could then be recreated by the system using the analysis data. Several composers, including Charles Ames, Otto Laske, and David Cope, have taken a similar approach. Cope has spent a significant amount of time researching classical composers' styles, and his Experiments in Musical Intelligence (EMI) system is one of the most advanced of its kind (Burns, 1998).

In "AI Methods in Algorithmic Composition: A Comprehensive Survey," Fernández and Vico (Fernández & Vico, 2013) describe that composing music entails a series of activities, including melody and rhythm definition, harmonization, writing counterpoint or voice- leading, arrangement or orchestration, and engraving (notation). To varying degrees, all of these activities, as already mentioned, can also be automated by computer, and some techniques or languages are better suited to some than others (Loy & Abbott, 1985). The focus for relatively low degrees of automation is on languages, frameworks, and graphical tools to support particular and monotonous tasks in the composition process or provide the raw material for composers as a source of inspiration, also referred to as computer-aided algorithmic composition (CAAC). It is a very active area of research in commercial software development. It has been adopted in many recent software packages and programming environments, such as SuperCollider (McCartney, 2002), (Boulanger, 2000), MAX/MSP (Puckette, 2002), Kyma (Scaletti, 2002), Nyquist (Simoni & Dannenberg, 2013), or the AC Toolbox (Berg, 2011). The development of experimental CAAC systems at IRCAM5 (such as PatchWork, OpenMusic, and their various extensions) (Assayag et al., 1999) should also be highlighted (Fernández & Vico, 2013).


2.6.4 Understanding artificial intelligence

Anders (2021) affirms that both learning-based and rule-based programs have been very successful in algorithmically modeling music composition. These two broad approaches are modeled after aspects of how humans compose or learn to compose. Anders argues that, given how combining rules and learning from examples aids humans in composition, it would be very interesting to combine these two algorithmic composition approaches. This junction could be accomplished with a hybrid approach, in which the output composed by one method is refined by another, or through the development of a unifying formalism capable of implementing both rules and machine learning.

Anders (2021) discusses Jacopo Baboni Schilingi, a composer who uses constraint programming to refine parameter sequences composed using other algorithmic techniques. Constraint programming (CP) is a programming paradigm that relies on explicitly encoded compositional rules. With this paradigm, composers can directly implement both traditional rules, such as those found in music theory textbooks, and non-standard rules, such as rules formulated by composers to model their own composition techniques (Anders, 2018). Anders (2021) believes that a similar hybrid approach could be applied to refine results composed by a machine learning method using a rule-based approach.
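A minimal sketch of the constraint-programming idea (ours, not PWConstraints or any system cited here) is a backtracking search that extends a melody note by note and retreats whenever an explicitly encoded rule fails; both rules below are hypothetical examples of the kind a composer might state.

```python
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]

def no_immediate_repeat(melody):
    """Rule: no note may repeat its predecessor."""
    return all(a != b for a, b in zip(melody, melody[1:]))

def small_steps(melody):
    """Rule: successive notes may be at most a major third apart."""
    return all(abs(a - b) <= 4 for a, b in zip(melody, melody[1:]))

RULES = [no_immediate_repeat, small_steps]

def solve(length, partial=None):
    """Depth-first search with backtracking over the pitch domain."""
    partial = partial or []
    if len(partial) == length:
        return partial
    for pitch in SCALE:
        candidate = partial + [pitch]
        if all(rule(candidate) for rule in RULES):   # prune violations early
            solution = solve(length, candidate)
            if solution:
                return solution
    return None                                      # triggers backtracking

print(solve(8))
```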

Thus, Anders suggests that one approach to developing a unifying formalism could be to use machine learning to learn rules, and he explains that the first steps in that direction have already been taken. For example, Morales and Morales (1995) learned counterpoint rules (how to avoid open parallels) using Inductive Logic Programming, a machine learning technique for learning first-order logic formulas. Anglade and Dixon (2008) also used Inductive Logic Programming to extract harmonic rules from two music corpora, Beatles songs (pop music) and the Real Book (jazz), rules that express the differences between the corpora.

Genetic programming is a machine learning technique that can be used to learn formulas containing both logic and numeric relationships. Anders and Inden (2019) used genetic programming to learn rules for the treatment of dissonance from a corpus of Palestrina's music. Another possibility is to use machine learning as a unifying formalism, in which a machine learning technique is used to learn explicit rules (Anders, 2021).

At its heart, deep learning (Goodfellow, Bengio & Courville, 2016) is the computation of an error (cost, loss) between the intended output for a given input and the actual output of a neural network during training, combined with the ability to gradually adapt the network (typically the weights of its neurons) to reduce that error. The derivative (gradient) of the cost/loss function is used to determine the direction in which the weights must be adjusted in order to reduce the error. Anders (2021) believes it would be interesting to tailor the error computation by taking into account the extent to which the actual output of a neural network violates explicitly specified rules. These rules could be formatted similarly to the heuristic rules used in music constraint systems such as PatchWork Constraints (PWConstraints) or Cluster Engine, which return a number indicating the extent to which they are violated. Anders explains that when explicit rules are added to the computation of a neural network's error, the expanded error function must remain differentiable so that the learning process "knows" how to improve the neural network's weights (Anders, 2021).
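The mechanism Anders describes can be shown in a toy setting. In the sketch below (ours; the "network output" is reduced to a plain parameter vector so the gradient stays readable), a differentiable rule-violation penalty is added to an ordinary squared error, and gradient descent trades the two terms off.

```python
import numpy as np

target = np.array([60.0, 64.0, 67.0, 72.0])  # intended pitches (training data)
CEILING = 68.0                                # hypothetical rule: stay below 68

def loss_and_grad(x, rule_weight=1.0):
    """Squared data error plus a differentiable rule-violation penalty."""
    err = x - target
    violation = np.maximum(x - CEILING, 0.0)  # how far the rule is broken
    loss = np.sum(err ** 2) + rule_weight * np.sum(violation ** 2)
    grad = 2 * err + rule_weight * 2 * violation
    return loss, grad

x = np.zeros(4)
for _ in range(200):
    _, grad = loss_and_grad(x)
    x -= 0.1 * grad                           # plain gradient descent
print(np.round(x, 2))                         # -> [60. 64. 67. 70.]
```

With the rule weighted equally with the data term, the value targeted at 72 settles at 70, halfway between target and ceiling: the rule penalty bends the training outcome while the error function stays differentiable throughout.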


2.6.5 Fundamentals of AI

In the study "Machine learning and deep learning," Janiesch, Zschech, and Heinrich (2021) briefly explain the fundamentals of AI, machine learning (ML) algorithms, artificial neural networks (ANNs), and deep neural networks (DNNs).

Janiesch et al. (2021) affirm that the fundamentals of AI deal with issues such as planning, representation, and learning, using wide-ranging methods, for example case-based and rule-based reasoning (Chen, Jakeman, & Norton, 2008). Early AI research was primarily concerned with hard-coded statements in formal languages, about which a computer can then reason automatically using logical inference rules; Goodfellow, Bengio, and Courville refer to this as the knowledge base approach (Goodfellow et al., 2016). However, this paradigm has several limitations, because humans are unable to articulate all of the tacit knowledge required to perform complex tasks (Brynjolfsson & McAfee, 2017); such constraints are overcome by machine learning (Janiesch et al., 2021).

In general, ML denotes that a computer program's performance improves with experience with respect to a set of tasks and performance measures (Jordan & Mitchell, 2015). It seeks to automate the development of analytical models for cognitive tasks such as object detection and natural language translation, which can be accomplished through algorithms that learn iteratively from problem-specific training data, allowing computers to discover hidden insights and complex patterns without being explicitly programmed (Bishop, 2006).

Janiesch et al. (2021) affirm that ML is beneficial for tasks involving high-dimensional data, such as classification, regression, and clustering. By learning from previous computations and extracting regularities from massive databases, it can produce reliable and repeatable decisions. As a result, machine learning algorithms have found success in a wide range of applications, including fraud detection, credit scoring, next-best-offer analysis, speech and image recognition, and natural language processing (Janiesch et al., 2021). Janiesch et al. (2021) categorize machine learning into three types based on the given problem and the available data: supervised learning, unsupervised learning, and reinforcement learning.

2.6.6 AIVA - Artificial Intelligence Virtual Artist

This study is mainly concerned with algorithmic composition with higher degrees of automation of compositional activities than typical computer-aided algorithmic composition (CAAC). We used the computer program AIVA to generate part of the artifact in our experiment. AIVA uses techniques, languages, and tools to encode human musical creativity computationally and to carry out creative compositional tasks automatically, with minimal or no human intervention.

According to AIVA Technologies, their product AIVA (Artificial Intelligence Virtual Artist) is an artificial intelligence capable of composing and producing music in vastly different genres and styles. It can make music for video games, films, commercials, et cetera (AIVA, 2020). AIVA Technologies was founded by the computer scientist and composer Pierre Barreau, the researcher and composer Denis Stefan, and the engineer and musician Vincent Barreau (AIVA, 2020).

AIVA uses several deep learning algorithms to learn the techniques essential to composing in different music styles. Deep learning uses multiple programmed layers of "neural networks" to process information between various inputs and outputs. By this method, AIVA can understand and process high-level abstractions in data, such as the pattern in a melody or the features of a person's face (Kaleagasi, 2017). Artificial neural networks are conventional in machine learning systems and resemble learning in biological lifeforms (Aggarwal, 2018).

The type of algorithm a neural network uses is modeled after the human brain, with the purpose of recognizing patterns. Deep learning is one of the main approaches within artificial neural networks; the name is used instead of "stacked neural networks," that is, networks with several layers (Nicholson, 2019). Although the neural network is often seen as a simulation of a lifeform's learning process, a more distinct and clear view can be achieved by viewing it as multiple computational graphs: graphs that compose simple functions recurrently in order to learn more intricate functions, and then rinse and repeat (Aggarwal, 2018, p. 48).
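Read as a computational graph, a network's forward pass is literally a composition of simple functions. The sketch below is our illustration, with hypothetical layer sizes and random weights, and is not how AIVA is built.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One simple function: an affine map followed by a ReLU nonlinearity."""
    return np.maximum(weights @ inputs + biases, 0.0)

# Hypothetical shapes: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(2, 8)), np.zeros(2)

x = rng.normal(size=4)              # e.g., features of a melody fragment
h1 = layer(x, W1, b1)               # a simple function...
h2 = layer(h1, W2, b2)              # ...composed with another...
output = W3 @ h2 + b3               # ...and a final linear readout
print(output)
```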

AIVA Technologies has not specifically disclosed what types of deep learning techniques the program uses to compose music. In this study, we had no intention to explore how an AI such as AIVA composes its songs; we solely used its compositions as an artifact. Our study investigated how this type of music is perceived compared with music composed by humans. On AIVA's website, we can see the employees who currently work on the technical development team, but there is no information regarding what they have contributed to creating AIVA.

2.7 Art and evaluation

Artificial intelligence has matured and grown in scope, progressing from academic research to numerous industrial applications. Simultaneously, key projects and challenges, such as driverless cars, natural language and speech processing, and computer players for board games, have captivated the public imagination. The introduction of formal methods was critical in consolidating numerous fields of artificial intelligence; however, this has a disadvantage for fields whose subject matter is difficult to define in formal terms, as they inevitably become marginalized. That is the case with computational creativity (alternatively referred to as artificial creativity), which can be loosely defined as the computational analysis or synthesis of works of art in a partially or fully automated fashion. Compounding the marginalization problem, the two communities naturally interested in this field (AI and the arts) speak different languages (at times very different ones) and have very different methods and goals, creating significant barriers to collaboration and idea exchange, even though small and occasionally fragmented communities are engaged in research on computational creativity's various aspects (Fernández & Vico, 2013).

Computational creativity is a subfield of artificial intelligence and is divided into its own subfields, one of which addresses the scientific approach to music creation: musical metacreation (MuMe). Musical metacreation studies specifically musical tasks such as composition, performance, and improvisation. According to their relative levels of autonomy, MuMe systems can be classified from completely user-dependent tools to autonomous generative computer music systems (Pasquier, Eigenfeldt, Brown & Dubnov, 2016).

Minsky (1982) explained that once humans understand how any given subject functions, they no longer regard it as particularly intelligent but instead see it as a straightforward, mechanical process that can be objectively evaluated. On the other hand, products of human artistic creativity, such as musical compositions, have their qualities evaluated subjectively: by the artists themselves, by other composers, by audiences, and by the media. Dickie (1974) has proposed "an institutional definition of art, according to which the status of art depends on whether it receives appreciation by the art world" (Dickie, 1974, p. 431). Colton (2008) explains that "people do not always seem to look at the aesthetic qualities of artwork but are really celebrating the creativity of the artist rather than the value of the artifact" (Colton, 2008, p. 2), which suggests that romantic notions of art may be fundamental, explicitly connected with the notion of artistic genius. Hypothetically, some people may have a more sophisticated sense of appreciation that relies on a discovery process involving effort, ingenuity, and skill (Colton, 2008).

In the study "An investigation into people's bias against computational creativity in music composition," Moffat and Kelly (2006) found that there is a common bias against computer- composed pieces and that the bias was more substantial in musicians than in non-musicians (Moffat & Kelly, 2006). Moffat and Kelly investigated the following research questions: 1. Can people distinguish computer-composed music? 2. Do people prefer human-written music? 3. Is there a predisposition to dislike computer music? 4. Are people so blatantly biased that simply informing them that a computer created a piece is enough to sway their aesthetic judgment? All participants heard six Classical music pieces in order. Half of the compositions were created by a computer and the other half by a human. The musical compositions used in their study were chosen from public web pages and orbited around pieces of jazz and Classical music. The computer- and human-composed pieces were surface similar, hoping that the computer-composed compositions would be mistaken for human compositions. The strategy worked as planned, as evidenced by Moffat and Kelly's experimental results. Twenty people took part in the experiment, divided into two groups of ten "musicians" and ten "non-musicians." Participants in the "musicians" group had some kind of formal music training. Moffat and Kelly (2006) wanted to categorize participants in this way to see if the more experienced ones would be able to tell the difference between human- and computer-composed music. The two groups were nearly identical when it came to age distribution and sexes. The researchers asked the participants what they thought about the pieces and whether they thought they were computer-composed or not. The researchers could not be open about all of the experiment's goals since they wanted to investigate any eventual bias. For ethical reasons, the researchers revealed the study's aims in the second phase of the study. As the participants listened to each piece of music, they were given a brief questionnaire to fill out. Once the piece's origin was revealed, the researcher hoped to see value judgments shift. They expected participants to like a piece more if they were told that it was composed by humans and would like less if they were told a computer system composed it (Moffat & Kelly, 2006).

In the study "Emotional context of music produced by humans versus computer-composed music" by Nijs, Abeelen, Roex, Bonné, and Hammond (2015), a relatively new research field regarding the differences between human-made music composition and computer-composed music can be observed. The study's primary purpose was to analyze consumers' emotional responses to these two different types of music and see if those responses related to each other at any level. The musical compositions used in their study were chosen from public web pages. The participants were divided into two groups; one group would be listening to human-composed music, while in the other group, the participants would be listening to computer-composed music. The researchers informed the participants about what kind of

11 music they would be listening to, and based on the data acquired; three hypotheses had been composed:

H1: Will people find it easier to assign emotions to a song when they believe a human produced it?
H2: Will people evaluate music as better when they believe humans produced it, even though it was actually computer-composed?
H3: Are people able to pinpoint whether the composition was human-composed or computer-composed? (Nijs et al., 2015, p. 3)

Classical music was chosen as the experiment's music style. A computer-composed classical piece was created with an automated computer program called ANTON 2.0, and a completely unknown human-composed classical piece was also selected. The participants were only allowed to listen to fragments of these two compositions and were afterwards asked to answer a brief questionnaire. The experiment showed that most participants identified mainly with the human-composed piece and seemed to accept the computer-composed piece quite well. The study concluded that people had an easier time assigning emotions to human-composed music than to computer-composed music; thus, the researchers found a connection between emotions and beliefs. The study's intent was to analyze whether the participants' reactions towards these two types of music composition would present any bias against computer-composed music (Nijs et al., 2015).

2.7.1 Some different points of view regarding AI-music

In the article "Creative AI: Computer composers are changing how music is made," Moss (2015) talks about how algorithmic music-composition programs can compose songs that do not necessarily need any editing or polishing from human composers. He gives "Omusic" as an example, a song composed in 2008 by the computer program Melomics109, a music-focused AI created by researchers at the University of Malaga. After Melomics109, the same team at Malaga University developed Iamus in 2010, a new version of the previous AI that is supposed to be even better than its predecessor. Both use strategies modeled on biology to learn and evolve (Moss, 2015).

The artificial composer's learning process is not much different from the human composer's. It learns the rules of composition: for example, instruments have physical constraints, extended chords with more than six notes cannot be played with one hand on the piano, and certain polyphonic combinations fit a given style while others do not (Sánchez et al., 2013, p. 102).

A negative review of Iamus can be observed in an article written by Tom Service in The Guardian (Service, 2012); the writer mentions that even though a bias against a faceless computer might have influenced him, he did not find the technology impressive. "Iamus's Hello World! for piano, clarinet, and violin ought to pose existential questions about the integrity of musical composition, to blow holes in the fallacy that every note a human composer writes comes from a wellspring of emotion and deep thought unique to our consciousness, and to show the difference between human genius and automatically composed algorithms in a modernist composition is effectively naught. Now, maybe I am falling victim to a perceptual bias against a faceless computer program, but I just do not think Hello World! is especially impressive" (Service, 2012).


The perceptual bias against "a faceless computer program" may occur for many different reasons, and fear of progress can be a common factor. Cope affirms in Moss (2015) that AI will not put professional composers, songwriters, or musicians out of work: "We have composers that are human, and we have composers that are not human" (Moss, 2015). The presence of bias against algorithmically computer-composed music can also be seen in an interview with Eduardo Miranda, one of the leading researchers in AI music, who answered that he is not interested in AI systems that compose entire pieces of music automatically: "I find pieces of music that are entirely composed by a computer rather unappealing" (Trandafir, 2016, p. 1).

Trevor Wishart shared his opinion regarding algorithmic computer music, "for me it is not music" (Wishart, 2008, p. 7), and justifies his disbelief by saying that once the algorithm is set in motion, its uncertain initial conditions and unpredictable inputs make the sound or graphics output also unpredictable. Wishart concludes that the results of algorithmic composition are "epiphenomena or by-products of the process rather than its goal" (Wishart, 2008, p. 7).

Another case of bias against algorithmically computer-composed music can be observed in the repercussions of Experiments in Musical Intelligence (EMI), popularly known as Emmy, Cope's computer program developed to compose songs based on the styles of past famous composers. For not being a human composer, Emmy suffered harassment from audiences, which also caused some record companies and musicians to decline offers to include Emmy's compositions in their work. Despite having received positive reviews from the media, Emmy was severely criticized by some professionals in the fields of music and technology, one in particular being Douglas Hofstadter (Blitstein, 2010, p. 8). Hofstadter, a Pulitzer Prize-winning cognitive scientist at Indiana University and a reluctant champion of Cope's work, has recounted in dozens of lectures worldwide during the past two decades that "Emmy really scares him" (Blitstein, 2010, p. 7). "Like many art aficionados, Hofstadter views music as a fundamental way for humans to communicate profound emotional information" (Blitstein, 2010, p. 8). Blitstein comments: "Cope has developed a complex relationship with his critics, and with people like Hofstadter, who are simultaneously awed and disturbed by his work. He denounces some as focused on the wrong issues. He describes others as racists, prejudiced against all music created by a computer" (Blitstein, 2010, p. 8).


3 Problem

As far back as the early 1950s, engineers and musicians alike have aspired to new technologies that could help them compose music and make the compositional process faster and more efficient. Today, technological advancements in the use of computers in music composition allow humans not only to use the computer as a tool for organizing their ideas or finding musical inspiration for their pieces but also to extract complete music compositions from it, compositions that at times seem to rival, or even surpass, many human music creations.

In the early decades of computer music, discussions emerged regarding the possibility of computers one day composing music autonomously. There were several points of view, as we can observe in the following: "Science referring to computer technology and art do not speak the same language, and it would be disastrous if music became completely automated" (Douglas, 1973, p. 6). As time progressed, the narrative started to change, as David Cope, creator of a computer-based music program, shared his point of view on the matter: "We have composers that are human, and we have composers that are not human" (Moss, 2015). With this quote, Cope affirms that a computer program can share the same amount of creative involvement in the music compositional process as human composers do. Cope (1999) also argues that his computer-composed music is as human as human-composed music, changing the narrative, previously believed, that technology and art do not speak the same language.

Decades later, we can still observe that Douglas was not alone in believing that computers could never compose music as humans can. Thornely (2012) affirms that he favors the technological developments in music and understands that computers play an essential role in music composition today, but doubts a computer program's ability to write songs autonomously. He argues that computer-music compositions are no match for human-composed songs. Thornely also affirms that composer-computer programs are nothing more than tools to help him compose.

The differences of opinion between Douglas, Thornely, and Cope regarding whether computer programs could ever compose music as well as humans remain open-ended questions. Taking into consideration what Colton (2008) explained in chapter 2.7, Art and evaluation, regarding how people usually perceive art in general: "people do not always seem to look at the aesthetic qualities of artwork but are really celebrating the creativity of the artist rather than the value of the artifact" (Colton, 2008, p. 2), we discuss whether some of the reasons why some people might choose the human composition over the AI-computer composition could be interlinked with the need to validate and celebrate human creativity rather than the artifact itself.

This study's primary objective is to determine whether people react differently towards two types of musical composition, human-composed music and AI-computer-composed music, and to investigate whether a bias against AI-computer-composed music exists. The research questions are: 1. How is AI-computer-composed music perceived compared to human-composed music? 2. Are there prejudices towards AI-computer-composed music? If yes, what are the prejudices?


3.1 Method

In order to answer the research questions, we made use of qualitative methods. An experiment was conducted in which the participants answered questions during a semi-structured interview. First, the AI computer music composer AIVA independently composed one song, without any human assistance except for starting the program. Thereafter, a song with the same parameters was composed by humans. Prior to the interview, the participants listened to the two songs without knowing that one of them was an AI-computer composition, and then answered questions in a semi-structured interview regarding their perception of the two compositions.

The purpose of an experiment is to show how a dependent factor is affected by an independent factor (Denscombe, 2009). In this experiment, the dependent factor was the music experience, and the independent factor was the possible bias, which we aimed to control by not telling the participants what exactly they were listening to. This differs from the study by Nijs et al. (2015), where the researchers informed their participants about what kind of music they would be listening to.

Experiments can be beneficial considering their reusability. With the procedure well explained and all the variables controlled, the experiment could be reapplied, strengthening the study's validity (Denscombe, 2009). The challenge, though, was actually controlling the variables. This was not a laboratory experiment, and it involved humans, whose behavior can be both unpredictable and dependent on context. It is well known that humans change their behavior once they know they are being observed (Denscombe, 2009). The interviews might have compensated for this, since they provided the interviewer with the opportunity to follow up on the answers, compared to an experiment, which should be conducted in the same way every time.

The interviews were audio-recorded and subsequently transcribed. The researchers thoroughly read the transcriptions to become familiar with the data acquired, as suggested by Williamson and Bow (2002). The data were later broken apart into central themes, so-called categories (Williamson & Bow, 2002), and these categories were analyzed in more depth. Examples of categories that emerged were "prejudices," "bias," and "positive emotions." The central answers to each question in the interview guide were also analyzed. The interviews were semi-structured, which means that we used the interview guide but also allowed for some flexibility; the participants had the opportunity to develop their ideas and explain their answers (Denscombe, 2009). The disadvantage of semi-structured interviews, compared to structured interviews, is that it is difficult to standardize the answers. Nevertheless, since this was a small-scale study, the semi-structured format did provide answers with more depth.

It is essential to be aware that the researcher's bias and identity could affect the participants' responses. Participants might try to please the researcher by answering what they think the researcher wants to hear. Gender, age, ethnicity, and language are other factors that might affect the answers (Williamson, Bow & Sturt, 2002). So, while conducting our experiment, it was crucial that we, the interviewers, reacted neutrally to the participants' responses and took a passive role that would not provoke them (Denscombe, 2009).

Different methods serve different purposes, but they share the common goal of providing us with new knowledge. This was a qualitative study, which means that the main goal was to gain a deeper understanding of what was being investigated (Holme, Solvang & Nilsson, 1997). A qualitative study does not intend to investigate whether the results have general validity or can be generalized to a larger population; instead, the aim is to investigate the complexity and give a richer analysis (Holme, Solvang & Nilsson, 1997). Qualitative analysis is how the researcher makes sense of the collected data (Williamson & Bow, 2002). The data analysis in this research used a qualitative method, and the words in the interviews were central, rather than any statistics (Denscombe, 2009). Another characteristic was that the findings were descriptive rather than analytic, since this was a small-scale study (Denscombe, 2009).

3.2 Participants

Four participants were recruited for the study, an appropriate number for a qualitative study given the limited resources and the quantity of data collected (cf. Holme, Solvang & Nilsson, 1997). The only requirements for participation were a good understanding of the English language, some interest in music, and some knowledge of the music genre we were using. Two of the four participants had knowledge of music theory, while the other two did not. The participants in this experiment were men between 25 and 40 years of age. They had different educational and professional backgrounds and different nationalities. The volunteers interviewed here were not a representative selection of participants, and their opinions might not be typical of a larger population, but neither is that the aim of a qualitative study (Holme, Solvang & Nilsson, 1997). The participants were recruited on our social media platforms.

3.3 Ethics

Following the guidelines from Vetenskapsrådet (2011), participation was voluntary, and the participants had the right to withdraw their participation at any time during the experiment without further explanation. The volunteers were asked to consent to participation in the study by signing a contract of consent and were informed that the interviews would be audio-recorded (see Appendix A). We also granted the participants confidentiality, and they had the chance beforehand to ask for any additional information. No sensitive information was collected.


4 Implementation

In this project, two soundtracks were created: one composed by an AI-computer-composer and the other composed by human composers. The study aimed to investigate whether there was a bias against AI-computer-composed music. Therefore, we wanted to explore how two pieces of music were perceived when the study's participants did not know whether the piece they were listening to was AI-computer-composed or human-composed. One of the main challenges in creating this artifact was to find a way to compose two songs that were comparable, in the sense that they included more or less the same features, while still not being identical.

According to previous studies, AI-computer music provokes many different feelings (Cope, 1999). With two similar songs, the participants' personal preferences regarding musical dynamics and structure should not compromise their impartiality when answering the questions during the interview. Therefore, we tried to eliminate factors in the experiment that might have affected how the two songs were perceived.

Using AIVA, it was possible to obtain a substantial number of potential artifacts suitable for this experiment. First and foremost, we decided to use songs from the genre of classical music. Although AIVA can also be programmed to compose in a specific classical music period, for example Medieval, Baroque, or Romantic, the specific period of the composition was not relevant to the artifact's main purpose in this study. We decided to let AIVA compose freely within the boundaries of the chosen music genre. This seemed to be the most appropriate choice, since this study aims to observe human responses towards music in general and not responses towards Baroque, Romantic, or Medieval music.

4.1 Artifact

AIVA (AIVA, 2020) chose to create its composition in the key of C major, in 4/4 time, at 90 bpm, in the Modern Cinematic – Symphonic Orchestra style, with strings (Cello, Viola, Bass, Violin) and brass (Horn, Trumpet, Trombone, Tuba). Its form is Intro, A, Bridge, and A1. AIVA's composition is 1:24 minutes long. The song was downloaded in mp3 format and named Soundtrack 2#. See figure 1 for an overview of the settings in AIVA.


Figure 1 AIVA (2020)
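To make the fixed constraints concrete, the settings above can be summarized as a simple data structure. This is only a sketch for the reader's benefit: AIVA exposes these choices through its preset menus, not through code, and the field names below are our own shorthand.

```python
# Illustrative summary of the compositional constraints used for the AI
# artifact (and later reused for the human artifact). Field names are our
# own shorthand, not part of AIVA's interface.
aiva_settings = {
    "key": "C major",
    "time_signature": "4/4",
    "tempo_bpm": 90,
    "style": "Modern Cinematic - Symphonic Orchestra",
    "strings": ["Cello", "Viola", "Bass", "Violin"],
    "brass": ["Horn", "Trumpet", "Trombone", "Tuba"],
    "form": ["Intro", "A", "Bridge", "A1"],
    "length": "1:24",
    "output": "Soundtrack 2#.mp3",
}
```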

To test AIVA's ability to generate new compositions without being repetitive, AIVA was tasked to compose 15 tracks with the same parameters and preset style. The program did not duplicate any of the compositions, confirming its ability to generate new compositions every time. From the 15 new compositions, one was chosen to serve as an artifact and as a source of guidance while creating the human composition. The decision to use that specific song was based on our taste and not on any other specific technical reasons, since all 15 songs were more or less equal in their musical characteristics. Once we had decided which AI-computer-composed piece to use, the human composition was written. See figure 2 for an overview of the composed songs.

Figure 2 AIVA-composed songs
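Verifying that none of the 15 tracks duplicated each other was done by ear. As a minimal automated complement, one could at least confirm that no two downloaded files are byte-identical, for example by hashing them. The sketch below assumes the tracks sit in a local folder (the folder name is hypothetical), and it only catches exact duplicates, not musical similarity.

```python
# A minimal sketch of one way to verify that none of the generated files
# are exact duplicates: hash each file and look for repeated digests.
# This only detects byte-identical output; musical similarity between the
# 15 tracks was judged by listening, as described above.
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_exact_duplicates(folder: str, pattern: str = "*.mp3") -> list[tuple[Path, Path]]:
    """Pair up files in `folder` whose contents are byte-identical."""
    seen: dict[str, Path] = {}
    duplicates: list[tuple[Path, Path]] = []
    for path in sorted(Path(folder).glob(pattern)):
        digest = file_digest(path)
        if digest in seen:
            duplicates.append((seen[digest], path))
        else:
            seen[digest] = path
    return duplicates

# Example (hypothetical folder name for the 15 AIVA-generated tracks):
# print(find_exact_duplicates("aiva_tracks"))
```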

To produce a human artifact that would not sound remarkably different from AIVA's musical piece, AIVA's key, rhythm, tempo, instruments, assembly, form, and length were also used in our human composition. We mapped out the nuances and dynamics of the different colors in AIVA's composition so that our human composition would follow a similar structure. See figures 3 and 4 for the notes that AIVA used.

Figure 3 AIVA composition


Figure 4 AIVA composition 2

The human soundtrack was composed in the key of C major, in 4/4 time, at 90 bpm, in the Modern Cinematic – Symphonic Orchestra style, with strings (Cello, Viola, Bass, Violin) and brass (Horn, Trumpet, Trombone, Tuba). Its form is Intro, A, Bridge, and A1. The human composition is 1:27 minutes long. The song was downloaded in mp3 format and named Soundtrack 1.

The Cubase 9.5 (2017) DAW was used in the compositional process and the final production of the human soundtrack. Some of the MIDI instruments were found in Cubase 9.5 (2017), while other MIDI instruments were taken from The Orchestra (2017). In figure 5, one can follow the compositional process in Cubase.


Figure 5 Cubase

It was more challenging than we thought to create a new soundtrack using the same presets AIVA used in its composition, since it made it too easy for us to fall into the trap of copying the AI song. Being aware of this risk, we decided to divide the human-music compositional process between the two of us. Composer one worked on the intro and bridge, while composer two worked on creating parts A and A1. Once the compositional process for the human soundtrack was concluded, we listened to both songs (the AI-computer composition and the human composition) several times to identify and remove any elements that were too similar between the two. We did not want a copy of the AI-computer song, only a song with the same characteristics. In order to accomplish the task, it was necessary to somehow "forget" how the AI composition sounded.

The amount of human input used in the creation of our AI artifact is typical of the amount of information given to a human composer when a piece of music is commissioned for a film, a game, or similar; for example, the theme it is supposed to accompany and its approximate duration. In sum, two soundtracks were created to be used as artifacts in this study, one composed by AIVA and one composed by human composers.

4.2 Pilot Study

In order to be certain that the developed artifact would serve its purpose in this study, a pilot study was conducted. Two participants were selected to test the artifact. They listened to two songs, one composed by AIVA, the artificial intelligence computer, and one composed by human composers. Afterwards, they were invited to answer a few open questions regarding their experience while listening to the two soundtracks; the participants were never told that one of the two compositions was AI-computer-composed. A videoconference took place for about 10 minutes, where the musical compositions were discussed using a shortened version of the official interview guide.


Primarily, the idea was to ask all questions in the interview guide, but the answers obtained from the first two questions helped us understand that one of the artifacts required more work. The two questions asked were: 1. Which soundtrack was most enjoyable to listen to? 2. Which song from the two soundtracks is your favorite? In the following four excerpts from the pilot interview, it became evident that the two pieces of music were too similar:

Excerpt 1. "I believe both of them are equally enjoyable. I believe that there is no one and two, I believe that both are telling the same story, so for me, they are one."

Excerpt 2. "I would classify them both as one; to be honest, they are like a multi-part track. I don't know if there is a part three or four as well."

Excerpt 3. "Humm, they sounded very similar."

Excerpt 4. "I would say probably number one, but I wouldn't know why, hum, I did not see a big difference between the two."

While creating the human composition, we discussed the implications of composing a song by using another song as a guide and the danger of becoming too influenced by AIVA's composition and copying it instead. The pilot study helped us understand that the first version of our human composition was too similar to AIVA's composition. The participants did not focus on the experiences provided by each soundtrack; rather, their focus was on whether or not they were listening to two different compositions. As previously explained, to create the human soundtrack we analyzed AIVA's composition by paying close attention to its tempo, rhythm, instruments, key, style, and form. We also mapped out the nuances and dynamics of the different colors in AIVA's composition so that our human composition would follow a similar structure.

4.2.1 Solving the problem

Creating a new soundtrack using the same presets AIVA had used in its composition was a real challenge, since it made it too easy for us to fall into the trap of copying the AI song. We had lengthy discussions about how to minimize the risk of committing plagiarism while composing a song that should only resemble the AI-computer composition. We concluded that instead of each of us creating individual soundtracks and choosing the one believed to be the most suitable for this study's purpose, the human-music composition should be divided between the two of us. Composer one would work on the composition's Intro and Bridge, while composer two would create parts A and A1. Later, one of us would collect all parts and unify them into a single soundtrack.

The result of this composition strategy was what the participants in the main study listened to while the experiment took place. Another compositional strategy we considered to solve the issue was to invite other music composers and instruct them to compose a soundtrack with the same instruments, parameters, and preset AIVA had used while generating its composition. These composers would not be allowed to listen to the AI composition before their own composition was ready. This way, the possibility that these new human compositions would sound like, or be identical to, the AI-computer composition would also be drastically reduced, but this was not possible due to limitations in time and resources.


5 The study

In the following, the findings from the conducted study will be presented. Excerpts from the interviews will also be presented to represent and support the main findings. Four male participants aged between 25 and 40 took part in the study.

Firstly, the AI-computer music composer AIVA independently composed one song, without any human assistance except for starting the program. Thereafter, a song with the same parameters was composed by humans. Prior to the interview, the participants listened to the two songs, without knowing that one of them was an AI-computer composition, and answered questions in a semi-structured interview regarding their perception of the two compositions.

The participants could discuss their choices based on their individual musical experience and perceptive abilities, as well as any basic knowledge regarding technical characteristics in music such as melody, harmony, color, tempo, and rhythm. We conducted individual interviews with each participant, using an interview guide with the following questions:

1. Which soundtrack was most enjoyable to listen to?
2. What made you choose this specific song and not the other song? Was it the melody, harmony, tempo, rhythm, or something else?
3. Did you like the other song?
4. The song you have chosen, would you buy it?
5. Do you recall if the song you have chosen as your favorite, while you were listening to it, evoked any type of emotion that you could describe using only one word?

Once part one was concluded, we engaged the participants in the final stage of the experiment. It consisted of revealing the existence of a non-human composition among the two songs they had listened to and asking them to identify the one composed by AIVA. Due to national health recommendations of social distancing related to the ongoing pandemic at the time of data collection, the participants were recruited online using our networks on different social media platforms. The only information provided about the study was that it aimed to investigate the perception of music.

The online video-conference platform Messenger was used as a communication tool between the researchers and the participants. We suggested a meeting time that suited both parties, and the two pieces of music were sent by email to the participants about half an hour before the interview. The study's primary purpose was not revealed; instead, the participants were told simply to form an opinion about the two pieces of music while listening to them. Thereafter, a videoconference took place for about 15 minutes, where the musical compositions were discussed using the interview guide.

5.1 Analysis: How is AI-computer-composed music perceived compared to human-composed music?

Two of the participants chose the first song, the AI song, as their favorite, while the other two participants chose the second song, the human-composed song, as their favorite. Some of them used technical terms to motivate their choice, as in the following excerpt, where the participant preferred the transitions between parts in the second song:

Excerpt 1: In the second song, I felt that the transitions between the different parts of the song were smoother or made more sense than in the first song, while in song number one, the transitions would kind of leap from one section to the other.

One participant thought the second song helped him to forget about the hardship that he was going through at the moment, while another participant thought the first song helped him to visualize another physical space, namely the sea: "it gave me the idea of sailing and to be at sea or in front of the sea, anything involving the sea generally." The results show that both of the songs were capable of evoking emotions in the participants.

During the interviews, one theme raised by the participants concerned their emotional connection to AI-computer-composed music compared to human-composed music. A significant part of the answers given in the interviews related to how the participants "felt" about the music compositions, rather than how they perceived them, as excerpt seven exemplifies:

Excerpt 7: The capacity of fulfilling human needs, such as the necessity of being understood or the need expressed by humans in pertain or belong to a group, can only occur human to human. I believe. Or that is how I see it.

In general, the participants assumed that art created by other fellow humans is automatically loaded with emotions related to real-life experiences, while art created by AI computers is not.

5.2 Analysis: Are there prejudices towards AI-computer- composed music? If yes, what are the prejudices?

The participants' accuracy in identifying the AI-computer-composed music was never revealed to them. Regardless of whether a participant chose the first or the second song as their favorite, and independently of how they motivated their choice, they believed that the song they liked best was human-composed. They were all convinced of it, and as one of the participants expressed, "it got to be! [the second song]." When asked whether they were sure about their guesses and whether they would like to change their mind now that they knew one of the songs was AI-computer-composed, one participant expressed some insecurity about whether he had guessed "right" and whether it would matter:

Excerpt 2: On one hand, I feel like I want to know which one was developed by AI, but my other perception is that it does not really matter; I have enjoyed both of them.

Once it was revealed that one of the two songs was AI-computer-composed, the participants instantly expressed being surprised and amazed, which is exemplified in the following excerpt:

Excerpt 3: I would say that I am amazed; I did not think that an AI would be able to create something that feels to me like a pretty strong human experience just as well as the human compositions do.


All the participants tried to motivate their choices, either by arguing that their chosen song was human-composed or by explicitly claiming that it would not matter.

None of the participants explicitly claimed to be against AI-computer-composed music, and they did believe that the AI-computer program had the capacity to create music. They had no problems with it "as long as it is good," as one of the participants put it. However, the participants unanimously expressed, each in their own way, that AI-computer-composed music and human-composed music would not fulfill the same purposes, because one of them suffers from a lack of genuine human emotions. Another aspect linked to human emotions, stressed quite frequently by the participants, was what they called "the human connection" between the music consumer and the composer. It became evident that music is perceived to be more than a pleasant listening experience. For example, one participant stressed the relationship that he believes he would miss with AI-computer-composed music:

Excerpt 4: I believe that I could never have such a strong connection or relationship with an AI computer as I have with a human being […] I would want to shake that person's hand and let the person know that I appreciate, you know?

Another participant compared AI-computer-composed music to a beautiful partner without personality, whereas human-composed music would be a beautiful partner with personality, saying that there are many aspects beyond the music itself that affect how it is perceived once you know which piece is composed by the AI.

Once it was revealed to the participants that an AI-computer had composed one of the songs, we asked them to comment generally on their point of view or share any concerns they might have after being informed of the existence of AI-computer-composed music among the two. All the participants expressed some ambivalence towards the phenomenon of AI-computers in music composition, even if they at first defended their choice of music by arguing that it would not matter whether it was AI-computer-composed or not. As mentioned earlier, none of the participants wanted to change their choice of favorite soundtrack. Still, they did express some concerns regarding a few possible ethical dilemmas, for example whether human composers would lose their jobs and whether AI-computer-composed music would become the primary source of music in the films and games of the future, as exemplified in excerpt 5:

Excerpt 5: I kind of feel bad for the person that might get replaced by the AI composer, […] but for my own sake, I do not mind that music could be composed by an AI computer.

There was also some ambivalence regarding the economic value that AI-computer-composed music should have. One of the participants speculated about the financial cost attached to the development of an AI-computer-composer: "the technology probably has cost a lot to be created […]." There was a consensus that they would not rule out paying for AI-computer-composed music, but they were unsure whether they would be willing to pay the same amount of money for it as they would for human-composed music. The participants also expressed that they would like to be informed beforehand whether the song they intend to buy was composed by an AI-computer-composer or a human composer, so they would not feel "cheated" by the music industry. They would like to be given the chance to pay what they believe to be fair for each of the products, as excerpt six exemplifies.


Excerpt 6: I guess it should be ok to pay the same price for both, I would be getting a similar product after all, but no, I would try to get it cheaper; otherwise, I would also feel like I have been cheated.

5.3 Conclusions

At the very beginning of the experiment, while still unaware that one of the tracks they were asked to listen to was AI-computer-composed, the respondents did not behave or express any thoughts indicating that either song failed to connect with them emotionally; at that moment, both songs were perceived to be human-composed.

In the second part of the experiment, once the existence of an AI-computer-composed song among the tracks was revealed to the participants, the two music pieces were suddenly no longer perceived to have the same characteristics or evoke the same emotions, leading us to conclude that there are some indications of a bias against AI-computer-composed music. Even though the researchers had not revealed to the participants whether they had chosen the AI-computer-composed song or the human-composed song as their favorite, all participants thought their favorite song was human-composed, displaying a possible bias in favor of human-composed music. The participants also assumed that AI-computer-composed music could not offer the same emotional experience, thus indicating a possible bias against computer-composed music.

Another finding is that the participants expressed some ambivalence towards AI-computer-composed music. Although nobody claimed to be against the AI-computer, all respondents stressed that music is perceived to be "more" than the audio experience, indicating prejudice against AI-computer-composed music. Prejudice could also be seen in the ambivalence some participants showed regarding whether they were willing to pay the same price for both types of musical compositions. In sum, the participants did not believe that AI-computer-composed music could replace human-composed music, and music was perceived to include more aspects than musical quality.


6 Concluding Remarks

6.1 Summary

Today, AI-computer programs can be very efficient at autonomously composing music. The involvement of the human composer can be as little as starting the program for the AI-computer to begin composing. However, the use of computers in music composition is, and has been, a controversial issue. There are those who claim that there are no differences between human-composed and computer-composed music (see, for example, Cope, 1999), while others strongly affirm that computers can only aid the human composer in their creations, not the other way around, and also argue that only the human composer can evoke human emotions through music (see, for example, Thornely, 2012). The objectives of this study were to investigate in more depth how respondents perceive human-composed music compared to AI-computer-composed music and to find out whether a negative bias against AI-computer-composed music exists. The research questions were 1. How is AI-computer-composed music perceived compared to human-composed music? 2. Are there prejudices towards AI-computer-composed music? If yes, what are the prejudices?

Four participants took part in a qualitative experiment and a semi-structured interview. The participants were all males between 25 and 40 years old and had different nationalities and professional backgrounds. They were recruited via the authors' social media networks. The interviews took place online and lasted about 15 minutes each. They were audio-recorded and later transcribed. Prior to the interviews, the participants were informed that they were about to listen to two pieces of music and give their opinions about them, but they were unaware that one of the pieces was AI-composed. Two music pieces from the classical music genre were used as artifacts; one was human-composed, and the AI-computer AIVA composed the other.

The main results showed that although the researchers had not revealed to the participants whether they had chosen the AI-computer-composed song or the human-composed song as their favorite, all the participants strongly believed that their favorite song was human-composed, thus indicating a bias towards human-composed music. The results also showed that the two music pieces were not perceived to have the same characteristics or evoke the same emotions; furthermore, there was some skepticism as to whether an AI-computer-composed song could evoke the same emotions as a human-composed song. Once informed that one of the two songs was composed by an AI-computer program, the participants were given the option of changing their favorite. Unanimously, they decided to stay with their first choice; however, they did not believe that AI-computer-composed music could replace human-composed music; to them, music includes more aspects than musical quality. In sum, AI-computer-composed music was perceived by the participants as outstanding but inferior to human-composed music. The participants assumed that AI-computer-composed music could not offer the same emotional experience as human-composed music usually does, and stressed that it might even threaten human jobs and should therefore have a lower economic value. All the participants expressed ambivalence towards AI-computer-composed music, even though they believed that it also has great potential. The results from this study are in line with previous research on the perception of AI-computer-composed music (Nijs et al., 2015; Moffat & Kelly, 2006).


6.2 Discussion

This study analyzed how musicians and music listeners perceive AI-computer-composed music. This research aimed to find answers to the following questions: 1. How is AI-computer-composed music perceived compared to human-composed music? 2. Are there prejudices towards AI-computer-composed music? If yes, what are the prejudices?

Initially, the experimental design aimed to see whether there was a bias in the choice of music and whether the participants would change their minds once they knew that one of the two artifacts was AI-computer-composed. Despite being quite surprised, all participants stayed with their first choice, claiming that it did not matter if it was AI-computer-composed music. Still, all of the participants truly believed they had chosen the human-composed song as their favorite. Later on, ambivalences regarding AI-computer-composed music were revealed. Considering this finding, we argue that there are some indications of a bias against AI-computer-composed music and that the methods used in this experiment managed to capture it. However, this is a small-scale study, and the results cannot be generalized. The participants did not explicitly express any bias against the AI-computer-composer. Still, based on their answers, it could be observed that the majority believed that human-composed music is the only music capable of truly connecting with them emotionally.

Although the researchers had not revealed whether the participants had chosen the AI-computer-composed song or the human-composed song as their favorite, all the participants strongly believed that their favorite song was human-composed. Taking into consideration what Colton (2008) explained regarding how people usually perceive art in general, "people do not always seem to look at the aesthetic qualities of artwork but are really celebrating the creativity of the artist rather than the value of the artifact" (Colton, 2008, p. 2), we could speculate that some of the reasons why all participants believed they had chosen the human-composed music over the AI-computer-composed music could be linked to the need to validate and celebrate human creativity rather than the artifact itself.

Cope (1999) argues that his computer-composed music is as human as human-composed music, but also writes about some ambivalent reactions that computer-composed music may provoke in some humans. The same can be said of the results found in this study; in the early stages of the experiment, none of the participants complained that one of the songs did not sound like human-composed music. All the participants perceived the AI-computer-composed music to be as human as the human-composed music. However, once the existence of an AI-computer-composed song among the two soundtracks was revealed, it did provoke some ambivalence about whether it should be assigned the same value.

Thornely (2012) argues that music composition today involves computers. He favors the technological developments but affirms that the computer is no more than a tool to help him compose, and that the computer could never be an independent composer, since he does not believe that computer compositions alone could be as good as human-composed songs. Blitstein (2010) writes that Hofstadter, the Pulitzer Prize-winning cognitive scientist at Indiana University and reluctant champion of Cope's work, who in dozens of lectures worldwide during the past two decades has recounted that Emmy scares him, believes that computer-composed music will never be better than human-composed music because "he views music as a fundamental way for humans to communicate profound emotional information" (Blitstein, 2010, p. 8).

Thornely (2012) does not precisely address why he has invested so much effort in trying to prove that computers cannot compose music as humans can. Despite the ambivalent reactions the participants in this study expressed regarding AI-computer-composed music and human-composed music, and although they perceived AI-computer-composed music to be inferior to human-composed music, they unanimously believed that the AI-computer program composed great music. Perhaps Thornely and the participants in our study have something in common, which could be directly related to their mutual disbelief that music composed by computers can be as good as music composed by humans.

Most participants in our study also showed signs of disbelief in the capacity of AI-computer-composed music to provoke in them the "human-emotional responses" that human-composed music does, which is quite in line with Hofstadter's view of the role he believes music exerts in human lives, as reported by Blitstein (2010). The participants affirmed that they experienced these responses exclusively while listening to the human-composed song, which, according to their testimony, goes beyond the music itself and could be illustrated as the sense of having an abstract personal bond with the composer's musically expressed emotions, or even an aspiration to meet the composer in person, as exemplified in excerpt 4.

Excerpt 4: I believe that I could never have such a strong connection or relationship with an AI computer as I have with a human being […] I would want to shake that person's hand and let the person know that I appreciate, you know?

In Nijs et al. (2015) and Moffat and Kelly (2006), automated computer compositions and human-composed classical songs served as artifacts. None of the compositions in their studies were explicitly designed for their experiments. Both types of songs were collected from free websites, with the choice based solely on their similarity to the automated-system pieces. It is easy to find AI-computer music online nowadays, and we could have used such AI-music pieces in this study. Undoubtedly, it would have saved us time and resources, but such music rarely comes with a description of how it was created or how much human input it may contain. So, composing the human soundtrack as an artifact was a decision based on the principle of having as much control as possible over the experiment, highlighting an aspect that distinguishes our study from Nijs et al. (2015) and Moffat and Kelly (2006).

By designing the artifact, we could eliminate variables such as the participants' preferences regarding tempo, rhythm, and dynamics. We believe that controlling these variables gave the study more impartial answers by giving the participants the chance to focus mainly on what they were experiencing while listening to the compositions rather than on their personal preferences in music per se. It can be discussed whether this was the most appropriate procedure, since the AI-computer-composed music was created first, and one could argue that both pieces of music therefore somehow carry "traces" of the AI-computer-composed music. On the other hand, we believe this was the best available solution. One aspect worth mentioning regarding the use of AI composers in music is that AI-computer programs such as AIVA are still in their early days. Some technical issues may still need adjustment, such as the occasional abrupt transition; probably nothing that everyone would perceive or that disrupts the musical experience, but something AIVA could look into.

As discussed earlier in chapter 4, classical music was used for the artifacts in this paper. The main reason the classical music genre was chosen was that this type of music usually has no lyrics and can be perceived as "neutral," which benefited the experiment by letting the discussion focus solely on the music when comparing the two songs. We discussed the possibility of using, for example, a pop song or a rock song, but concluded that there was a risk that a person's opinions about the genre itself might influence the answers. However, it cannot be excluded that the choice of the classical music genre might have affected the result as well.

Another essential aspect observed while analyzing our results is that our study is also in line with the previous studies by Nijs et al. (2015) and Moffat and Kelly (2006) in that their participants identified mostly with the human-composed musical pieces but also seemed to accept the computer-composed pieces quite well. Some of their participants also mistook the computer-composed music for human-composed music.

The interviews in this study were conducted in English, and one of the requirements for taking part in the experiment was being able to use English in the interview. None of the participants were native English speakers; however, the interviews flowed smoothly, except that in one of the interviews a participant could not find the appropriate English word to describe his thoughts and asked to use a Swedish word to help him describe it. Since the interviewer understood Swedish, this was not a problem. Still, it cannot be excluded that the language might be a limitation, depending on the participant's English proficiency and experience of discussing the actual topic in the language.

Initially, the interviews were planned to take place in a specific physical space and to be conducted face-to-face. However, as already explained in chapter 3.1, due to national health recommendations of social distancing related to the ongoing pandemic at the time of data collection, the interviews were conducted online. One concern was whether the online format would yield less developed answers due to the somewhat limited possibilities to acknowledge the participants and engage in the conversation in the same way as during a physical interview. However, this was not experienced as an obstacle, and it might even have been an advantage, since it was easier to recruit participants and conduct the interviews this way. According to Denscombe (2009), there is no difference in honesty between phone interviews and face-to-face interviews. We conducted video interviews, and we believe that the degree of honesty was high.

6.3 Ethics

Many companies are developing and using AI systems, which raises several ethical issues. Balasubramaniam, Kauppinen, Kujala and Hiekkanen (2020) investigated the ethical guidelines defined by companies handling AI systems. The results show that transparency, explainability, fairness, and privacy are critical aspects to consider when developing an AI system. Transparency refers to the openness of the AI system in relation to customers, partners, and stakeholders, while explainability is about being able to explain the workings behind the system. Fairness refers to respect for human rights and ensuring inclusion, while the aspect of privacy concerns the collection, storage, and use of personal data (Balasubramaniam et al., 2020). Since we used the AI system AIVA, we contacted the company to ask about their ethical guidelines. AIVA does not have specific ethical guidelines but a privacy policy, which states how, for example, personal data is stored and used. AIVA thus considers the aspect of privacy, while transparency, explainability, and fairness (and other ethical issues) are not discussed.

6.4 Societal use

AI has existed for several decades and is today used on a large scale in society. The use of AI is not decreasing, and the systems are becoming more advanced. When it comes to music, AI has already impacted the industry all the way from creation to distribution (Sturm, Iglesias, Ben-Tal, Miron, & Gómez, 2019) and will continue to do so. AI systems are used, for example, for music recommendation and in music creation. The need for research in this area is therefore critical. Knowledge about AI should not only be driven by commercial interest but also include a societal perspective. In this study, we chose to focus on the perception of AI, which could be of interest to companies, but which also gives new knowledge about the complexity of music perception and the many layers of music listening beyond the audio experience.

6.5 Future Work

Further studies need to be conducted to generalize the results of this study. A large-scale study with a more heterogeneous population in terms of, for example, gender and age is needed to establish whether the bias exists. This study also indicates that there is more to investigate regarding the ethical aspects and the consumer perspective of AI-computer-composed music, since these came to be central topics in the narratives. The participants frequently mentioned terms such as emotional response, human-emotional responses, and human emotions to justify their answers, so studies focusing on human emotions related to human-composed music and AI-computer-composed music could also bring interesting results to the wider music community and this field of study.


7 References

AIVA (2020). Retrieved from https://www.aiva.ai/ on 2020-03-15.

Aggarwal, C. C. (2018). Neural networks and deep learning. Springer.

Alpern, A. (1995). Techniques for the algorithmic composition of music. Hampshire College. Retrieved from http://alum.hampshire.edu/~adaF92/algocomp/algocomp95.html on 2020-04-18.

Anders, T. (2018). Compositions Created with Constraint Programming. In: A. McLean and R. T. Dean (Eds.), The Oxford Handbook of Algorithmic Music. Oxford University Press, pp. 133–154.

Anders, T., & Inden, B. (2019). Machine learning of symbolic compositional rules with genetic programming: dissonance treatment in Palestrina. PeerJ Computer Science, 5, e244.

Anders, T. (2021). On Modelling Harmony with Constraint Programming for Algorithmic Composition Including a Model of Schoenberg's Theory of Harmony. Retrieved from: https://www.researchgate.net/publication/344714458_On_Modelling_Harmony_with_Con straint_Programming_for_Algorithmic_Composition_Including_a_Model_of_Schoenberg' s_Theory_of_Harmony on 2021/02/16.

Anglade, A., & Dixon, S. (2008). Characterisation of Harmony With Inductive Logic Programming. ISMIR, pp. 63–68.

Assayag, G., Rueda, C., Laurson, M., Agon, C., & Delerue, O. (1999). Computer-Assisted composition at IRCAM: From PatchWork to OpenMusic. Computer Music Journal, 23(3), 59–72.

Ball, P. (2012). Computer science: Algorithmic rapture. Nature, 488(7412), 458.

Bateman, W. (1980). Introduction to computer music. New Jersey: J. Wiley.

Berg, P. (2011). Using the AC Toolbox. Institute of Sonology, Royal Conservatory, The Hague.

Bishop, C. M. (2006). Pattern recognition and machine learning. Springer-Verlag, New York, Inc.

Balasubramaniam, N., Kauppinen, M., Kujala, S., & Hiekkanen, K. (2020). Ethical Guidelines for Solving Ethical Issues and Developing AI Systems. In: M. Morisio, M. Torchiano, & A. Jedlitschka (Eds.), Product-Focused Software Process Improvement. PROFES 2020. Lecture Notes in Computer Science, vol 12562. Springer, Cham. https://doi-org.ezproxy.ub.gu.se/10.1007/978-3-030-64148-1_21

Blitstein, R. (2010). Triumph of the cyborg composer. Pacific Standard, February 22, 2010. Retrieved from: https://psmag.com/social-justice/triumph-of-the-cyborg-composer-8507 on 2020-06-01.

Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. In: Proceedings of the 29th International Conference on Machine Learning (ICML 2012).

Boulanger, R. C. (Ed.). (2000). The Csound Book: Perspectives in Software Synthesis, Sound Design, Signal Processing, and Programming. The MIT Press.

Brynjolfsson, E., & McAfee, A. (2017). The business of artificial intelligence. Harvard Business Review, 1–20.

Burns, K. H. (1994). The History and Development of Algorithms in Music Composition, 1957-1993. Ph.D. thesis, Ball State University, Muncie, Indiana, Ann Arbor, 1994.

Burns, K. H. (1997). Algorithmic composition, a definition. Florida International University. Retrieved from http://music.dartmouth.edu/~wowem/hardware/algorithmdefinition.html on 2020-04-19.

Burns, K. H. (1998). Music by the numbers. Electronic Musician, 14(5).

Chen, S. H., Jakeman, A. J., & Norton, J. P. (2008). Artificial intelligence techniques: An introduction to their use for modeling environmental systems. Mathematics and Computers in Simulation, 78(2–3), 379–400. https://doi.org/10.1016/j.matcom.2008.01.028.

Cheng, J. (2009). Virtual composer makes beautiful music—and stirs controversy. Ars Technica, September 30, 2009. Retrieved from https://arstechnica.com/science/news/2009/09/virtual-composer-makes-beautiful-musicand-stirs-controversy.ars on 2021/03/05.

Coghlan, A. (2012). Computer composer honours Turing's centenary. New Scientist, 215(2872), 7.

Colton, S. (2008). Creativity Versus the Perception of Creativity in Computational Systems. AAAI Spring Symposium: Creative Intelligent Systems.

Cope, D. (1999). One approach to musical intelligence. IEEE Intelligent Systems and their Applications, 14(3), pp.21–25.

Covach, J. (1990). The music and theories of Josef Matthias Hauer. Diss. The University of Michigan.

Covach, J. (1994). The Quest of the Absolute: Schoenberg, Hauer, and the Twelve-Tone Idea. In: J. M. Spencer, (ed.,) Theomusicology, special issue of Black Sacred Music: A Journal of Theomusicology 8/1. Duke University Press, 158-177.

Cubase (Version 9.5) [Computer software]. (2017). Steinberg Media Technologies.

Denscombe, M. (2009). Forskningshandboken: för småskaliga forskningsprojekt inom samhällsvetenskaperna. 2. uppl. Lund: Studentlitteratur.

Doornbusch, P. (2005). The Music of CSIRAC: Australia's First Computer Music. Melbourne, Australia: Common Ground Publishing.

Díaz-Jerez, G. (2011). Composing with Melomics: Delving into the computational world for musical inspiration. Leonardo Music Journal, 21, 13–14.

Dickie, G. (1974). Art and the aesthetic: an institutional analysis. Ithaca: Cornell University Press.

Doty, D. B. (2002). The Just Intonation Primer. An Introduction to the Theory and Practice of Just Intonation. San Francisco, CA: Just Intonation Network.

Douglas, A. (1973). Electronic music production. London: Pitman Publishing.

Eigenfeldt, A. & Pasquier, P. (2010). Real-time generation of harmonic progressions using controlled Markov selection. In: Proceedings of ICCCX- Computational Creativity Conference, 16–25.

Fernández, J. D. & Vico, F. (2013). AI Methods in Algorithmic Composition: A Comprehensive Survey. Journal of Artificial Intelligence Research 48, 513–582.

Frankel-Goldwater, L. (2005). Computers Composing Music: An Artistic Utilization of Hidden Markov Models for Music Composition. Rochester: Department of Computer Science, University of Rochester.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

Grout, D. J., & Palisca, C. V. (1996). A History of Western Music. 5th ed. New York: W. W. Norton & Company.

Hild, H., Feulner, J., & Menzel, W. (1992). HARMONET: A neural net for harmonizing chorales in the style of JS Bach. Advances in Neural Information Processing Systems 4 (NIPS 4). Morgan Kaufmann Publishers, 267–274.

Holme, I. M., Solvang, B. K., & Nilsson, B. (1997). Forskningsmetodik: om kvalitativa och kvantitativa metoder. (2. ed.) Lund: Studentlitteratur.

Hope, C. & Ryan, J. (2014). Digital arts: an introduction to new media. New York: Bloomsbury Academic.

Janiesch, C., Zschech, P., & Heinrich, K. (2021). Machine learning and deep learning. Electronic Markets. Retrieved from: https://www.researchgate.net/publication/350736557_Machine_learning_and_deep_learning on 10/04/2021.

Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255–260. https://doi.org/10.1126/science.aaa8415.

Kaleagasi, B. (2017). A New AI Can Write Music as Well as a Human Composer. Futurism March 9, 2017. Retrieved from: https://futurism.com/a-new-ai-can-write-music-as-well- as-a-human-composer on 2020-05-17.

Levine, M. (1995). The Jazz Theory Book. Sher Music Co.

Loy, G., & Abbott, C. (1985). Programming languages for computer music synthesis, performance, and composition. ACM Computing Surveys, 17(2), 235–265.

Mathews, M. (1969). The technology of computer music. Cambridge, Mass.: MIT Press.

McCartney, J. (2002). Rethinking the computer music language: SuperCollider. Computer Music Journal, 26(4), 61–68.

Minsky, M. L. (1982). Why people think computers cannot. AI Magazine 3(4).

Moffat, D., & Kelly, M. (2006). An investigation into people's bias against computational creativity in music composition. In: Proceedings of the third joint workshop on Computational Creativity (as part of ECAI 2006), Riva del Garda, Italy.

Morales, E., & Morales, R. (1995). Learning Musical Rules. In: Widmer, G (Ed.). Proceedings of the IJCAI-95 International Workshop on Artificial Intelligence and Music, 14th International Joint Conference on Artificial Intelligence (IJCAI-95). Montreal, Canada.

Moss, R. (January 26, 2015). Creative AI: Computer composers are changing how music is made. New Atlas magazine. Retrieved from: https://newatlas.com/creative-artificial- intelligence-computer-algorithmic-music/35764/ on 2021/03/05

Nicholson, C. (2019) A Beginner's Guide to Neural Networks and Deep Learning. Retrieved from: https://pathmind.com/wiki/neural-network# on 2020/05/20.

Nijs, Y., Abeelen, J., Roex, C., Bonné, D., & Hamond, R. (2015). Emotional context of music produced by humans versus computer-composed music. Music and Technology. Paris: La Revue, Musicale. 93-115.

Pasquier, P., Eigenfeldt, A., Bown, O., & Dubnov, S. (2016). An Introduction to Musical Metacreation. Computers in Entertainment, 14(2). DOI: https://doi.org/10.1145/2930672

Persichetti, V. (1961). Twentieth-Century Harmony: Creative Aspects and Practice. W. W. Norton & Company.

Piston, W. (1950). Harmony. Victor Gollancz Ltd.

Puckette, M. (2002). Max at Seventeen. Computer Music Journal, 26(4), 31–43.


Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Hoboken, Pearson.

Sánchez Quintana, C., Moreno Arcas, F., Albarracín Molina, D., Fernández Rodríguez, J. D., & Vico, F. (2013). Melomics: A Case-Study of AI in Spain. AI Magazine, 34(3), 99–103. https://www.aaai.org/ojs/index.php/aimagazine/article/view/2464

Service, T. (2012). Iamus's Hello World! Review. The Guardian July 1, 2012. Retrieved from: https://www.theguardian.com/music/2012/jul/01/iamus-hello-world-review on 2020/04/17.

Simoni, M., & Dannenberg, R. B. (2013). Algorithmic Composition: A Guide to Composing Music with Nyquist. University of Michigan Press.

Scaletti, C. (2002). Computer music languages, Kyma, and the future. Computer Music Journal, 26(4), 69–82.

Schoenberg, A. (1983). Theory of Harmony. University of California Press.

Sturm, B. L. T., Iglesias, M., Ben-Tal, O., Miron, M., & Gómez, E. (2019). Artificial intelligence and music: Open questions of copyright law and engineering praxis. Arts, 8(3) doi:http://dx.doi.org.ezproxy.ub.gu.se/10.3390/arts8030115

The Orchestra [Computer software]. (2017). Best Service.

Thornely, S. (2012). The Impact of Computer Music Technology on Music Production. Retrieved from: https://stevethornely.files.wordpress.com/2012/05/the-impact-of-computer-technology-on-world-music-steven-thornely.pdf on 2019-11-06.

Trandafir, L. (2016). On Creativity, Music and Artificial Intelligence: Meet Eduardo R. Miranda [Blog] August 18, 2016. Retrieved from: https://blog.landr.com/meet-eduardo- miranda/ on 2020-04-16.

Vetenskapsrådet (2011). Good Research Practice. Stockholm: Vetenskapsrådet. https://www.vr.se/download/18.5639980c162791bbfe697882/1555334908942/Good-Research-Practice_VR_2017.pdf

Whittall, A. (2008). The Cambridge Introduction to Serialism. New York: Cambridge University Press.

Williamson, K., Bow, A., & Sturt, C. (2002). Survey research. In: K. Williamson & A. Bow (Eds.), Research methods for students, academics, and professionals: information management and systems. (2. ed.) Wagga Wagga: Centre for Information Studies.

Williamson, K., & Bow, A. (2002). Analysis of quantitative and qualitative data. In: K. Williamson & A. Bow (Eds.), Research methods for students, academics, and professionals: information management and systems. (2. ed.) Wagga Wagga: Centre for Information Studies.

Wishart, T. (2008). Keynote address, International Computer Music Conference (ICMC), Queen's University, Belfast, Northern Ireland, August 27, 2008.

Appendix 1

Contract of consent for participating in the study.

Participation is voluntary, and the participants have the right to withdraw their participation at any time during the experiment without further explanation. We will grant the participants confidentiality, and they will have the chance beforehand to ask for any additional information. No sensitive data is to be collected. The interviews will be audio-recorded and transcribed to facilitate the study of the acquired data.

By signing this contract, you consent to your participation in this study. The data here acquired will be used as a source of research in our bachelor thesis.

Name:

Age:

Sex:

Profession:

Signature: ______ Date: ______
