Computer-Assisted Composition: A Short Historical Review


MUMT 303 New Media Production II · Charalampos Saitis · Winter 2010

Computer-assisted composition is considered among the major musical developments that characterized the twentieth century. The quest for 'new music' started with Erik Satie and the early electronic instruments (Telharmonium, Theremin), explored the use of electricity, moved into magnetic tape recording (Stockhausen, Varèse, Cage), and soon arrived at the computer era. Computers, science, and technology promised new perspectives on sound, music, and composition. In this context computer-assisted composition soon became a creative challenge, if not a necessity. After all, composers were the first artists to make substantive use of computers.

The first traces of computer-assisted composition are found at Bell Labs, in the USA, in the late 1950s. It was Max Mathews, an engineer there, who saw the possibilities of computer music while experimenting on the digital transmission of telephone calls. In 1957 he wrote the first ever computer programme to create sounds, named Music I. Of course, this first attempt had many problems: it was monophonic, for instance, and its notes had no attack or decay. Mathews went on improving the programme, introducing a series of programmes named Music II, Music III, and so on up to Music V. The idea of unit generators that could be put together to form bigger blocks was introduced in Music III.

Meanwhile, Lejaren Hiller was creating the first ever computer-composed musical work: the Illiac Suite for string quartet. This also marked a first attempt at algorithmic composition. A binary code was processed on the Illiac computer at the University of Illinois, producing the very first computer algorithmic composition.
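The unit-generator idea introduced in Music III can be sketched in a few lines of Python. This is an illustrative reconstruction, not Music III code; the names `oscil` and `envelope` are made up for the example. One generator (a sine oscillator) feeds another (an envelope), supplying exactly the attack and decay that Music I lacked.

```python
import math

SR = 8000  # sample rate in Hz (illustrative choice)

def oscil(freq, dur):
    """Sine-oscillator unit generator: returns dur seconds of samples."""
    n = int(SR * dur)
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def envelope(samples, attack=0.1, decay=0.2):
    """Envelope unit generator: linear attack and decay over a block."""
    n = len(samples)
    a, d = int(n * attack), int(n * decay)
    out = []
    for i, s in enumerate(samples):
        if i < a:
            gain = i / a          # ramp up
        elif i >= n - d:
            gain = (n - i) / d    # ramp down
        else:
            gain = 1.0            # sustain
        out.append(s * gain)
    return out

# Chain the two generators into a bigger block:
# a 440 Hz tone, half a second long, with attack and decay.
note = envelope(oscil(440, 0.5))
```

Chaining small generators like this, rather than hard-coding one synthesis routine, is the design idea that survived all the way from Music III to Csound and beyond.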
Later Hiller collaborated with John Cage to create HPSCHD, a programme able to define equal-temperament scales, choose pitches and durations within them, and finally produce sounds. Back at Bell Labs, James Tenney worked on a composing programme, PLF 2, which he used to compose his Four Stochastic Studies in 1962. The software he produced was capable of making compositional decisions, a fundamental concept in computer-assisted and algorithmic composition.

At the same time Music IV was completed. It was written for the IBM 7094, one of the first computers to use transistors. Hubert Howe and Godfrey Winham at Princeton University improved Music IV, calling their new version Music IVB. When IBM introduced its new computers in 1965, new challenges for computer music appeared. Music I-IV had been written in low-level, machine-specific assembly language, so they would not run on other computers. In 1967 the Princeton group presented a version of Music IVB written in FORTRAN, Music 4BF. FORTRAN was a major high-level language at the time. Meanwhile, Max Mathews, Jean-Claude Risset, Richard Moore and Joan Miller were developing the FORTRAN-based Music V at Bell Labs. Music V was the culmination of the previous programming environments. It included advanced software-defined unit generators, which played notes: discrete sounds carrying transient information for the unit generators. The notion of the score, first presented in Music III, was further developed here to include note lists and function tables. Music V marked the end of the first breakthrough of computer music. It was obvious that new ideas had to be followed.

Hiller was not the first to work on algorithmic composition as such. Karlheinz Stockhausen, a pioneer of electronic music, formulated new statistical criteria for composition, focusing on the aleatoric directional tendencies of sound movement. Xenakis' Metastasis for orchestra was premiered in 1955.
Xenakis used stochastic formulas that he worked out by hand to compose the piece. What Hiller showed was the ability of the computer to model such algorithms. In the late 60s Xenakis went on to develop his automated composition programme, the Stochastic Music Programme (SMP). SMP generates a score for conventional instruments using complex stochastic formulas. Meanwhile, Gottfried Michael Koenig was studying computer music programming at the WDR studio in Cologne. In 1971 he went to the Institute of Sonology in Utrecht, where he completed PR1 (Project 1), a programme for algorithmic composition. PR1 generates a score following both deterministic and aleatoric composition techniques. Koenig used the programme for both his electronic and instrumental compositions. Unlike SMP and PR1, Barry Truax made a series of programmes exclusively for direct digital sound synthesis. These were named POD, after POisson Distribution, because the distribution of events in time and frequency in the programmes follows the Poisson distribution. In the years that followed, however, algorithmic composition increasingly became a standard part of more general programming environments.

One could argue that 'new music' has been a three-step adventure, with the steps taken simultaneously and, most importantly, interconnecting and interacting with each other. First came the challenge of new instrumentation beyond the orchestra. Second came the computers, which could either compose or provide the parameters for the composer. The third step was the need for new sounds not realizable with acoustic or analog electronic instruments. Sound synthesis provided the ground for this ultimate step. John Chowning of Stanford University became involved with the work at Bell Labs in the mid-60s.
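The Poisson scattering of events behind Truax's POD, mentioned above, can be sketched very simply: in a Poisson process the intervals between successive events are exponentially distributed, so drawing inter-onset times from an exponential distribution yields Poisson-distributed event counts. This is a minimal illustration of the statistical idea, not POD itself; the density parameter and function name are assumptions of the example.

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def poisson_events(density, duration):
    """Generate event onset times as a Poisson process.

    Inter-onset intervals are drawn from an exponential
    distribution with mean 1/density (events per second),
    which makes the number of events in any window
    Poisson-distributed.
    """
    t, onsets = 0.0, []
    while True:
        t += random.expovariate(density)
        if t >= duration:
            break
        onsets.append(round(t, 3))
    return onsets

# Roughly five events per second, scattered over ten seconds.
events = poisson_events(5.0, 10.0)
```

Mapping such onset lists onto pitch and amplitude choices is one way a programme can "distribute events in time and frequency" stochastically.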
He brought Music IV to Stanford and ran it on a PDP-1 computer at the university's Artificial Intelligence Lab. When the PDP-1 was replaced by the PDP-6, Chowning and David Poole wrote MUS10. Meanwhile, he was working on new techniques for sound synthesis. After several experiments and mathematical confirmation he introduced frequency modulation (FM) as a synthesis technique with a great advantage: "extreme economy", i.e. producing sounds with rich spectra from just two oscillators. It was a milestone in computer music. FM was soon patented and licensed to Yamaha.

During the same period, digital technology as well as psychoacoustics were making rapid advances. This created ground for the cultivation of digital sound synthesis, the new computer music adventure. The objective was the design of digital synthesizers that would take advantage of FM as well as other computer music developments. On the grounds of more systematic research, two poles emerged in the late 70s and became the leading centres of research and experimentation in computer music: CCRMA at Stanford and IRCAM in Paris. Another significant development of that time was speech synthesis, introduced by Charles Dodge. Gerald Bennett and Xavier Rodet at IRCAM worked towards generating a singing voice with a computer. In 1978 they presented CHANT, a programme that used the human vocal tract as a model for synthesis. It soon proved capable of synthesizing non-vocal sounds as well. CHANT also opened up the new area of physics-based synthesis, also referred to as physical modelling, in which a natural sound is synthesized according to the physics of the vibrating structure that produces it. In 1981 the same people developed FORMES, an improved programme based on object-oriented programming. It was used by Jean-Baptiste Barrière to compose Chréode in 1983. During that period, computer music was becoming highly popular.
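The "extreme economy" of FM is easy to see in code: one sine oscillator modulates the phase of another, and sidebands appear at the carrier frequency plus and minus integer multiples of the modulator frequency, with more of them as the modulation index grows. The sketch below illustrates the basic technique; the specific frequencies and index are arbitrary example values, not Chowning's.

```python
import math

SR = 44100  # sample rate in Hz

def fm(carrier, mod, index, dur):
    """Simple two-oscillator FM pair.

    The modulating sine varies the phase of the carrier sine;
    the modulation index controls how much energy spreads into
    the sidebands at carrier +/- k * mod.
    """
    n = int(SR * dur)
    out = []
    for i in range(n):
        t = i / SR
        out.append(math.sin(2 * math.pi * carrier * t
                            + index * math.sin(2 * math.pi * mod * t)))
    return out

# A non-integer carrier:modulator ratio (200:280 Hz) gives
# inharmonic sidebands, i.e. a bell-like spectrum,
# from just two sine oscillators.
tone = fm(200.0, 280.0, 5.0, 1.0)
```

Compare this with additive synthesis, where every spectral component would need its own oscillator: that contrast is the economy Chowning was pointing at.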
More research centres were formed, and composer-researchers, or researcher-composers, were exploring all the possibilities of using the computer as a musical instrument and/or a compositional tool. However, access to the established centres was limited, and moreover expensive if there was no funding. As a result, the need for a compositional workstation for the individual composer grew greater and greater. In the early 80s a group of composers living in York (UK) was formed. The group was called Interface, and its first members were Trevor Wishart, Richard Orton and Tom Endrich. By 1986 more composers had joined the group. The group's vision was to build a compositional environment that would be affordable and accessible to the individual composer. They started working on the Atari ST computer; what attracted them to it were its affordability, its 16-bit technology, and its built-in MIDI interface. With the software contribution of Martin Atkins and the hardware contribution of Dave Malham, the Composers' Desktop Project (CDP) was created. Around the same time, Barry Vercoe had created Csound, a unit-generator-based software synthesis language. Csound was an attempt to adapt the ideas of Music I-V to personal computers. It remained a non-realtime environment until 1989, when it was turned into a real-time control language. Before David Zicarelli became involved with Max/MSP, he contributed to the design of Intelligent Music's M in 1987. M was a graphical algorithmic environment that allowed the composer to manipulate sounds recorded through a MIDI keyboard. However, as digital signal processing was becoming a standard in computer music, more sophisticated programmes were needed to include DSP techniques. Several years earlier, in 1981, Pierre Boulez had presented Répons, a composition for a twenty-four-piece orchestra and six soloists.
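Csound kept Music V's split between an orchestra (the unit-generator definitions) and a score (a note list), so a separate program can generate the score algorithmically. The sketch below emits a Csound-style note list from Python; the instrument number and parameter layout follow the conventional Csound `i`-statement form (instrument, start, duration, then instrument-specific fields), but the particular fields, pitch set, and helper name are assumptions of the example, and the instrument itself would have to be defined elsewhere.

```python
import random

random.seed(0)  # repeatable example output

def make_score(n_notes, total_dur):
    """Emit a Music V / Csound-style note list, one 'i' statement
    per note: instrument 1, start time, duration, amplitude,
    frequency. Starts and durations are chosen at random."""
    lines = []
    for _ in range(n_notes):
        start = round(random.uniform(0, total_dur), 2)
        dur = round(random.uniform(0.1, 1.0), 2)
        freq = random.choice([220, 247, 262, 294, 330, 392, 440])
        lines.append(f"i1 {start} {dur} 0.5 {freq}")
    # Csound does not require sorted scores, but sorting by start
    # time keeps the list readable.
    return "\n".join(sorted(lines, key=lambda s: float(s.split()[1])))

print(make_score(4, 10.0))
```

Generating the score with one program and rendering it with another is exactly the division of labour that let algorithmic composition become, as noted above, a standard part of general environments.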
The novelty was that each soloist was independently connected to the 4X, where the sound was processed and routed to different loudspeakers around the concert hall. The 4X, built by Giuseppe Di Giugno at IRCAM, is regarded as the first digital signal processor ever made. It was quite successful and was used by composers such as Luciano Berio, Pierre Henry, and Jean-Baptiste Barrière. In 1985 Miller Puckette went to IRCAM and started programming software for the 4X. While working with the composer Philippe Manoury on the latter's Jupiter, they faced timing issues with the existing software. They were interested in having the performer trigger electronic sounds. The main challenge was to time musical events independently of one another.