Modular Synthesis to Digital Artifacts, Current Practices in Computer Music
Jamie Pawloski
ICAM110 Computing in the Arts: Current Practice
Lisa Naomi Spellman
June 10, 2013

Technology improvements and cheaper computer components have allowed the synthesizers of today to become smaller and more complex. Engineers Bob Moog and Don Buchla, along with artists Aphex Twin, Tristan Shone, Amy Alexander, and Moldover, have led music into interesting territories through the technology they introduced into music culture. Acoustic instruments morphed into analog computing machines, creating new tones not possible by natural means. After the introduction of the computer into music, analog components were no longer a necessity. Again, new tones surfaced from pushing the boundaries of audio waveforms1 within computer programs. Within the past fifteen years, new ways to manipulate sound with computer programs became accessible to the non-programmer. From this level of control, a new genre of music known as Glitch Culture came to be. These artists needed new instruments to harness their abilities to manipulate sound, leading to an explosion of handmade controllers and starting a practice known as controllerism. In current practice, the ability to build a synthesizer or controller from scratch has become accessible thanks to the ingenuity of the forefathers of computer instruments and to the imaginative ideas of today's artists, who developed genres such as Glitch Culture and controllerism.

An electrical engineer, Bob Moog, fueled artists' fire by creating the first affordable musical synthesizer (Balora 2009). He was a pioneer who provided a musical approach to interacting with sound generators, at a price that professional musicians could afford.
In the early 1970s, musical acts could often be spotted using these enormous synthesizers, wires sprawled across their faceplates and mimicking the confusing array of connections on a telephone operator's switchboard. These giant switchboards were known as modular synthesizers2 and allowed the user to hook up an almost infinite array of options to sculpt sound to their liking. Originally, computerized sound generators like the MUSIC IV system at Princeton University's sound lab were designed as educational tools to mimic the tonal qualities of acoustic instruments like violins or cellos. They could also imitate birdcalls or the sound of wind (Licklider 1991). However, these machines could fill an entire room and were not practical for composing music at home. In 1964, Bob Moog set out to fix this problem by experimenting with circuit boards in a modular fashion. Each module was separate from the others, so that when they were connected via patch cables, voltages flowed from one module to the next just as they would through a single circuit board. This groundbreaking control of sound allowed the composer to use a technique known as subtractive synthesis3. With this technique, a simple waveform could be manipulated into a complex one, mimicking acoustic instruments and even creating tones that sounded unnatural and new (Crowell 2012). Though not exactly cheap, Moog's was the first electronic instrument customizable by the user, opening the door for future generations to design their own signal paths for sound.

1 A graphical representation of the amplitude (loudness) of a sound over a period of time.
2 The modular synthesizer is a type of synthesizer consisting of separate specialized modules. The modules are not hardwired together but are connected with patch cords to create a patch (Henry 1987).
3 "The subtractive approach to synthesis assumes that an acoustic instrument can be approximated with a simple oscillator—which produces waveforms with different frequency spectrums. The signal is sent from the oscillator to a filter that represents the frequency-dependent losses and resonances in the body of the instrument. The filtered (or unfiltered) signal is shaped over time by the amplifier section of the synthesizer. The distinctive timbre, intonation, and volume characteristics of a real instrument can theoretically be recreated by combining these components in a way that resembles the natural behavior of the instrument you are trying to emulate." http://documentation.apple.com/en/logicexpress/instruments/index.html#chapter=A%26section=3%26tasks=true

Figure 1: Moog Modular 55 Synthesizer

While Bob Moog was busy creating different marketable models, another electrical engineer, Don Buchla, was building his own modules to automate musical patterns with modular synthesizers. Buchla's history is linked to modular synthesis separately from Moog's, partly because of his distaste for the clavier keyboard (the standard western-scale piano). To escape the need to write music within the confines of western musical scales, Buchla created the Model 223 Tactile Input Port. Not only did this instrument force the user to think differently about what music is supposed to be, but also about how one is to interact with sound. The Tactile Input Port was not laid out linearly, like most instruments: its 27 keys are arranged in three rows, allowing the user to play in two dimensions, and the keys are also pressure and velocity sensitive, allowing for a quasi third dimension (Balora 2009). With Moog's modular synthesizer and Buchla's control of modular systems, these two engineers began expanding the minds of sound designers. Musicians were no longer confined to hardware designed to do one thing; they could buy electronic modules and build their own music-generating systems.
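The oscillator-filter-amplifier chain that subtractive synthesis describes can be sketched in a few lines of Python. This is an illustrative toy, not a description of Moog's hardware: the sample rate, the one-pole low-pass filter, and the linear attack/release envelope are all simplifying assumptions chosen for brevity.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (an assumed CD-quality rate)

def sawtooth(freq, duration):
    """Oscillator: a bright sawtooth wave, rich in harmonics."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    return 2.0 * (t * freq % 1.0) - 1.0

def lowpass(signal, cutoff):
    """Filter: a simple one-pole low-pass that strips high harmonics."""
    rc = 1.0 / (2 * np.pi * cutoff)   # RC constant from the cutoff frequency
    alpha = (1.0 / SAMPLE_RATE) / (rc + 1.0 / SAMPLE_RATE)
    out = np.zeros_like(signal)
    acc = 0.0
    for i, x in enumerate(signal):
        acc += alpha * (x - acc)       # smooth toward the input sample
        out[i] = acc
    return out

def envelope(signal, attack, release):
    """Amplifier: shape loudness over time with a linear attack and release."""
    n = len(signal)
    env = np.ones(n)
    a, r = int(attack * SAMPLE_RATE), int(release * SAMPLE_RATE)
    env[:a] = np.linspace(0.0, 1.0, a)
    env[n - r:] = np.linspace(1.0, 0.0, r)
    return signal * env

# Patch the modules together: oscillator -> filter -> amplifier.
note = envelope(lowpass(sawtooth(220.0, 1.0), cutoff=800.0),
                attack=0.05, release=0.3)
```

Each function plays the role of one module, and the final line is the "patch cable" routing: exactly the oscillator-to-filter-to-amplifier signal path the footnote describes.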
Buchla's success was not heralded in the same light as Moog's at the time they were releasing their music machines. It was not until a few decades later that Buchla's impact on synthesizer building became fully apparent.

Figure 2: Buchla Tactile Input Port

The home computer brought new digital machines that could take sound a little further, producing waveforms not possible by analog means. However, even though the sound designer now had more control over acoustic parameters, a learning curve split the programmer from the user. A new design was desperately needed, and like Moog and Buchla, artists went back to the drawing board to create something that worked specifically for their expressive needs. During the 1990s, the electronic musician Richard D. James (Aphex Twin) began experimenting with a visual programming language called MAX/MSP. The program was designed specifically for sound designers, skipping the tedious process of low-level programming and letting the user focus on sound production. James originally began with commercial hardware, but found himself rewiring and modifying the units to produce mangled, far-from-normal sounds (Rule 1997). Widely inspirational to other artists bored with their commercially designed music equipment, he demonstrated that programming was no longer the preserve of engineers and computer scientists. Apart from his programming skills, James was also at the forefront of a new genre, present in both the art and music worlds, labeled Glitch Culture. In the musical realm of glitch, the term came from experimenting with the digital artifacts created from the computerized information stored within a sound file, and then exploiting their boundaries (Cascone 2000). No longer at the mercy of pre-programmed digital synths, James could program sound to do whatever he pleased: cut, paste, stretch, freeze, magnify, and bit crunch (Rule 1997).
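One of the glitch manipulations mentioned above, bit crushing, is easy to illustrate in code. The sketch below (assuming NumPy; the bit depth and downsampling factor are arbitrary choices) degrades a clean sine tone by quantizing its amplitude to a handful of levels and holding samples to fake a lower sample rate, producing the kind of digital artifact glitch artists deliberately exploit.

```python
import numpy as np

def bitcrush(signal, bits=4, downsample=8):
    """Degrade a signal the glitch way: fewer bits, lower sample rate."""
    # Quantize amplitude to a small number of discrete levels.
    steps = 2 ** bits / 2 - 1
    crushed = np.round(signal * steps) / steps
    # Hold every Nth sample to simulate a reduced sample rate.
    crushed = np.repeat(crushed[::downsample], downsample)[: len(signal)]
    return crushed

# A clean 440 Hz sine tone, then its deliberately mangled counterpart.
t = np.arange(44100) / 44100.0
clean = np.sin(2 * np.pi * 440.0 * t)
glitched = bitcrush(clean, bits=3, downsample=16)
```

At 3 bits the tone collapses onto just seven amplitude levels, so the smooth sine becomes an audibly buzzy staircase; this is the sort of exploitation of a sound file's stored digital information that Cascone describes.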
A new form of modularity came from this type of programming as well. Files could be interchanged between users, re-used, and re-routed for later projects; programming was becoming the new composing. In an issue of MIT's Computer Music Journal, artist and writer Kim Cascone argues that the music itself is becoming less of the message, and that the programs and instruments designed by programmers are now the vein of expression: "The tools themselves have become the instruments, and the resulting sound is born of their use in ways unintended by their designers" (Cascone 2000). With a few months spent learning the programming languages designed for sound creators, making a synth from scratch has become extremely accessible.

Programming alone did not satisfy these creatives, and using a mouse or keyboard to play their pieces live felt systematic and restrained. The natural next step was to build a controller tailored to the program being performed, birthing a new trend labeled controllerism. At first, controllerism mainly meant artists cutting up previously recorded samples and mixing them as a DJ would. However, sound programming is not confined to electronic music, and musicians singing or playing guitars wanted control over their audio signal paths as well. Moldover, one such musician, has been nicknamed the godfather of controllerism. He often states that he has little knowledge of electrical engineering, yet he still constructs controllers arrayed with arcade buttons, faders, knobs, and switches that activate parameters of his preprogrammed music (Morse 2012). This level of control over his songs allows him to improvise over his music on the fly and change how a song switches to another bar or loops.

Figure 3: Moldover Mojo Controller

Harkening back to Buchla's Tactile Input Port, artists such as Tristan Shone take the idea of laying out an instrument's controls in non-traditional forms, allowing for different modes of expression.
Shone was tired of the tradition that comes with writing metal music on conventional instruments like the guitar and was searching for different ways to express his taste in music. After throwing various ideas around, he came up with tactile controllers for computer programs that trigger drums, but realized they fell short of the impact his "Doom Metal" genre should display. Where Shone differs from other artists creating synthesizers from scratch is his background in mechanical engineering, which enables him to fabricate controllers that not only visually mimic his music but respond to his motions as well (Deal 2012).
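The controllerist idea running through Moldover's and Shone's work, physical buttons and faders mapped onto parameters of preprogrammed music, can be sketched as a simple event dispatcher. Everything below (control names, parameter names, value ranges) is invented for illustration and does not describe either artist's actual software.

```python
# Hypothetical patch state for a live set; names and ranges are illustrative.
PATCH = {"filter_cutoff": 800.0, "loop_length_bars": 4, "sample_bank": 0}

def set_cutoff(value):
    """A fader position (0.0-1.0) sweeps the filter cutoff in hertz."""
    PATCH["filter_cutoff"] = 200.0 + value * 8000.0

def toggle_loop(value):
    """An arcade button switches between a 4-bar and an 8-bar loop."""
    PATCH["loop_length_bars"] = 8 if value else 4

def next_bank(value):
    """Another button cycles through sample banks on each press."""
    if value:
        PATCH["sample_bank"] = (PATCH["sample_bank"] + 1) % 4

# The controller mapping: each physical control triggers one action.
CONTROL_MAP = {"fader_1": set_cutoff, "button_a": toggle_loop, "button_b": next_bank}

def handle_event(control, value):
    """Dispatch one incoming control event from the hardware."""
    action = CONTROL_MAP.get(control)
    if action:
        action(value)

# Simulate a performer pushing a fader halfway up and pressing a button.
handle_event("fader_1", 0.5)
handle_event("button_a", 1)
```

The point of the sketch is the separation it shows: the musical behavior lives in the program, while the controller merely emits events, which is why performers with little electrical engineering background, as Moldover describes himself, can still build expressive hardware.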