Rendering Timbre Variance and Time Deviance in MIDI-Controlled Contemporary Drum Performances
2005:017 C-UPPSATS

KAHL HELLMER
Luleå University of Technology, School of Music • Audio Technology
2004:017 • ISSN: 1402-1773 • ISRN: LTU-CUPP--04/17--SE

Division of Sound Recording
School of Music in Piteå
December 6, 2004

Abstract

This essay describes the development of a drum machine that adds intentional, random timing errors using the statistical method of normal frequency distribution. The drum samples triggered are randomized from a multi-sampled drum kit, thus forcing a variation in timbre. It is not the aim of this thesis to emulate human contemporary percussion performance, neither in general nor that of any specific percussionist, but to circumvent the dullness of ordinary drum machines. The drum machine was developed in C++ as a set of VST plug-ins using the Steinberg VST Software Development Kit. It consists of seven different plug-ins that can be used either independently or as a group in any VST host application.

Contents

1 Introduction
1.1 Background
1.2 What is a VST Plug-in?
1.3 The VST SDK
1.4 Previous research
1.5 Aim
1.6 Limitations
2 Sound Sample Recording
3 Main Structure of the Plug-ins
3.1 Time Deviation Function
3.2 Velocity Categories
3.3 Timbre Variation Function (Sample randomization)
3.4 Sequence of Events and Functions
4 Results
5 Conclusion
5.1 Usage
5.2 Glitch origin
5.3 Future Research
5.4 Reflections and Final Remarks
6 References
7 Appendices

List of Abbreviations

DLL: Dynamic Link Library
DSP: Digital Signal Processing
GUI: Graphical User Interface
HDD: Hard Disk Drive
MIDI: Musical Instrument Digital Interface
PCM: Pulse-Code Modulation
RAM: Random Access Memory
SDK: Software Development Kit
VST: Virtual Studio Technology
VSTi: Virtual Studio Technology Instrument

Acknowledgements

The author wishes to thank the following, in alphabetical order: Jonas Ekeroot, Stefan Hellmer, Henri Hurtig, Tomas Nilsén, Robert Stjärnström and Stefan Sundström.

1 Introduction

1.1 Background

The computer technology of today enables musicians to easily record their own music in their own homes. What was once limited to professional recording studios can, to some extent, be achieved using a desktop computer equipped with an affordable soundcard and one of many software sequencers. The home computer of today delivers enough DSP power to suit the needs of an amateur mixing engineer, providing a modest yet competitive alternative to the professional recording studio. Even for professional purposes, the recording of various musical instruments can be done at home. There are several approaches to recording the electric guitar and bass today: both software and hardware amplifier simulators exist, where the engineer can simply double-click or spin a jog wheel to choose the amplifier, cabinet type, microphone and its placement. Although some prefer to use a "real" amplifier, recording these instruments is still not a major problem.
Recording the vocal track can also be done at home with very reasonable results, provided that the recording room is acoustically treated in some way, by means of damping and such, and, for obvious reasons, is not located in the same space as the control room. The real problem that the amateur recording engineer faces is recording the drum track. The drums are the only instrument in contemporary music that requires a fairly large amount of equipment: first of all, a pair of high-end condenser microphones over the cymbals and dynamic microphones for the toms, snare and kick drum; then a mixer console for phantom power and level control; and, most important of all, a decent set of drums. The second problem is where to record the tracks. As we all know, drums are violently loud, and if played inside the living room of an apartment they will cause the eviction of the recording engineer.

The majority of the commercial drum machines sold as plug-ins today are limited in terms of humanization. Most have a shuffle function that adds a swing to the beat, and some have a humanize function in the sense of adding intentional, random errors to every note onset in the time domain. Sample randomization is also an issue. When drum kits are multi-sampled today, the objective is usually to capture each snare hit through a variety of microphone types and placements, instead of capturing several similar hits on the same snare drum through identical microphones and placements. What should also be added to a drum machine is the ability to reconstruct the feeling of a room. When recording drums, most recording engineers place an additional microphone (or two, in an A/B or X/Y fashion) somewhere inside the room; sometimes it can even be placed outside the room for a clearer effect. This is done for one purpose only: to record how the whole drum set sounds when colored by the room in which it is recorded.
Since the sound can sometimes be too "dry" when only near-field microphone placements are used, the possibility of adding the room microphone to the mix can be a much better, but most of all more fun, alternative to what is usually added in the context of low-budget recording: a digital reverb. This room microphone will henceforth be referred to as the ambience or ambience microphone.

1.2 What is a VST Plug-in?

A VST plug-in is a software application, but not a stand-alone program; it is used "inside" a host application. It can loosely be described as the digital equivalent of the analogue domain's hardware signal processing or analyzing devices. VST plug-ins are developed both by companies with commercial interests and by amateur programmers, who mostly do it for the fun of sharing their work with fellow mixing engineers. On the Internet there are great numbers of plug-ins free for download by anyone. A plug-in can be made to accomplish almost any task at all, except for making coffee; the sky is the limit. The VST specification has been developed by Steinberg Media Technologies GmbH, and the following quotes are taken from the VST 2.0 specification:

"In the widest possible sense a VST Plug-In is an audio process. A VST Plug-In is not an application. It needs a host application that handles the audio streams and makes use of the process the VST Plug-In supplies […] Generally speaking, it can take a stream of audio data, apply a process to the audio and send the result back the host application […] From the host application's point of view, a VST Plug-In is a black box with an arbitrary number of inputs, outputs (Midi or Audio), and associated parameters. The Host needs no knowledge of the Plug-In process to be able to use it […]" [1].

This specification is supported by the vast majority of audio sequencers and editors focused on the aesthetic approach to audio mixing and editing.
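The quoted black-box model can be illustrated with a toy example. This is a plain C++ sketch, not actual SDK code; the class and method names are made up, although they loosely mirror the SDK's parameter/process pattern. The host owns the audio buffers and the parameter values, and needs no knowledge of what happens inside the process.

```cpp
#include <cstddef>
#include <vector>

// A toy stand-in for the black box the specification describes: the host
// hands the plug-in audio buffers and a normalized [0, 1] parameter value;
// the plug-in only applies its process. (Hypothetical names, not the SDK.)
class ToyGainPlugin {
    float gain = 1.0f;
public:
    // Host -> plug-in: e.g. called whenever the user moves a GUI fader.
    void setParameter(float value) { gain = value; }
    float getParameter() const { return gain; }

    // The host calls this once per audio block with input and output
    // buffers; the plug-in never touches files, devices or the GUI here.
    void process(const std::vector<float>& in, std::vector<float>& out) {
        out.resize(in.size());
        for (std::size_t i = 0; i < in.size(); ++i)
            out[i] = in[i] * gain;
    }
};
```

However simple, this already shows why the host needs no knowledge of the process: it only ever sees buffers going in, buffers coming out, and a set of associated parameters.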
This makes VST plug-ins very versatile: an audio effect plug-in or a synthesizer plug-in, once developed, is not limited to just one host application or, for that matter, one single operating system. This means that if someone develops a reverberation plug-in, as an example, it is compatible with all host applications that support the VST specification, no matter what platform they use.

1.3 The VST SDK

The Virtual Studio Technology Plug-in Specification 2.0 Software Development Kit [1] (VST SDK) can be downloaded free of charge from the Steinberg website. It consists of sample C++ code, a few simple example projects that can be compiled into ready-to-use VST plug-ins, and an empty starter project that compiles into a "do-nothing" plug-in. With the help of the SDK it is much easier to get started with the programming, since you can peek at the example code and see how different basic tasks (e.g. how a variable can be controlled by a fader in the GUI) are handled efficiently.

1.4 Previous research

The research conducted in the field of rendering humanization to computer performances is, in the realm of academe, referred to as Expressive Musical Performance Systems. The three systems that the author has encountered are Director Musices [2], Widmer's system [3] and Osamu Ishikawa's system [4]. In general, the latter two systems extract performance rules by comparing the notated score of a musical piece with a musician's performance of the same piece. Such a system is able to extract expressive rules by looking at deviations in the domains of time, amplitude and duration, both on a note-to-note level and overall, and to apply these to unseen scores. Widmer uses a method for extracting performance rules called Analysis by Machine Induction, which basically is an inductive learning algorithm that analyses human musical performances and tries to develop performance rules of its own.
Ishikawa's approach to the problem is the statistical method Multiple Regression Analysis, which measures how different variables correlate with each other. Director Musices differs vastly from these two systems. Instead of being armed to the teeth with complex mathematical and statistical algorithms that extract the performance rules, it comes equipped with pre-written ones.
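In contrast to these rule-extraction systems, the approach summarized in the abstract combines normally distributed onset deviations with randomized multi-samples. The two ideas can be sketched in a few lines of C++; this is an illustrative sketch under assumed parameter values (names, sigma and sample rate are made up for the example), not the plug-in code itself.

```cpp
#include <random>

// Illustrative sketch of the two humanizing ideas (not the plug-in code):
//  1) offset each note onset by a normally distributed timing deviation;
//  2) pick one of several recordings of the same hit at random, forcing a
//     variation in timbre instead of always replaying an identical sample.
// Time units are audio samples at an assumed 44.1 kHz sample rate.
struct Humanizer {
    std::mt19937 rng;
    std::normal_distribution<double> deviation;  // mean 0, sigma in samples
    std::uniform_int_distribution<int> pick;     // which multi-sample layer

    Humanizer(unsigned seed, double sigmaSamples, int layers)
        : rng(seed), deviation(0.0, sigmaSamples), pick(0, layers - 1) {}

    // Nominal onset (in samples) plus a random error, clamped so a hit
    // can never be scheduled before the start of the stream.
    long deviate(long nominalOnset) {
        long onset = nominalOnset + static_cast<long>(deviation(rng));
        return onset < 0 ? 0 : onset;
    }

    // Index of the recorded hit to trigger, instead of always layer 0.
    int chooseLayer() { return pick(rng); }
};
```

With sigmaSamples = 220 (about 5 ms at 44.1 kHz), roughly 99.7% of onsets land within 15 ms of the grid, since a normal distribution keeps nearly all values within three standard deviations of the mean.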