Implementing an M-Fold Wah-Wah Filter in Matlab


“What if we had, not one Wah Wah Filter, not two, but 20?”: Implementing an M-Fold Wah-Wah Filter in Matlab

Digital Audio Systems, DESC9115, 2020
Master of Interaction Design & Electronic Arts (Audio and Acoustics)
Sydney School of Architecture, Design and Planning, The University of Sydney

ABSTRACT

An M-fold Wah-Wah filter is an effect in which multiple Wah-Wah filters are applied to a signal, each over its own frequency range. This report describes the implementation of such a filter in Matlab. Starting from pre-existing code for a single state-variable bandpass filter, multiple bandpass filters are implemented across a defined frequency spectrum. The filter is adjustable through a number of variables: the number of bandpass filters (M), the damping factor of each filter, the frequency spectrum over which the filters are applied, and the Wah frequency, i.e. the rate of cycling through each bandpass.

1. INTRODUCTION

The Wah-Wah filter is commonly used by guitarists to alter the shape and tone of the notes they are playing. The effect resembles the combination of ‘u’ and ‘ah’ sounds produced by the human voice, with the mouth’s shape going from a small O to a big O; the centre frequencies involved are called “formants”. The Wah-Wah pedal works in a similar manner: shifting the formants creates a “wah” sound. The Wah-Wah filter is a time-varying delay line filter. Each filter has a set of unique characteristics, such as the range of frequencies the effect is applied to and its Wah frequency, i.e. how fast the wah sound shifts. Advanced guitarists often tamper with these characteristics to create a Wah effect they prefer. Recently, advanced guitarists have experimented with two Wah pedals to achieve the effect at different ranges (Chapman, 2019) (Figure 1). What if guitarists were to use 5, 20, or even 100 Wah-Wah filters simultaneously?
In analogue terms this would be difficult: one Wah-Wah pedal is roughly the size of a foot, and using two already demands considerable floor space. In digital terms it is achievable, and is described by Zölzer (2011) as an M-fold Wah-Wah filter, in which multiple filters are combined.

{Figure 1 – Guitarists experimenting with two analogue Wah-Wah pedals}

1.1 Details of the Filter

As mentioned, the Wah-Wah filter is a time-varying delay line filter, i.e. implemented in the time domain. Zölzer (2011) describes three types of Wah-Wah filters:

• a basic Wah-Wah, of which “variations can be derived by modifying its control”; this is the most popular form of the effect, where the control is a foot pedal;
• an automatic Wah-Wah, where the control is periodic, driven by a low-frequency oscillator (LFO);
• a sensitive Wah-Wah, where the control is adaptive, “based on the attack of each note”.

For this assignment, I will focus on the automatic Wah-Wah, with the LFO being a triangular wave that controls the centre frequencies.

The Wah-Wah effect is implemented using a bandpass filter with a small bandwidth and a variable centre frequency. Loosely speaking, a bandpass filter can be built by combining a lowpass filter with a highpass filter (Chamberlain, 1980). The response of this type of filter is presented in Figure 2. Chamberlain (1980) describes that by making the bandwidth very small, “only one harmonic will be emphasized by the narrow peak, and others will be greatly attenuated”, and that varying the centre frequency creates a distinct wah effect “as each harmonic is approached and passed by”.

{Figure 2 – Frequency response of a bandpass filter}

Diagrammatically, this can be shown by mixing a signal that is passed through a bandpass filter with the unfiltered signal (Figure 3). Zölzer (2011) describes that “replacing this unit delay in the bandpass filter by an M tap delay leads to the M-fold Wah-Wah filter” (Figure 4).
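Chamberlain's observation can be checked numerically. The sketch below is a hedged illustration in Python (the report's own implementation is in Matlab): it passes a signal containing two sinusoids through a narrow fixed-centre state-variable bandpass of the kind used later in Section 2.1. All parameter values here are chosen for the example, not taken from the report.

```python
import math

def narrow_bandpass(x, fs, fc, damp):
    # fixed-centre state-variable bandpass; a small damp gives a narrow band
    F1 = 2 * math.sin(math.pi * fc / fs)
    Q1 = 2 * damp
    yb = yl = 0.0
    out = []
    for s in x:
        yh = s - yl - Q1 * yb
        yb = F1 * yh + yb
        yl = F1 * yb + yl
        out.append(yb)
    return out

def level_at(y, fs, f):
    # crude single-bin DFT magnitude, used to compare harmonic levels
    n = len(y)
    re = sum(v * math.cos(2 * math.pi * f * i / fs) for i, v in enumerate(y))
    im = sum(v * math.sin(2 * math.pi * f * i / fs) for i, v in enumerate(y))
    return math.hypot(re, im) / n

# two equal-amplitude harmonics at 200 Hz and 1000 Hz (example values)
fs = 8000
x = [math.sin(2 * math.pi * 200 * n / fs) + math.sin(2 * math.pi * 1000 * n / fs)
     for n in range(8000)]
y = narrow_bandpass(x, fs, fc=1000, damp=0.02)  # centre the band on 1 kHz
```

With the band centred on 1 kHz, the 1 kHz component dominates the output while the 200 Hz component is greatly attenuated, matching the quoted behaviour.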
The M-fold Wah-Wah filter sweeps the entire frequency range, simultaneously modulating the centre frequency of each bandpass filter just as a single Wah-Wah filter does.

{Figure 3 – Wah-Wah filter with a single time-varying bandpass filter}

{Figure 4 – M-fold Wah-Wah filter}

Figure 5 presents the spectrogram of a white noise signal passed through an M-fold Wah-Wah; it illustrates how the response is periodic.

{Figure 5 – Spectrogram of a white noise signal passed through an M-fold Wah-Wah filter}

Zölzer (2011) mentions that M-fold Wah-Wah filters typically contain between 5 and 20 bandpass filters, each requiring a low damping factor (and hence a narrow, high-Q band). A bell effect may be achieved by implementing 100 bandpass filters. These parameter settings are presented in Table 1, where Δf is the difference between the cut-off frequencies of adjacent bandpass filters.

{Table 1 – Parameter settings for the M-fold Wah-Wah filter}

2. LAB WORK

2.1 Bandpass Filter Function

The first step in creating the M-fold Wah-Wah filter in Matlab was to build an auto Wah-Wah filter, more specifically a variable bandpass filter whose centre frequency is controlled by an LFO. A code base published online by Marshall (n.d.) provided the basis for this implementation. The code reads a sound file (x) and sets the base parameters of the filter: the damping factor (damp), the cut-off frequencies of the single bandpass filter (minf and maxf), and the wah frequency (Fw) of the overall filter.

damp = 0.01;
minf = 500;
maxf = 3000;
Fw = 2000;

With these parameters set, the code calculates Δf (delta) by dividing Fw by the sampling frequency (fs) and creates a triangle wave of centre frequencies (Fc). This is how the auto-wah effect is implemented: Fc sweeps up and down the band. The triangular wave is trimmed to fit the size of the input.
% change in centre frequency per sample (Hz)
delta = Fw/fs;

% create triangle wave of centre frequency values
Fc = minf:delta:maxf;
while (length(Fc) < length(x))
    Fc = [Fc (maxf:-delta:minf)];
    Fc = [Fc (minf:delta:maxf)];
end
% trim triangle wave to size of input
Fc = Fc(1:length(x));

The bandpass filter itself is implemented using the difference equation described in Chapter 2 of DAFX: Digital Audio Effects (Zölzer, 2011). Its coefficient must be recalculated each time Fc changes. A Q value is determined from the input parameter damp, and vectors of zeros are created for the highpass (yh), bandpass (yb) and lowpass (yl) outputs of the difference equations. The first sample of each output is set explicitly to avoid referencing negative indices.

% difference equation coefficients
% must be recalculated each time Fc changes
F1 = 2*sin((pi*Fc(1))/fs);
% this dictates the size of the pass bands
Q1 = 2*damp;

% create empty output vectors
yh = zeros(size(x));
yb = zeros(size(x));
yl = zeros(size(x));

% first sample, to avoid referencing negative indices
yh(1) = x(1);
yb(1) = F1*yh(1);
yl(1) = F1*yb(1);

The difference equation is then applied across the whole input (x) to obtain the outputs yh, yb and yl, with F1 recalculated on each loop iteration as Fc changes.

% apply difference equation to each sample
for n = 2:length(x)
    yh(n) = x(n) - yl(n-1) - Q1*yb(n-1);
    yb(n) = F1*yh(n) + yb(n-1);
    yl(n) = F1*yb(n) + yl(n-1);
    F1 = 2*sin((pi*Fc(n))/fs);
end

Finally, the bandpass output (yb) is normalised to give the final wah-wah effect.

% normalise
maxyb = max(abs(yb));
yb = yb./maxyb;

The above code implements a single bandpass filter. Aside from the parameter settings, it was wrapped into a bandpass filter function, which is then used to implement multiple bandpass filters.
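For illustration, the whole single-band process above (triangle sweep, state-variable difference equations, normalisation) can be sketched as one function. This is a hedged Python translation, not the report's Matlab; the function name and default parameter values are chosen for the example.

```python
import math

def bandpass_wah(x, fs, damp=0.01, minf=500, maxf=3000, Fw=2000):
    """Single auto-wah bandpass: triangular centre-frequency sweep plus
    state-variable filter, returning the normalised bandpass output."""
    # triangle wave of centre frequencies, one value per sample
    delta = Fw / fs                       # change in centre freq per sample (Hz)
    up = [minf + i * delta for i in range(int((maxf - minf) / delta) + 1)]
    fc = list(up)
    while len(fc) < len(x):
        fc += up[::-1] + up               # down ramp, then up ramp
    fc = fc[:len(x)]                      # trim to input length
    # state-variable filter with time-varying coefficient F1
    F1 = 2 * math.sin(math.pi * fc[0] / fs)
    Q1 = 2 * damp                         # small damp -> narrow pass band
    yh = [0.0] * len(x); yb = [0.0] * len(x); yl = [0.0] * len(x)
    yh[0] = x[0]; yb[0] = F1 * yh[0]; yl[0] = F1 * yb[0]
    for n in range(1, len(x)):
        yh[n] = x[n] - yl[n-1] - Q1 * yb[n-1]
        yb[n] = F1 * yh[n] + yb[n-1]
        yl[n] = F1 * yb[n] + yl[n-1]
        F1 = 2 * math.sin(math.pi * fc[n] / fs)   # recalculated as Fc changes
    peak = max(abs(v) for v in yb) or 1.0
    return [v / peak for v in yb]         # normalise

# example: process a short 440 Hz test tone (example signal, not the report's)
fs = 44100
x = [math.sin(2 * math.pi * 440 * n / fs) for n in range(2000)]
y = bandpass_wah(x, fs)
```

Note the one-sample lag in the coefficient update (sample n uses the F1 computed from fc[n-1]), which mirrors the loop structure of the Matlab code above.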
The function call looks as follows:

yb = bandpass(x, fs, damp, minf, maxf, Fw)

2.2 M-Fold Wah-Wah Filter Function

To implement the described M-fold Wah-Wah filter, a for loop applies multiple bandpass filters across a defined frequency spectrum. An increment is derived by subtracting the low end (Fmin) from the high end (Fmax) and dividing by the desired number of bandpass filters (M). The output (yb) is a matrix with M columns, each holding the output of a single bandpass filter applied to the input.

% increment value used on Fmin to determine the low and high ends of each
% bandpass filter
Finc = (Fmax-Fmin)/M;

for i = 1:M
    yb(:,i) = bandpass(x, fs, damp, Fmin, Fmin+Finc, Fw);
    Fmin = Fmin + Finc;   % increment Fmin to the next bandpass
end

Finally, the signals are mixed together and normalised to provide the final output.

% mix signals and normalise
y_out = sum(yb,2);
max_y_out = max(abs(y_out));
y_out = y_out./max_y_out;

The function call is as follows:

y_out = mFoldWahWah(x, fs, M, damp, Fmin, Fmax, Fw);

2.3 Output and Parameter Settings

The function was first tested on a white noise signal to confirm it was operational. Figures 6, 7 and 8 below present the spectrograms of a processed white noise signal with M = 5, 20 and 100, respectively. The filter works as intended, showing multiple bandpass filters operating at their respective frequency ranges. With M = 100 the individual bands are no longer visible in the spectrogram, owing to the large number of filters and the very small bandwidth each requires; given the first two figures, however, it can be deduced that the M = 100 case also works.

{Figure 6 – Spectrogram of processed white noise, M = 5}

{Figure 7 – Spectrogram of processed white noise, M = 20}

{Figure 8 – Spectrogram of processed white noise, M = 100}

The filter was then tested on an acoustic guitar sample.
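The M-fold structure of Section 2.2 can be sketched end-to-end as follows. This is a hedged, self-contained Python illustration (the report's actual code is Matlab); the helper `_bandpass` is a compact version of the Section 2.1 filter, and all names and test parameters are chosen for the example.

```python
import math

def _bandpass(x, fs, damp, minf, maxf, Fw):
    # compact single-band wah: state-variable filter with a triangular
    # centre-frequency sweep, mirroring Section 2.1 (illustrative only)
    delta = Fw / fs
    up = [minf + i * delta for i in range(int((maxf - minf) / delta) + 1)]
    fc = list(up)
    while len(fc) < len(x):
        fc += up[::-1] + up
    F1 = 2 * math.sin(math.pi * fc[0] / fs)
    Q1 = 2 * damp
    yb = [0.0] * len(x)
    yh = x[0]; yb[0] = F1 * yh; yl = F1 * yb[0]
    for n in range(1, len(x)):
        yh = x[n] - yl - Q1 * yb[n-1]
        yb[n] = F1 * yh + yb[n-1]
        yl = F1 * yb[n] + yl
        F1 = 2 * math.sin(math.pi * fc[n] / fs)
    return yb

def m_fold_wah_wah(x, fs, M, damp, Fmin, Fmax, Fw):
    """Apply M bandpass wah filters over [Fmin, Fmax], mix, and normalise."""
    Finc = (Fmax - Fmin) / M              # width of each band
    lo = Fmin
    bands = []
    for _ in range(M):
        bands.append(_bandpass(x, fs, damp, lo, lo + Finc, Fw))
        lo += Finc                        # step up to the next band
    y = [sum(col) for col in zip(*bands)] # mix the M outputs
    peak = max(abs(v) for v in y) or 1.0
    return [v / peak for v in y]          # normalise

# example call with hypothetical parameters
fs = 44100
x = [math.sin(2 * math.pi * 440 * n / fs) for n in range(3000)]
y = m_fold_wah_wah(x, fs, M=5, damp=0.05, Fmin=500, Fmax=3000, Fw=2000)
```

As in the report's Matlab, each of the M filters processes the full input independently and the columns are summed before normalisation.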