FMOL: Toward User-Friendly, Sophisticated New Musical Instruments


Sergi Jordà
Music Technology Group, Audiovisual Institute
Pompeu Fabra University
Passeig de la Circumval·lació 8
08003 Barcelona, Spain
[email protected]

Computer Music Journal, 26:3, pp. 23–39, Fall 2002. © 2002 Massachusetts Institute of Technology.

The design of new instruments and controllers for performing live computer music is an exciting field of research that can lead to truly new music-making paradigms. The ever-increasing availability of sensing technologies that enable virtually any kind of physical gesture to be detected and tracked has indeed sprouted a wealth of experimental instruments and controllers with which to explore new creative possibilities. However, the design of these new controllers is often approached from an essentially technical point of view, in which the novelty of the sensing technologies deployed overshadows the attainable musical results. Although new instruments are not constrained by the physical restrictions of traditional ones, an integrated approach in which the instrument is designed as a whole, and not as a combination of arbitrary input and output devices, can lead to the creation of more rewarding and more musical instruments.

This article describes an attempt at such an integrated conception, called F@ust Music On-Line (FMOL), a simple mouse-controlled instrument that has been used on the Internet by hundreds of musicians during the past four years. FMOL was created with the complementary goals of introducing the practice of experimental electronic music to newcomers while remaining attractive to more advanced electronic musicians. It has been used by musicians of diverse skills for the collective composition of the music of two important shows, including an opera by the Catalan theatre group La Fura dels Baus, and it is also being played in live concerts.

New Musical Instruments and New Music-Making Paradigms

The conception and design of new musical interfaces is a burgeoning multidisciplinary area where technological knowledge (sensor technology, sound synthesis and processing techniques, computer programming, etc.), artistic creation, and a deep understanding of musicians' culture must converge to create new interactive music-making paradigms. If new musical interfaces can be partially responsible for shaping future music, these new musical paradigms should not be left to improvisation. We are not asserting that the design of new instruments must follow any given tradition, but when trying to define new models, we cannot ignore existing ones. A wide knowledge of these, together with the personal beliefs and intuitions of each designer, should provide orientation about what could be changed and what could be kept in incoming designs.

Prior knowledge may originate from different fields or practices, but we could roughly consider that new instrument designers have three major backgrounds to deal with: the first is a millennial one, as old and rich as the history of music-making, while the other two are less than half a century old. We are talking about computer music and the realm of human-computer interaction (HCI). What we are suggesting is that new musical controllers must obviously include new capabilities that were unavailable, even unimaginable, in traditional mechanical instruments. However, they should also try to recover essential components of the acoustic legacy that have been left out during the recent decades of computer music owing to former technical limitations in digital technologies. Moreover, they could benefit as well from the existing corpus of knowledge and research in non-musical areas of HCI (Wanderley 2001; see also www.csl.sony.co.jp/~poup/research/chi2000wshp/papers/orio.pdf).

Musical Interfaces Are Not Musical Instruments

Musical instruments transform the actions of one or more performers into sound. An acoustic instrument consists of an excitation source under the control of the performer(s) and a resonating system that couples the vibrations of the excitation to the surrounding atmosphere and affects, in turn, the precise patterns of vibration. In most acoustic instruments (apart from organs and other keyboard instruments), the separation between the input and excitation subsystems is unclear. Digital musical instruments, on the other hand, can always be divided into a gestural controller or input device that takes control information from the performer(s) and a sound generator that plays the role of the excitation source. Digital instruments may also include a virtual resonating system, but real resonators are usually missing, substituted by a digital-to-analog converter (DAC) that feeds an amplifier connected to a speaker system. Often, when discussing new digital instruments, we are in fact referring only to the input device component. In that sense, the Theremin is a complete musical instrument, while a MIDI Theremin is only a controller or an interface, and not necessarily a musical one.

The separation between gestural controllers and sound generators as standardized by MIDI led to the creation and development of many new alternative controllers with which to explore new creative possibilities (e.g., Chadabe 1996; Wanderley and Battier 2000). However, this separation should not always be seen as a virtue or an unavoidable technical imposition. The most frequent criticisms of this division concern the limitations of the protocol that connects the two components of the instrument chain. Are today's standard music communication protocols (e.g., MIDI) flexible enough, or are the potential dialogues between controllers and generators limited or narrowed by these existing protocols? MIDI does indeed have many limitations (low bandwidth, low resolution, inaccurate timing, half-duplex communication, etc.) (Moore 1988), but we optimistically believe that there is still room for experimentation and improvement while remaining within MIDI's constraints. A recent study in that direction was offered by Goebl and Bresin (2001), who investigated the measuring and reproducing capabilities of a Yamaha Disklavier MIDI piano. They concluded that the weakest point of that system lies not in the recorded MIDI information but in how this information is used for reproduction.

Our complaint is much more banal but at the same time frequently overlooked, considering the number of generic new musical controllers designed today. Is it possible to design nonspecific controllers, to develop highly sophisticated interfaces, without a profound assumption about how the sound or music generators will work? Apart from the truism that not all controller–generator combinations are equally satisfying (nothing will match a MIDI wind controller when driving a physical-model synthesizer of a woodwind instrument), the final expressiveness and richness of any musical interface (controller) cannot be independent of its generator and the mapping applied between them.

We firmly believe that a parallel design of controllers and generators is needed to create new sophisticated musical instruments and music-making paradigms; both design processes should therefore be treated as a whole. A complementary approach to this separation problem is proposed by Ungvary and Vertegaal (2000), creators of the SensOrg Cyberinstrument, a modular and highly configurable assembly of input/output hardware devices designed for optimal musical expression.

Mapping and Feedback Strategies

Whether following a parallel design or maintaining two independent approaches, the physical and logical separation of the input device from the sound production necessitates multiple ways of processing and mapping the information coming from the input device. Mapping therefore becomes an essential element in designing new instruments.

Mappings in acoustical instruments are often multidimensional and slightly nonlinear. Blowing harder on many wind instruments not only affects dynamics but also influences pitch, and in these control difficulties lies much of their expressiveness. As Johan Sundberg suggests, the voice, possibly the most expressive instrument ever, can also be considered one of the most poorly designed from an engineering point of view (Sundberg 1987), given the complex relationship between articulator movement and formant frequency changes that makes the articulation/formant-frequency relationship a one-to-many system.

Duplex communication allows haptic and force-feedback manipulations, which may bring richer dialog loops between the instrument and the performer (Bongers 1994; Gillespie 1999). However, force feedback is not the only feedback loop present in traditional music playing. An acoustical instrument resonates, affecting its patterns of vibration, and this acoustic feedback that takes place inside the instrument is fundamental to the overall sound production process. We could metaphorically imagine that through this feedback the instrument is "conscious" of the sound it is producing, and that this information is also given back to the performer.

For the sake of simplicity, many electronic instruments use basic
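The contrast the section draws, between basic mappings and the multidimensional, slightly nonlinear mappings of acoustic instruments, can be sketched in a few lines of code. This is an illustrative toy rather than anything from the article itself: the breath-pressure model, the 1.2 exponent, and the 0.01 pitch-coupling coefficient are all invented for the example.

```python
def one_to_one(pressure):
    """Basic mapping: one control dimension drives one synthesis parameter."""
    return {"amplitude": pressure}

def wind_like(pressure, base_freq=440.0):
    """Acoustic-style one-to-many mapping: breath pressure drives loudness
    but also slightly raises pitch, as blowing harder does on many winds.
    The exponent and coupling coefficient are arbitrary values chosen only
    to make the nonlinearity and the cross-coupling visible."""
    return {
        "amplitude": pressure ** 1.2,                      # mildly nonlinear dynamics
        "frequency": base_freq * (1.0 + 0.01 * pressure),  # pressure-to-pitch coupling
    }

# Raising the pressure moves both parameters in the coupled mapping,
# but only one in the simple mapping.
print(one_to_one(0.5))
print(wind_like(0.5))
```

In a real instrument such a mapping layer would sit between the controller's input stream and the synthesis engine; the point of the section is that much expressiveness comes precisely from these cross-couplings and control difficulties rather than from independent one-to-one channels.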