Free-Space Gesture Mappings for Music and Sound Gabrielle Odowichuk Master of Applied Science


Free-Space Gesture Mappings for Music and Sound
by Gabrielle Odowichuk
BEng, University of Victoria, 2009

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Applied Science in the Department of Electrical and Computer Engineering.

© Gabrielle Odowichuk, 2012, University of Victoria. All rights reserved. This thesis may not be reproduced in whole or in part, by photocopy or other means, without the permission of the author.

Supervisory Committee
Dr. P. Driessen, Co-Supervisor (Department of Electrical and Computer Engineering)
Dr. G. Tzanetakis, Co-Supervisor (Department of Computer Science)
Dr. Wyatt Page, Member (Department of Electrical and Computer Engineering)

Abstract
This thesis describes a set of software applications for real-time, gesturally controlled interactions with music and sound. The applications for each system are varied but related, addressing unsolved problems in the field of audio and music technology. The three systems presented in this work capture 3D human motion with spatial sensors and map position data from the sensors onto sonic parameters. Two different spatial sensors are used interchangeably to perform motion capture: the radiodrum and the Xbox Kinect. The first two systems are aimed at creating immersive, virtually augmented environments. The first application uses human gesture to move sounds spatially in a 3D surround-sound environment by physically modelling the movement of sound in a space. The second application is a gesturally controlled, self-organized music browser in which songs are clustered based on auditory similarity. The third application is specifically aimed at extending musical performance through the development of a digitally augmented vibraphone. Each of these applications is presented with related work, theoretical and technical details for implementation, and discussions of future work.

Table of Contents
Supervisory Committee
Abstract
Table of Contents
List of Figures
Acknowledgements
1 Introduction
  1.1 Problem Formulation
  1.2 Thesis Structure
2 Background and Motivation
  2.1 Contextualizing a Gesture
  2.2 Data Mapping
  2.3 Free-space Gesture Controllers
  2.4 A Case Study
3 Capturing Motion
  3.1 Spatial Sensor Comparison
  3.2 Latency
  3.3 Range
  3.4 Software Tools
  3.5 Future Work with Motion Capture
4 Motion-controlled Spatialization
  4.1 Related Work
  4.2 Sound Localization
  4.3 Creating a Spatial Model
  4.4 Implementation
  4.5 Summary and Future Work
5 Gesturally-controlled Music Browsing
  5.1 Related Work
  5.2 Organizing Music in a 3D Space
  5.3 Navigating Through the Collection
  5.4 Implementation
  5.5 Summary and Future Work
6 Hyper-Vibraphone
  6.1 Related Work
  6.2 Gestural Range (Magic Eyes)
  6.3 Adaptive Control (Fantom Faders)
  6.4 Summary and Future Work
7 Conclusions
  7.1 Recommendations for Future Work
Bibliography

List of Figures
2.1 Interactions between Sound and Motion
2.2 Data Mapping from a Gesture to Sound
2.3 Mickey Mouse, controlling a cartoon world with his movements in Fantasia
2.4 Leon Theremin playing the Theremin
2.5 Radiodrum design diagram
2.6 Still shots from MISTIC concert
3.1 Sensor Fusion Experiment Hardware Diagram
3.2 Sensor Fusion Experiment Software Diagram
3.3 Demonstration of Latency for the Radiodrum and Kinect
3.4 Captured Motion of Four Drum Strikes
3.5 Radiodrum Viewable Area
3.6 Kinect Viewable Area
3.7 Horizontal Range of both controllers
4.1 Room within a room model
4.2 Implementation Flow Chart
4.3 Delay Line Implementation
4.4 Image Source Model
4.5 OpenGL Screenshot
5.1 A 3D self-organizing map before (a) and after (b) training with an 8-color dataset
5.2 3D SOM with two genres and user-controlled cursor
5.3 Implementation Diagram
6.1 Music Control Design
6.2 Audio Signal Chain
6.3 Virtual Vibraphone Faders
6.4 Computer Vision Diagram
6.5 Virtual recreation of the vibraphone

Acknowledgements
I'd like to begin by thanking my co-supervisors, Dr. George Tzanetakis and Dr. Peter Driessen, for their support, patience, and many teachings through my undergraduate and graduate studies at UVic. Peter's enthusiasm for my potential and my future has given me motivation and confidence, especially combined with the respect I have for his incredible knowledge and experience. Whenever I asked George if he was finally getting sick of me, he would assure me that could never happen. I'm still not sure how that's possible after all this time, but what a relief, and I will always strive to one day be as totally awesome in every way as George.

My first encounter with this field of research and much of my early enthusiasm came from sitting in the classroom of Dr. Andy Schloss. His dry sense of humour and passion for the material are what got me into this world. Thanks also to Kirk McNally, for helping me set up the speaker cube and teaching me some crucial skills with audio equipment, and to Dr. Wyatt Page for his help with my thesis and for showing me what an amazing academic presentation looks like. Early on in my master's program, Steven Ness welcomed me into our research lab, and has helped me understand how to be an effective researcher. Many other friends and colleagues have helped me along the way: Tiago Tiavares, Sonmez Zehtabi, Alex Lerch, and Scott Miller were all of particular importance to me.

A large chapter of this thesis is about a collaboration with Shawn Trail, who is a dear friend and the inspiration for what is, in my mind, the research with the greatest potential impact down the road. The use of this type of gestural control, when integrated completely into music practice, has expressive possibilities that are still very much untapped. Thanks also to David Parfit for collaborating with me in the Trimpin concert, which gave me more context and empirical proof that this type of control is rich with expressive possibilities.

Paul Reimer is a close friend and my indispensable coding consultant. If I found myself spending more than a few hours beating my head against a wall with a technical issue, I needed only ask Paul for help and my problem would soon be solved. Marlene Stewart has been another source of much support. It's so rare to have people in your life you can rely on so completely, like Paul and Marlene.
Thanks, mom 'n dad, for being the proud, supportive parents who raised the kind of daughter who goes and gets a master's degree in engineering. And finally, thank you to NSERC and SSHRC for supplying the funding for this research.

Chapter 1
Introduction

The ability of sound and human gestures to affect one another is a fascinating and useful notion, often associated with artistic expression. For example, a pianist will make gestures and motions that affect the sounds produced by the piano, and also some that do not. Both types of gesture are important to the full experience of the performance. A dancer, though not directly changing the music, is also creating an expressive representation of the music, or of the ideas and emotions evoked by the music. In this case, sound affects motion. The connection between auditory and visual senses is a large part of what makes audio-visual performances interesting to watch and listen to.

Advances in personal computing and the adoption of new technologies allow the creation of new and novel mappings between visual and auditory information, and the growing capability of personal computers is a large motivator for this research. The mapping of free-space human gesture to sound used to be a strictly off-line operation: a collection of computers and sensors captured motion, and the corresponding auditory output was then computed and played back afterwards. Modern computers are able to sense motion and gesture and react almost instantaneously.

The ability to capture free-space motion, perform complex calculations, and produce corresponding audio in real time is a fundamental requirement for the implementation of these systems. Exploiting this kind of control also requires careful thought about how it should be applied in different contexts. The secondary feedback of audio playback is an important aspect of what makes gesture-controlled sound and music useful, because accessing or manipulating aural information by listening to auditory feedback of that information is intuitive and natural.

Though there are many types of gestures used in human-computer interaction (HCI), this work focuses on three-dimensional motion capture of large, relatively slow, continuous human motions. The purpose of this work is not to classify these motions and recognize gestures to trigger events. Instead, the focus is on mapping continuous human motion onto sonic parameters in three new ways that are both intuitive and useful in the music and audio industry; a minimal sketch of such a mapping is given below.

1.1 Problem Formulation
The possible applications of these gesturally controlled audio systems span several different facets of HCI, and address a variety of music- and audio-industry-related problems, such as:

• Intuitive control in 2D and 3D control scenarios. Intuitive and ergonomic control is an important consideration in the field of HCI.
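To make the idea of a continuous gesture-to-sound mapping concrete, here is a minimal Python sketch. The axis assignments (lateral position to pan, depth to loudness, height to pitch) and the ranges are illustrative assumptions for a generic spatial sensor, not the mappings the thesis actually implements:

```python
import numpy as np

def map_position(x, y, z):
    """Map a 3D hand position (sensor coordinates, metres) onto sonic
    parameters. Axis roles and ranges here are illustrative assumptions."""
    def norm(v, lo, hi):
        return float(np.clip((v - lo) / (hi - lo), 0.0, 1.0))
    pan = norm(x, -1.0, 1.0)                  # 0 = hard left, 1 = hard right
    amp = norm(y, -1.0, 1.0)                  # nearer the sensor = louder
    pitch_hz = 110.0 * 2.0 ** (3.0 * norm(z, 0.0, 2.0))  # A2 up three octaves
    return pitch_hz, amp, pan

print(map_position(0.0, 0.5, 1.0))  # centre pan, fairly loud, ~311 Hz
```

Because every parameter varies continuously with position, no gesture recognition step is needed; the performer's motion is the control signal itself.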
Recommended publications
  • On the Choice of Gestural Controllers for Musical Applications: an Evaluation of the Lightning II and the Radio Baton
On the Choice of Gestural Controllers for Musical Applications: An Evaluation of the Lightning II and the Radio Baton
Carmine Casciato, Music Technology Area, Schulich School of Music, McGill University, Montreal, Canada, August 2007. A thesis submitted to McGill University in partial fulfillment of the requirements for the degree of Master of Arts. © 2007 Carmine Casciato and the regents of McGill University.

Abstract
This thesis evaluates the Lightning II and the Radio Baton gestural controllers for musical applications from two main perspectives. The first involves a technical specification of each in terms of its construction and sensing technology. This step, along with an analysis of the insights of long-term users of the controllers in question, provides an understanding of the different musical contexts each controller can be and has been used in. The second perspective involves studying the Radio Baton and the Lightning within a specific musical context, namely that of a simulated acoustic percussion instrument performance. Three expert percussionists performed basic percussion techniques on a real drum, a drum-like gestural controller (the Roland V-Drum), the Radio Baton, and the Lightning II. The motion capture and audio data from these trials suggest that certain acoustic percussion playing techniques can be successfully transferred over to gestural controllers. This comparative analysis between gestural controllers adds to the ongoing discussion on the evaluation of digital musical instruments and their relationship to acoustic instruments.

Résumé (translated from French): This thesis examines the Lightning II and Radio Baton gestural controllers from two main perspectives. The first involves, for each controller, a technical specification in terms of hardware and sensing technology.
  • Touchpoint: Dynamically Re-Routable Effects Processing As a Multi-Touch Tablet Instrument
Proceedings ICMC|SMC|2014, 14-20 September 2014, Athens, Greece
Touchpoint: Dynamically Re-Routable Effects Processing as a Multi-Touch Tablet Instrument
Nicholas K. Suda and Owen S. Vallis, Ph.D, California Institute of the Arts, 24700 McBean Pkwy., Valencia, California 91355, United States. [email protected], [email protected]

Abstract
Touchpoint is a multi-touch tablet instrument which presents the chaining-together of non-linear effects processors as its core music synthesis technique. In doing so, it utilizes the on-the-fly re-combination of effects processors as the central mechanic of performance. Effects processing as synthesis is justified by the fact that the order in which non-linear systems are arranged results in a diverse range of different output signals. Because the effects processor instrument is a collection of software, the signal processing ecosystem is virtual. This means that processors can be re-defined, re-configured, created, and destroyed instantaneously, as a "note-level" musical decision within a performance.

[Figure 1. The sprawl of peripherals used to control IDM artist Tim Exile's live performance.]

The software of Touchpoint consists of three components. The signal processing component, which is addressed via Open Sound Control (OSC), runs in Reaktor Core. The touchscreen component runs in the iOS version of Lemur, and the networking component uses ChucK. In time stretching, pitch correcting, sample replacing, noise reducing, sound file convolving, and transient flagging their recordings, studio engineers of every style are reg-
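Touchpoint's own Reaktor and ChucK code is not shown here, but the core claim, that non-linear processors do not commute, is easy to demonstrate. A minimal numpy sketch with two stand-in processors (the function names and parameters are illustrative, not Touchpoint's):

```python
import numpy as np

def drive(x, gain=8.0):
    """Soft-clipping distortion: a memoryless non-linear processor."""
    return np.tanh(gain * x)

def bitcrush(x, levels=8):
    """Quantize the waveform to a coarse amplitude grid: another non-linearity."""
    return np.round(x * levels) / levels

t = np.linspace(0, 1, 44100, endpoint=False)
x = 0.5 * np.sin(2 * np.pi * 220 * t)   # a quiet 220 Hz sine

a = bitcrush(drive(x))   # distort, then crush
b = drive(bitcrush(x))   # crush, then distort

# The two chains produce audibly different signals: order is a musical choice.
print(np.max(np.abs(a - b)))   # noticeably non-zero
```

Re-routing the chain during performance therefore changes the synthesis result itself, which is what makes the routing a "note-level" decision.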
  • Resonant Spaces Notes
Andrew McPherson, Resonant Spaces, for cello and live electronics
Performance Notes

Overview
This piece involves three components: the cello, live electronic processing, and "sequenced" sounds which are predetermined but which render in real time to stay synchronized to the performer. Each component is notated on a separate staff (or pair of staves) in the score. The cellist needs a bridge contact pickup on his or her instrument. It can be either integrated into the bridge or adhesively attached. A microphone will not work as a substitute, because the live sound plays back over speakers very near to the cello and feedback will inevitably result. The live sound simulates the sound of a resonating string (and, near the end of the piece, ringing bells). This virtual string is driven using audio from the cello. Depending on the fundamental frequency of the string and the note played by the cello, different pitches are produced and played back by speakers placed near the cellist. The sequenced sounds are predetermined, but are synchronized to the cellist by means of a series of cue points in the score. At each cue point, the computer pauses and waits for the press of a footswitch to advance. In some cases, several measures go by without any cue points, and it is up to the cellist to stay synchronized to the sequenced sound. An in-ear monitor with a click track is available in these instances to aid in synchronization. The electronics are designed to be controlled in real-time by the cellist, so no other technician is required in performance.
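The notes do not specify how the virtual string is implemented; a string resonance driven by external audio is commonly realized as a feedback comb filter tuned to the string's fundamental. A minimal Python sketch under that assumption (the function name, tuning, and damping values are illustrative, not McPherson's patch):

```python
import numpy as np

def string_resonator(x, sr=44100, f0=110.0, decay=0.995, damp=0.5):
    """Feedback comb filter tuned to f0: the input audio excites a virtual
    string whose loop delay is one period of the fundamental."""
    n = max(1, int(round(sr / f0)))   # delay length in samples
    y = np.zeros(len(x))
    lp = 0.0                          # one-pole low-pass state inside the loop
    for i in range(len(x)):
        fb = y[i - n] if i >= n else 0.0
        lp = (1.0 - damp) * fb + damp * lp   # damping filter darkens the decay
        y[i] = x[i] + decay * lp
    return y

# Driving the resonator with noise bursts (standing in for cello audio)
# rings the virtual string at 110 Hz and its harmonics.
```

Cello notes near the string's harmonics excite the loop strongly, which matches the description that different played pitches produce different resonant responses.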
  • Trimpin: Above, Below, and In Between
TRIMPIN: ABOVE, BELOW, AND IN BETWEEN
Seattle Symphony, Ludovic Morlot

Above, Below, and In Between, a site-specific composition
Part 1 (1:36)
Part 2 (2:55)
Part 3 – For Jessika (4:20)
Part 4 (2:34)
Part 5 (6:00)
Part 6 (5:00)
Total time: 22:30

Jessika Kenney, soprano; Sayaka Kokubo, viola; Penelope Crane, viola; Eric Han, cello; David Sabee, cello; Jordan Anderson, double bass; Joseph Kaufman, double bass; Ko-ichiro Yamamoto, trombone; David Lawrence Ritt, trombone; Stephen Fissel, trombone

SEATTLESYMPHONY.ORG. ℗ & © 2016 Seattle Symphony Media. All rights reserved. Unauthorized copying, hiring, lending, public performance and broadcasting of this record prohibited without prior written permission from the Seattle Symphony. Benaroya Hall, 200 University Street, Seattle, WA 98101. Made in USA. Photo: Larey McDaniel.

SEATTLE SYMPHONY
Founded in 1903, the Seattle Symphony is one of America's leading symphony orchestras and is internationally acclaimed for its innovative programming and extensive recording history. Under the leadership
  • User Manual, Mellotron V - Welcome to the Mellotron: the company was called Mellotronics, and the first product, the Mellotron Mark 1, appeared in 1963.
USER MANUAL

Special Thanks

DIRECTION
Frédéric BRUN, Kévin MOLCARD

DEVELOPMENT
Pierre-Lin LANEYRIE, Benjamin RENARD, Marie PAULI, Samuel LIMIER, Baptiste AUBRY, Corentin COMTE, Mathieu NOCENTI, Simon CONAN, Geoffrey GORMOND, Florian MARIN, Matthieu COUROUBLE, Timothée BÉHÉTY, Arnaud BARBIER, Germain MARZIN, Maxime AUDFRAY, Yann BURRER, Adrien BARDET, Kevin ARCAS, Pierre PFISTER, Alexandre ADAM, Loris DE MARCO, Raynald DANTIGNY

DESIGN
Baptiste LE GOFF, Morgan PERRIER, Shaun ELLWOOD, Jonas SELLAMI

SOUND DESIGN
Victor MORELLO, Boele GERKES, Ed Ten EYCK, Paul SCHILLING

SPECIAL THANKS
Terry MARDSEN Ben EGGEHORN Jay JANSSEN Paolo NEGRI Andrew CAPON Boele GERKES Jeffrey CECIL Peter TOMLINSON Fernando Manuel Chuck CAPSIS Jose Gerardo RENDON Richard COURTEL RODRIGUES Hans HOLEMA SANTANA JK SWOPES Marco CORREIA Greg COLE Luca LEFÈVRE Dwight DAVIES Gustavo BRAVETTI Ken Flux PIERCE George WARE Tony Flying SQUIRREL Matt PIKE Marc GIJSMAN Mat JONES Ernesto ROMEO Adrien KANTER Jason CHENEVAS-PAULE Neil HESTER

MANUAL
Fernando M RODRIGUES Vincent LE HEN (editor) Jose RENDON (Author) Minoru KOIKE Holger STEINBRINK Stephan VANKOV Charlotte METAIS Jack VAN

© ARTURIA SA – 2019 – All rights reserved. 11 Chemin de la Dhuy, 38240 Meylan, FRANCE. www.arturia.com

Information contained in this manual is subject to change without notice and does not represent a commitment on the part of Arturia. The software described in this manual is provided under the terms of a license agreement or non-disclosure agreement. The software license agreement specifies the terms and conditions for its lawful use. No part of this manual may be reproduced or transmitted in any form or by any means, for any purpose other than the purchaser's personal use, without the express written permission of ARTURIA S.A.
  • USB MIDI Controller (KS): an ultra-compact MIDI controller which serves to control music software
USB MIDI CONTROLLER

INTRODUCTION
This is an ultra-compact MIDI controller which serves to control music software. It is equipped with 25/49/61 velocity-sensitive keys as well as 1 fader and 4 rotary controls. The controller connects to both PC and Mac (an OTG converter cable is required for connecting to a mobile phone or tablet). You can do without a bulky power adapter, as power supply is via the USB bus. The full user manual and other setting instructions can be downloaded from http://en.worlde.com.cn/.

2 STEPS FOR KS TO WORK
1) Connect your KS to the computer by USB cable.
2) Open your DAW in your computer and select KS as your hardware controller in your DAW. Now you are ready to go.

CONNECTING A COMPUTER
Use the included USB cable to connect the USB MIDI controller to a USB 2.0 port on your computer. The power will turn on. Select the KS as MIDI controller within your music software and you are ready to go.

DEVICE SETUP IN SOFTWARE
To select KS as a controller for your digital audio workstation (DAW):
1. Connect KS to your computer using a standard USB cable. (If you are connecting KS to a USB hub, make sure it is a powered hub. If another USB 3.0 device is connected to the same hub, KS may not work properly.)
2. Open your DAW.
3. Open your DAW's Preferences, Options, or Device Setup, select KS as your hardware controller, and then close that window.
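Beyond a DAW, any MIDI-capable software can read a class-compliant controller like this. A minimal sketch using the Python mido library (using mido is an assumption here, not something the manual prescribes; the port name "KS" must be replaced with whatever your system actually reports):

```python
import mido  # pip install mido python-rtmidi

print(mido.get_input_names())   # list available ports to find the controller

# "KS" is an assumed port name; substitute what get_input_names() printed.
with mido.open_input("KS") as port:
    for msg in port:            # blocks, yielding messages as they arrive
        if msg.type == "note_on" and msg.velocity > 0:
            print(f"key {msg.note} struck with velocity {msg.velocity}")
```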
  • The Evolution of the Performer Composer
CONTEMPORARY APPROACHES TO LIVE COMPUTER MUSIC: THE EVOLUTION OF THE PERFORMER COMPOSER
by Owen Skipper Vallis
A thesis submitted to the Victoria University of Wellington in fulfillment of the requirements for the degree of Doctor of Philosophy. Victoria University of Wellington, 2013.

Supervisory Committee
Dr. Ajay Kapur (New Zealand School of Music), Supervisor
Dr. Dugal McKinnon (New Zealand School of Music), Co-Supervisor
© Owen Vallis, 2013, New Zealand School of Music

ABSTRACT
This thesis examines contemporary approaches to live computer music, and the impact they have on the evolution of the composer performer. How do online resources and communities impact the design and creation of new musical interfaces used for live computer music? Can we use machine learning to augment and extend the expressive potential of a single live musician? How can these tools be integrated into ensembles of computer musicians? Given these tools, can we understand the computer musician within the traditional context of acoustic instrumentalists, or do we require new concepts and taxonomies? Lastly, how do audiences perceive and understand these new technologies, and what does this mean for the connection between musician and audience?

The focus of the research presented in this dissertation examines the application of current computing technology towards furthering the field of live computer music. This field is diverse and rich, with individual live computer musicians developing custom instruments and unique modes of performance. This diversity leads to the development of new models of performance, and the evolution of established approaches to live instrumental music. This research was conducted in several parts. The first section examines how online communities are iteratively developing interfaces for computer music.
  • USB MIDI Controller (PANDAMINI): an ultra-compact and light-weight MIDI controller which serves to control music software
USB MIDI CONTROLLER

INTRODUCTION
This is an ultra-compact and light-weight MIDI controller which serves to control music software. It is equipped with 25 velocity-sensitive keys and 8 velocity-sensitive drum pads, as well as 4 faders and 4 rotary controls. The controller connects to both PC and Mac (an OTG converter cable is required for connecting to a mobile phone or tablet). You can do without a bulky power adapter, as power supply is via the USB bus. The unit is supplied with a software editor which you can download from the WORLDE website. The software editor will let you customize this USB MIDI controller to your own requirements. The full user manual and other setting instructions can be downloaded from http://en.worlde.com.cn/.

3 STEPS FOR PANDAMINI TO WORK
1) Connect your PANDAMINI to the computer by USB cable.
2) Download the software editor from the download page of the WORLDE website, customize all editable controllers, and create, save and load presets.
3) Open your DAW in your computer and select PANDAMINI as your hardware controller in your DAW. Now you are ready to go.

CONNECTING A COMPUTER
Use the included USB cable to connect the USB MIDI controller to a USB 2.0 port on your computer. The power will turn on and the scene LED will light up. Select the PANDAMINI as MIDI controller within your music software and you are ready to go.

FIRST STEPS WITH THE EDITOR
The editor will let you customize all editable controllers, and create, save and load presets.
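The faders and rotaries arrive at the host as MIDI control change messages, so a program can map them onto arbitrary parameters. A hedged mido sketch; the CC numbers, parameter names, and port name below are placeholders (the actual CC assignments depend on the preset loaded with the WORLDE editor):

```python
import mido  # pip install mido python-rtmidi

# Hypothetical mapping: CC numbers 1-4 stand in for whatever the preset sends.
FADER_CCS = {1: "volume", 2: "cutoff", 3: "resonance", 4: "reverb"}
params = {name: 0.0 for name in FADER_CCS.values()}

with mido.open_input("PANDAMINI") as port:   # port name is an assumption
    for msg in port:
        if msg.type == "control_change" and msg.control in FADER_CCS:
            # MIDI CC values span 0-127; normalize to 0.0-1.0.
            params[FADER_CCS[msg.control]] = msg.value / 127.0
            print(params)
```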
  • DVD Program Notes
DVD Program Notes

Part One: Thor Magnusson, Alex McLean, Nick Collins, Curators

Curators' Note
[Editor's note: The curators attempted to write their Note in a collaborative, improvisatory fashion reminiscent of live coding, and have left the document open for further interaction from readers. See the following URL: https://docs.google.com/document/d/1ESzQyd9vdBuKgzdukFNhfAAnGEgLPgLlCeMw8zf1Uw/edit?hl=en_GB&authkey=CM7zg90L&pli=1.]

Click Nilson is a Swedish avant garde codisician and code-jockey. He has explored the live coding of human performers since such early self-modifying algorithmic text pieces as An Instructional Game for One to Many Musicians (1975). He is now actively involved with Testing the Oxymoronic Potency of Language Articulation Programmes (TOPLAP), after being in the right bar (in Hamburg) at the right time (2 AM, 15 February 2004). He previously curated for Leonardo Music Journal and the Swedish Journal of Berlin Hot Drink Outlets.

Alex McLean is a researcher in the area of programming languages for the arts, writing his PhD within the Intelligent Sound and Music Systems group at Goldsmiths College, and also working within the OAK group, University of Sheffield. He is one-third of the live-coding ambient-gabba-skiffle band Slub, who have been making [...]

[Figure 1. Sam Aaron.]

1. Overtone—Sam Aaron
In this video Sam gives a fast-paced introduction to a number of key live-programming techniques such as triggering instruments, scheduling future events, and synthesizer design. [...] more effectively and efficiently. He has successfully applied these ideas and techniques in both industry and academia. Currently, Sam leads Improcess, a collaborative
  • An Arduino-Based MIDI Controller for Detecting Minimal Movements in Severely Disabled Children
IT 16054, degree project (Examensarbete) 15 hp, June 2016
An Arduino-based MIDI Controller for Detecting Minimal Movements in Severely Disabled Children
Mattias Linder
Department of Information Technology, Uppsala University

Abstract
In therapy, music has played an important role for children with physical and cognitive impairments. Due to the nature of different impairments, many traditional instruments can be very hard to play. This thesis describes the development of a product in which electrical sensors can be used as a way of creating sound. These sensors can be used to create specially crafted controllers, making it possible for children with different impairments to create music or sound. The thesis examines whether it is possible to create such a device with the help of an Arduino microcontroller, a smartphone and a computer. The end result is a product that can use several sensors simultaneously to either generate notes, change the volume of a note, or control the pitch of a note. There are three inputs for specially crafted sensors and three static potentiometers which can also be used as specially crafted sensors. The sensor inputs for the device are mini tele (2.5 mm), and any sensor can be used as long as it can be equipped with this connector. The product is used together with a smartphone application to upload different settings, and a computer with a music workstation which interprets the device as a MIDI synthesizer.
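The abstract names three behaviours: generating notes, changing volume, and controlling pitch. The thesis's Arduino firmware is not shown in the excerpt, but the mapping logic can be sketched host-side in Python with mido; the threshold, CC choice, base note, and port handling below are illustrative assumptions:

```python
import mido  # pip install mido python-rtmidi

def sensor_to_midi(note_s, volume_s, pitch_s, base_note=60):
    """Map three normalized (0.0-1.0) sensor readings onto the three
    behaviours described above. Thresholds and CC numbers are assumptions."""
    msgs = []
    if note_s > 0.5:  # crossing the threshold triggers a note
        msgs.append(mido.Message("note_on", note=base_note, velocity=100))
    msgs.append(mido.Message("control_change", control=7,       # CC 7: volume
                             value=int(volume_s * 127)))
    msgs.append(mido.Message("pitchwheel",                      # centre = 0
                             pitch=int((pitch_s - 0.5) * 16382)))
    return msgs

out = mido.open_output()            # default MIDI output port
for m in sensor_to_midi(0.8, 0.6, 0.5):
    out.send(m)
```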
  • Virtual Musical Instruments: Technological Aspects and Interactive Performance Issues
Virtual Musical Instruments: Technological Aspects and Interactive Performance Issues
Suguru Goto, [email protected]

Abstract
I have been creating various Gestural Interfaces [1] for use in my compositions for Virtual Musical Instruments [2]. These Virtual Musical Instruments do not merely refer to the physical instruments, but also involve Sound Synthesis [3], programming and Interactive Video [4]. Using the Virtual Musical Instruments, I experimented with numerous compositions and performances. This paper is intended to report my experiences, as well as their development, and concludes with a discussion of some issues as well as the problem of the very notion of interactivity.

1. An interface which translates body movement into analog signals. It contains a controller created with sensors and a video scanning system, usually built by the artist himself or with a collaborator. This does not include commercially produced MIDI controllers.
2. This refers to a whole system which contains Gesture, Gestural Interface, Mapping Interface, algorithm, Sound Synthesis, and Interactive Video. Depending on the programming and artistic concept, it may vary extensively.
3. A domain of programming to generate sound with a computer. In this article, this programming emphasizes the relationship between gesture and sound production.
4. A video image which is altered in real time. In Virtual Musical Instruments, the image is changed by gesture, and is usually projected on a screen in a live performance.

Introduction
The problem confronting artists who work with interactive media is the use of commercially-produced computers. Very few of them build their machines from scratch.
  • Instruments for Spatial Sound Control in Real Time Music Performances
Andreas Pysiewicz, Stefan Weinzierl
Instruments for Spatial Sound Control in Real Time Music Performances. A Review
Chapter in book | Accepted manuscript (postprint). This version is available at https://doi.org/10.14279/depositonce-9008
Pysiewicz, A., & Weinzierl, S. (2016). Instruments for Spatial Sound Control in Real Time Music Performances. A Review. In: Bovermann et al. (Eds.): Musical Instruments in the 21st Century (pp. 273–296). Springer Singapore. https://doi.org/10.1007/978-981-10-2951-6_18

Terms of Use: Copyright applies. A non-exclusive, non-transferable and limited right to use is granted. This document is intended solely for personal, non-commercial use.

Abstract
The systematic arrangement of sound in space is widely considered as one important compositional design category of Western art music and acoustic media art in the 20th century. A lot of attention has been paid to the artistic concepts of sound in space and its reproduction through loudspeaker systems. Much less attention has been attracted by live-interactive practices and tools for spatialisation as performance practice. As a contribution to this topic, the current study has conducted an inventory of controllers for the real-time spatialisation of sound as part of musical performances, and classified them both along different interface paradigms and according to their scope of spatial control. By means of a literature study, we were able to identify 31 different spatialisation interfaces presented to the public in the context of artistic performances or at relevant conferences on the subject.
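The review classifies interfaces by their scope of spatial control; underneath most of them sits some form of amplitude panning. As a point of reference, here is a minimal sketch of equal-power stereo panning, the base case of such real-time spatialisation (not taken from the chapter itself; the function name and ranges are illustrative):

```python
import numpy as np

def equal_power_pan(x, azimuth):
    """Equal-power stereo panning: azimuth in [-1, 1] sweeps the source from
    left to right while total power stays constant (cos^2 + sin^2 = 1)."""
    theta = (azimuth + 1.0) * np.pi / 4.0        # map [-1, 1] -> [0, pi/2]
    return np.cos(theta) * x, np.sin(theta) * x  # (left, right) channels

# A performance controller (fader, joystick, tracked gesture) can update
# `azimuth` block by block to move the source continuously during the piece.
```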