FaustPad: A Free Open-Source Mobile App for Multi-Touch Interaction with Faust Generated Modules
Hyung-Suk Kim, Julius Smith


To cite this version: Hyung-Suk Kim, Julius Smith. FaustPad: A free open-source mobile app for multi-touch interaction with Faust generated modules. Proceedings of the 10th International Linux Audio Conference (LAC-12), Apr 2012, Stanford, United States. HAL Id: hal-03162971, https://hal.archives-ouvertes.fr/hal-03162971, submitted on 8 Mar 2021.

Hyung-Suk Kim
Dept. of Electrical Engineering, Stanford University
Palo Alto, CA 94305, USA
[email protected]

Julius O. Smith
Center for Computer Research in Music and Acoustics (CCRMA), Stanford University
Palo Alto, CA 94305, USA
[email protected]

Abstract

In this paper we introduce a novel interface for mobile devices enabling multi-touch interaction with sound modules generated with Faust¹ (Functional Audio Stream) and run in SuperCollider. The interface allows a streamlined experience for audio experimentation and exploration of sound design.

Keywords

Faust, mobile music, SuperCollider, music app.

1 Introduction

Faust (Functional Audio Stream) [1] is a functional domain-specific language for real-time signal processing and synthesis. Programs written in Faust are translated into highly efficient C++ code, which can then be compiled into modules for various architectures and synthesis environments including SuperCollider, PureData and ChucK. Faust separates the specification from the underlying implementation and the UI, allowing the programmer to focus on the design of signal processing or synthesis modules. A module can be tested quickly by compiling the program as a standalone application. However, integrating a UI for a synthesis environment such as SuperCollider or PureData is not trivial, and in either case the user interaction is limited to mouse-pointer interaction.

Mobile devices introduce a tangible, interactive experience compared to that of the traditional desktop. Recent mobile devices, i.e. smart phones, enable multi-touch interaction as well as other features, e.g. accelerometer, GPS, gyroscope, which allow novel methods of interacting with sound.

Mobile devices have also become computationally powerful enough for audio synthesis directly on the device. The MoMu toolkit [2], used by the Stanford Mobile Phone Orchestra (MoPho), is a toolkit for music synthesis on iOS devices. Unfortunately, most audio SDKs for mobile devices are OS dependent, requiring separate learning for each new mobile OS. Moreover, the computational power of mobile devices is still limited compared to desktop systems, so computationally complex modules are hard to run on mobile devices.

Alternatively, mobile devices can be used as music controllers via OSC; TouchOSC and Control² are examples. Such apps are general-purpose OSC controllers that are highly customizable. The price of customizability is that it takes time to set up a module, which can become very cumbersome if the sound module has many controls. Physical modeling synthesis programs can easily have over 20 parameters, which can be very time consuming to set up.

In this paper we present an OSC music controller app, FaustPad,³ that automatically creates the interface and the necessary OSC settings based on the compiled results from Faust. This approach reduces the complicated setup process, allowing a streamlined experience for experimenting with the generated sound module.

2 Overview

Setting up a module for interaction requires the following steps: 1) compiling the program with Faust, 2) setting up the synthesis environment, and 3) setting up the mobile device.

In this paper we cover setting up a Faust synthesis module with SuperCollider. In the last section we look at an example of installing all the modules in Faust-STK [3], the Synthesis Toolkit port for Faust.

We use Faust to create a SuperCollider extension which is loaded into scsynth, the SuperCollider server. FaustPad is used as an alternative scsynth client, replacing sclang, the SuperCollider interpreter.

Figure 1: Overview flowchart of FaustPad setup. The file in parentheses may change depending on the environment used. Here we show the case for a SuperCollider extension.

3 Compiling with Faust

Recent Faust releases include scripts to compile programs for various environments. For SuperCollider extensions, we used faust2supercollider to create a SuperCollider class file (.sc) and a SuperCollider extension (.scx).

The Faust compiler can create an abstract "user interface" definition in XML format with the --xml option. This XML file is used to create the UI for the mobile app. For an in-depth description see "Audio Signal Processing in FAUST".⁴

4 SuperCollider

SuperCollider consists of two parts: the client, sclang, and the server, scsynth. The two parts communicate via OSC. FaustPad acts as an alternative client to sclang.

It is easy to import custom synthesis modules into SuperCollider as "extensions" by copying the .sc and .scx files into the SuperCollider extension folder.

Synth definition (SynthDef) files need to be defined and loaded to create instances of the synth modules. Control parameter names are also defined in the SynthDef files, and the parameter names of the SynthDef files must match those of the app. This can be achieved by parsing the UI definition XML file from Faust to create the SynthDef file. The Java project in the FaustPad GitHub repository builds an executable jar file, FaustScParser.jar, that does this.

    SynthDef.new("sitar", {
        arg freq = 440.0, gain = 1.0, gate = 0.0,
            resonance = 0.7, damp = 0.72, roomsize = 0.54,
            wet = 0.141, pan_angle = 0.6, spatial_width = 0.5;
        Out.ar(0, FaustSitar.ar(freq, gain, gate, resonance,
            damp, roomsize, wet, pan_angle, spatial_width));
    }).load(s);

Figure 2: Output example of FaustScParser.

In sclang, a SynthDef is compiled into byte code which is then sent to scsynth as an OSC message. As of the time of writing, the SynthDef needs to be loaded once manually by the script created by FaustScParser. Once loaded, the SynthDef is saved on the server and will be loaded each time scsynth boots. We plan to port SynthDef compiling into FaustPad to further reduce the setup process.

5 FaustPad

FaustPad is an OSC controller app with auto-generated UIs tailored to Faust created modules. Ease of setup comes at the price of customizability: the app only works with Faust generated modules. Given the expandability of Faust, it is a tradeoff worth making.

One advantage of using a mobile device is multi-touch: multiple sliders and buttons can be modified simultaneously. FaustPad currently supports iOS devices only. An iPhone or iPod Touch can handle up to 5 touches and an iPad can handle up to 11 touches at once.

5.1 Building the UI

A typical Faust user interface definition has two parts: 1) the definition of widgets, i.e. the control parameters, both passive and active, and 2) the layout definition. The separation of components and layout frees the UI builder from a static layout.

In the current implementation of FaustPad, the app maintains the given layout, creating foldable "group views". Accordingly, the automated building of the UI has two phases: 1) the creation of widgets and 2) the creation of groups, which form the layout.

Figure 3: Screenshots of FaustPad for iPad and iPod Touch. The FaustPad UI works in any orientation. In the iPod Touch screenshot, groups are folded to show only controls of interest.

5.2 Layout principles

Faust generated modules may have many control parameters. This is especially true for physical modeling synthesis modules such as many of the modules in Faust-STK. For many parameters, once a value is set there may be little need to change it during a performance. In FaustPad, unused groups can be folded so that the user can focus on the parameters of interest.

FaustPad also allows multiple modules to be created. Each module is created in a separate tab. Tabs are always visible in the tab bar at the bottom of the UI, allowing quick changes of synth module.

5.3 OSC messaging

Each UI component holds its own data, e.g. label and min/max values, from the UI description file. When there is a user interaction event, the component sends an event to the OSC messaging object, which translates the event into OSC messages for the synthesis environments. The OSC messaging object also listens for incoming OSC messages that can be used for automation and feedback from the server.

6 Example: Faust-STK

We have tested FaustPad with Faust-STK, included in the Faust release. The example files come with a Makefile for each environment, including SuperCollider. Creating the UI description files is less trivial, because the Makefile removes the files at the end of the compilation. This can be avoided by modifying faust2supercollider and Makefile.sccompile. After setting up the aforementioned scripts and loading all SynthDefs, we could use all 21 Faust-STK modules in FaustPad by simply copying the files to the FaustPad documents directory.

7 Extending FaustPad

The FaustPad user interface only adheres to the user interface description file and makes no assumptions about the underlying structure until the moment an OSC message is sent in the OSC messaging object. Thus it is possible to extend FaustPad to other OSC-enabled synthesis environments such as PureData or ChucK.

Notes:
1. http://faust.grame.fr/
2. http://charlie-roberts.com/Control/
3. Code and documentation for FaustPad can be found at https://github.com/hskim08/FaustPad
4. https://ccrma.stanford.edu/~jos/spf/Using_FAUST_SuperCollider.html
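The SynthDef-generation step described in Section 4 can be sketched as follows: parse the Faust UI description produced by --xml and emit a SynthDef whose argument names and defaults match the widget labels, so that the controller's OSC messages line up with scsynth. This is a minimal Python illustration, not the actual Java FaustScParser; the XML fragment below is a simplified stand-in for Faust's real --xml schema, and the element names in it are assumptions made for the example.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment in the spirit of Faust's --xml UI description;
# the real schema may differ (element names here are assumptions).
FAUST_UI_XML = """
<ui>
  <activewidgets>
    <widget type="hslider" id="1"><label>freq</label><init>440.0</init></widget>
    <widget type="hslider" id="2"><label>gain</label><init>1.0</init></widget>
    <widget type="button"  id="3"><label>gate</label><init>0.0</init></widget>
  </activewidgets>
</ui>
"""

def synthdef_from_ui(name, ugen, xml_text):
    """Build SuperCollider SynthDef source whose argument names match the
    widget labels, so OSC control messages map onto the right parameters."""
    root = ET.fromstring(xml_text)
    params = [(w.findtext("label"), w.findtext("init"))
              for w in root.iter("widget")]
    args = ", ".join(f"{label} = {init}" for label, init in params)
    names = ", ".join(label for label, _ in params)
    return (f'SynthDef.new("{name}", {{ arg {args};\n'
            f'    Out.ar(0, {ugen}.ar({names}));\n'
            f'}}).load(s);')

print(synthdef_from_ui("sitar", "FaustSitar", FAUST_UI_XML))
```

Running this prints a SynthDef in the shape of Figure 2 (truncated here to three parameters); the real tool would read the XML file written by the Faust compiler instead of an inline string.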