faust2api: a Comprehensive API Generator for Android and iOS


Romain Michon, Julius Smith, Chris Chafe
CCRMA, Stanford University, Stanford, CA 94305-8180, USA

Stéphane Letz, Yann Orlarey
GRAME, Centre National de Création Musicale, 11 Cours de Verdun (Gensoul), 69002 Lyon, France

Abstract

We introduce faust2api, a tool to generate custom DSP engines for Android and iOS using the Faust programming language. Faust DSP objects can easily be turned into MIDI-controllable polyphonic synthesizers or audio effects with built-in sensor support, etc. The various elements of the DSP engine can be accessed through a high-level API, made uniform across platforms and languages. This paper provides technical details on the implementation of this system as well as an evaluation of its various features.

Keywords

Faust, iOS, Android, Mobile Instruments

1 Introduction

Mobile devices (smart-phones, tablets, etc.) have been used as musical instruments for the past ten years, both in the industry (e.g., GarageBand for iPad (http://www.apple.com/ios/garageband; all the URLs in this paper were verified on 01/26/17), Smule's apps (https://www.smule.com), moForte's GeoShred (http://www.moforte.com/geoshredapp), etc.) and in the academic community ([Tanaka, 2004], [Geiger, 2006], [Gaye et al., 2006], [Essl and Rohs, 2009] and [Wang, 2014]).

Implementing real-time Digital Signal Processing (DSP) engines from scratch on mobile platforms can be hard using the standard audio APIs provided with common operating systems (we'll only cover iOS and Android here). Indeed, CoreAudio on iOS and OpenSL ES on Android are relatively low-level APIs offering customization possibilities not needed by most audio app developers. Fortunately, several third-party cross-platform APIs exist to work with real-time audio on mobile devices at a higher level (e.g., SuperPowered (http://superpowered.com), JUCE (https://www.juce.com), etc.). Additionally, several open-source tools make it possible to use objects written in common computer music languages on mobile platforms: libpd [Brinkmann et al., 2011] for PureData (https://puredata.info) and the Mobile Csound Platform (MCP) [Lazzarini et al., 2012] for Csound (http://www.csounds.com).

Similarly, we introduced faust2android in a previous publication [Michon, 2013]: a tool that turns Faust (http://faust.grame.fr) [Orlarey et al., 2009] code into a fully operational Android application. faust2android is based on faust2api [Michon et al., 2015], which turns a Faust program into a cross-platform API usable on Android and iOS to carry out various kinds of real-time audio processing tasks.

In this paper, we present a completely redesigned version of faust2api offering the same features on Android and iOS:

• polyphony and MIDI support,
• audio effects chains,
• built-in sensor support,
• low-latency audio,
• etc.

First, we'll give an overview of how faust2api works. Then, technical details on the implementation of this system will be provided. Finally, we'll evaluate it and present future directions for this project.

2 Overview

2.1 Basics

At its highest level, faust2api is a command-line program taking a Faust code as its main argument and generating a package containing a series of files implementing the DSP engine. Various flags can be used to customize the API.
The only required flag is the target platform:

    faust2api -ios myCode.dsp

will generate a DSP engine for iOS, and

    faust2api -android myCode.dsp

will generate a DSP engine for Android.

The content of each package is quite different between these two platforms (see §3), but the format of the API itself remains very similar (see Figure 1 at page 4). The iOS DSP engines generated with faust2api consist of a large C++ object (DspFaust) accessible through a separate header file. This object can be conveniently instantiated and used in any C++ or Objective-C code in an app project. A typical "life cycle" for a DspFaust object can be:

    DspFaust *dspFaust = new DspFaust(SR, blockSize);
    dspFaust->start();
    dspFaust->stop();
    delete dspFaust;

start() launches the computation of the audio blocks and stop() stops (pauses) the computation. These two methods can be repeated as many times as needed. The constructor allows the sampling rate and the block size to be specified, and is used to instantiate the audio engine. While the configuration of the audio engine is very limited at the API level (only these two parameters can be configured through it), lots of flexibility is given to the programmer within the Faust code. For example, if the Faust object doesn't have any input, then no audio input will be instantiated in the audio engine, etc.

The value of the different parameters of a Faust object can easily be modified once the DspFaust object is created and running. For example, the freq parameter of the simple Faust code

    f = nentry("freq",440,50,1000,0.01);
    process = osc(f);

can be modified simply by calling

    dspFaust->setParamValue("freq",440);

Faust user-interface elements (nentry here) are ignored by faust2api and simply used as a way to declare parameters controllable in the API. API packages generated by faust2api also contain a markdown documentation providing information on how to use the API as well as a list of all the parameters controllable with setParamValue().

2.2 MIDI Support

MIDI support can easily be added to a DspFaust object simply by providing the -midi flag when calling faust2api. MIDI support works the same way on Android and iOS: all MIDI devices connected to the mobile device before the app is launched can control the Faust object, and any new device connected while the app is running will also be able to control it.

Standard Faust MIDI metadata (see the Faust quick reference: http://faust.grame.fr/images/faust-quick-reference.pdf) can be used to assign MIDI CCs to specific parameters. For example, the freq parameter of the previous code could be controlled by MIDI CC 52 simply by writing

    f = nentry("freq[midi: ctrl 52]",440,50,1000,0.01);

2.3 Polyphony

Faust objects can be conveniently turned into polyphonic synthesizers simply by specifying the maximum number of voices of polyphony when calling faust2api using the -nvoices flag. In practice, only active voices are allocated and computed, so this number is just used as a safeguard.

As in the tools for making Faust synthesizers that have been used for many years, such as faust2pd, compatibility with the -nvoices option requires the freq, gain, and gate parameters to be defined. faust2api automatically takes care of converting MIDI note numbers to frequency values in Hz for freq, MIDI velocity to linear amplitude gain for gain, and note-on (1) and note-off (0) for gate:

    f = nentry("freq",440,50,1000,0.01);
    g = nentry("gain",1,0,1,0.01);
    t = button("gate");
    process = osc(f)*g*t;

Here, t could be used to trigger an envelope generator, for example. In such a case, the voice would stop being computed only after t is set to 0 and the tail-off amplitude becomes smaller than -60 dB (configurable using macros in the application code).

A wide range of methods is accessible to work with voices.
A "typical" life cycle for a MIDI note can be:

    long voiceAddress = dspFaust->keyOn(note, velocity);
    dspFaust->setVoiceParamValue("param", voiceAddress, paramValue);
    dspFaust->keyOff(note);

setVoiceParamValue() can be used to change the value of a parameter for a specific voice.

Alternatively, voices can be allocated without specifying a note number and a velocity:

    long voiceAddress = dspFaust->newVoice();
    dspFaust->setVoiceParamValue("param", voiceAddress, paramValue);
    dspFaust->deleteVoice(voiceAddress);

This can be very convenient, for example, to associate voices with specific fingers on a touchscreen.

When MIDI support is enabled in faust2api, MIDI events will automatically interact with voices. Thus, if a MIDI keyboard is connected to the mobile device, it will be able to control the Faust object without additional configuration steps.

The structure of the DSP engine package is quite different for Android, since it contains both C++ and JAVA files (see §3). Otherwise, the same steps can be used to work with the DspFaust object.

2.4 Adding Audio Effects

In most cases, effects don't need to be re-implemented for each voice of polyphony and can be placed at the end of the DSP chain. faust2api allows the programmer to provide a Faust object implementing the effects chain, which is connected to the output of the polyphonic synthesizer.

3 Implementation

In this section, we provide more information on the architecture of DSP engines generated by faust2api for Android and iOS.

faust2api relies on the Faust architecture system to generate its custom DSP engines [Letz et al., 2017]. For example, turning a monophonic Faust synthesizer into a polyphonic one can be done in a simple, generic way.

Both on Android and iOS, faust2api generates a large C++ file implementing all the features used by the high-level API. On iOS, this API is accessed through a C++ header file that can be conveniently included in any C++ or Objective-C code. On Android, a JAVA interface allows interaction with the native (C++) block. The DSP C++ code is the same for all platforms (see Figure 2 at page 5) and is wrapped into an object implementing the polyphonic synthesizer followed by the effects chain (assuming that the -nvoices and -poly2 options were used during compilation).

3.1 iOS

The global architecture of API packages generated by faust2api is relatively simple on iOS, since C++ code can be used directly in Objective-C (which is one of the two languages used to make iOS applications along