A Standard Audio API for C++: Motivation, Scope, and Basic Design
Recommended publications
MAGIC Summoning: Towards Automatic Suggesting and Testing of Gestures with Low Probability of False Positives During Use
Journal of Machine Learning Research 14 (2013) 209-242. Submitted 10/11; revised 6/12; published 1/13. Daniel Kyu Hwa Kohlsdorf and Thad E. Starner, GVU & School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA 30332. Editors: Isabelle Guyon and Vassilis Athitsos.
Abstract: Gestures for interfaces should be short, pleasing, intuitive, and easily recognized by a computer. However, it is a challenge for interface designers to create gestures easily distinguishable from users' normal movements. Our tool MAGIC Summoning addresses this problem. Given a specific platform and task, we gather a large database of unlabeled sensor data captured in the environments in which the system will be used (an "Everyday Gesture Library" or EGL). The EGL is quantized and indexed via multi-dimensional Symbolic Aggregate approXimation (SAX) to enable quick searching. MAGIC exploits the SAX representation of the EGL to suggest gestures with a low likelihood of false triggering. Suggested gestures are ordered according to brevity and simplicity, freeing the interface designer to focus on the user experience. Once a gesture is selected, MAGIC can output synthetic examples of the gesture to train a chosen classifier (for example, with a hidden Markov model). If the interface designer suggests his own gesture and provides several examples, MAGIC estimates how accurately that gesture can be recognized and estimates its false positive rate by comparing it against the natural movements in the EGL. We demonstrate MAGIC's effectiveness in gesture selection and helpfulness in creating accurate gesture recognizers.
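The SAX step mentioned in the abstract is what makes the Everyday Gesture Library cheap to index: each window of continuous sensor data is reduced to a short symbolic word that can be hashed and searched. The sketch below is a minimal, single-dimension illustration of that idea (z-normalize, average into segments, map to an alphabet via Gaussian breakpoints); the paper's multi-dimensional variant and its indexing structures are not shown, and the function name, alphabet size, and segment count are chosen only for this example.

```cpp
#include <cmath>
#include <iostream>
#include <numeric>
#include <string>
#include <vector>

// Reduce one window of sensor samples to a SAX word: z-normalize the window,
// average it into `segments` pieces (Piecewise Aggregate Approximation), then
// map each average to a letter using equiprobable N(0,1) breakpoints.
std::string saxWord(const std::vector<double>& window, std::size_t segments) {
    const double n = static_cast<double>(window.size());
    const double mean = std::accumulate(window.begin(), window.end(), 0.0) / n;
    double var = 0.0;
    for (double x : window) var += (x - mean) * (x - mean);
    const double sd = std::sqrt(var / n) + 1e-12;     // guard against a flat window

    const double breakpoints[] = {-0.67, 0.0, 0.67};  // alphabet size 4: 'a'..'d'
    const std::size_t step = window.size() / segments;

    std::string word;
    for (std::size_t s = 0; s < segments; ++s) {
        double avg = 0.0;
        for (std::size_t i = s * step; i < (s + 1) * step; ++i)
            avg += (window[i] - mean) / sd;           // z-normalized sample
        avg /= static_cast<double>(step);             // PAA value of this segment
        char symbol = 'a';
        for (double b : breakpoints)
            if (avg > b) ++symbol;                    // pick the letter for the region
        word += symbol;
    }
    return word;
}

int main() {
    const std::vector<double> accel = {0.1, 0.3, 1.2, 1.4, 0.9, -0.2, -1.1, -1.3};
    std::cout << saxWord(accel, 4) << '\n';           // prints "cdca" for this window
}
```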
Audio Middleware: The Essential Link from Studio to Game Design
AUDIONEXT, by Alexander Brandon. When I first played games such as Pac Man and Asteroids in the early '80s, I was fascinated. While others saw a cute, beeping box, I saw something to be torn open and explored. How could these games create sounds I'd never heard before? Back then, it was transistors, followed by simple, solid-state sound generators programmed with individual memory registers, machine code and dumb terminals. Now, things are more complex. We're no longer at the mercy of 8-bit, or handing a sound to a programmer and saying, "Put it in." Today, game audio engineers have just as much power to create an exciting soundscape as anyone at Skywalker Ranch. (Well, okay, maybe not Randy Thom, but close, right?) But just as a single-channel strip on a Neve or SSL once baffled me, sound-bank manipulation can baffle your average recording engineer.
[...] GameCODA. The same is true of Renderware native audio tools. One caveat: Criterion is now owned by Electronic Arts. The Renderware site was last updated in 2005, and many developers are scrambling to Unreal 3 due to uncertainty of Renderware's future. Pity, it's a pretty good engine. Streaming is supported, though it is not revealed how it is supported on next-gen consoles. What is nice is you can specify whether you want a sound streamed or not within CAGE Producer. GameCODA also provides the ability to create ducking/mixing groups within CAGE. In code, this can also be taken advantage of using virtual voice channels. Other than SoundMAX (an older audio engine by Analog Devices and Staccato), GameCODA was the first audio engine I've seen that uses matrix technology to [...]
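The ducking/mixing groups and virtual voice channels mentioned above are a generic middleware idea: every sound is routed through a named group, and one group can attenuate another while it has voices playing. The sketch below illustrates that concept only; it is not CAGE's or GameCODA's actual API, and all class and method names are invented for the example.

```cpp
#include <algorithm>
#include <string>
#include <unordered_map>
#include <vector>

// Generic illustration of a ducking/mix group (hypothetical names, not CAGE's API):
// sounds are counted as voices on a named bus, and a rule attenuates a target
// bus while a source bus has active voices.
class Mixer {
public:
    void addBus(const std::string& name) { buses_[name] = Bus{}; }

    // While `source` has active voices, `target` is ducked to `gainWhileDucked`.
    void setDucking(const std::string& source, const std::string& target,
                    float gainWhileDucked) {
        rules_.push_back({source, target, gainWhileDucked});
    }

    void voiceStarted(const std::string& bus) { ++buses_[bus].activeVoices; }
    void voiceStopped(const std::string& bus) { --buses_[bus].activeVoices; }

    // Effective gain of a bus after applying every ducking rule that targets it.
    float effectiveGain(const std::string& bus) {
        float g = buses_[bus].gain;
        for (const auto& r : rules_)
            if (r.target == bus && buses_[r.source].activeVoices > 0)
                g = std::min(g, r.gainWhileDucked);
        return g;
    }

private:
    struct Bus  { float gain = 1.0f; int activeVoices = 0; };
    struct Rule { std::string source, target; float gainWhileDucked; };
    std::unordered_map<std::string, Bus> buses_;
    std::vector<Rule> rules_;
};

int main() {
    Mixer m;
    m.addBus("music");
    m.addBus("dialogue");
    m.setDucking("dialogue", "music", 0.3f);   // music drops to 30% under speech
    m.voiceStarted("dialogue");
    float g = m.effectiveGain("music");        // 0.3 while dialogue is playing
    (void)g;
}
```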
Institut Für Musikinformatik Und Musikwissenschaft
Head: Prof. Dr. Christoph Seibert. Course catalogue for the 2021 summer semester, as of 9 April 2021. Preliminary note: In the 2021 summer semester, courses will be held in person, online, or in hybrid form (in-person classes with the option of participating online). How each course is actually run depends on the Corona regulations for university operations in force at the time; also decisive are the number of participants, the maximum number of people permitted in the assigned room, and considerations of course content. The information given here reflects the current state of planning. For further planning, everyone who wants to take part in a course must register by e-mail with the respective lecturer by Wednesday, 7 April. Please also indicate if, because of the pandemic situation, you cannot or do not wish to attend in-person classes, for example because you belong to a risk group. This page is updated at regular intervals to adapt the information to current circumstances; changes relative to earlier versions are marked. Musikinformatik (Music Informatics) staff: Prof. Dr. Marc Bangert, Prof. Dr. Paulo Ferreira-Lopes, Prof. Dr. Eckhard Kahle, Prof. Dr. Christian Langen, Prof. Dr. Damon T. Lee, Prof. Dr. Marlon Schumacher, Prof. Dr. Christoph Seibert, Prof. Dr. Heiko Wandler, Tobias Bachmann, Patrick Borgeat, Anna Czepiel, Dirk Handreke, Daniel Höpfner, Daniel Fütterer, Rainer Lorenz, Nils Lemke, Alexander Lunt, Luís A. [...]
AmigaOS 3.2 FAQ 47.1 (09.04.2021) English
$VER: AmigaOS 3.2 FAQ 47.1 (09.04.2021) English
Please note: This file contains a list of frequently asked questions along with answers, sorted by topics. Before trying to contact support, please read through this FAQ to determine whether or not it answers your question(s). Whilst this FAQ is focused on AmigaOS 3.2, it contains information regarding previous AmigaOS versions.
Index of topics covered in this FAQ:
1. Installation
1.1 * What are the minimum hardware requirements for AmigaOS 3.2?
1.2 * Why won't AmigaOS 3.2 boot with 512 KB of RAM?
1.3 * Ok, I get it; 512 KB is not enough anymore, but can I get my way with less than 2 MB of RAM?
1.4 * How can I verify whether I correctly installed AmigaOS 3.2?
1.5 * Do you have any tips that can help me with 3.2 using my current hardware and software combination?
1.6 * The Help subsystem fails, it seems it is not available anymore. What happened?
1.7 * What are GlowIcons? Should I choose to install them?
1.8 * How can I verify the integrity of my AmigaOS 3.2 CD-ROM?
1.9 * My Greek/Russian/Polish/Turkish fonts are not being properly displayed. How can I fix this?
1.10 * When I boot from my AmigaOS 3.2 CD-ROM, I am being welcomed to the "AmigaOS Preinstallation Environment". What does this mean?
1.11 * What is the optimal ADF images/floppy disk ordering for a full AmigaOS 3.2 installation?
1.12 * LoadModule fails for some unknown reason when trying to update my ROM modules.
Graphical Design of Physical Models for Real-Time Sound Synthesis
Peter Vasil: Graphical Design of Physical Models for Real-Time Sound Synthesis. Implementation of a Graphical User Interface for the Synth-A-Modeler compiler. Master's thesis (Masterarbeit) in Audio Communication and Technology, Technische Universität Berlin, Fakultät 1: Fachgebiet Audiokommunikation. Matr.-Nr.: 328493. Submission: 25 July 2013, © July 2013. Supervisors: Prof. Dr. Stefan Weinzierl and Dr. Edgar Berdahl.
Abstract: The goal of this Master of Science thesis in Audio Communication and Technology at Technical University Berlin is to develop a Graphical User Interface (GUI) for the Synth-A-Modeler compiler, a text-based tool for converting physical model specification files into DSP external modules. The GUI should enable composers, artists and students to use physical modeling intuitively, without having to employ the complex mathematical equations normally necessary for physical modeling. The GUI that will be developed in the course of this thesis allows the creation of physical models with graphical objects for physical masses, links, resonators, waveguides, terminations, and audioout objects. In addition, this tool will be used to create a model for an Arabic oud to demonstrate its functionality.
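Mass-interaction physical modelling of the kind Synth-A-Modeler compiles boils down to integrating simple mechanical elements at audio rate: point masses connected by spring/damper links, terminated at rigid points, with an audio output reading a mass's position. The sketch below is a minimal, self-contained illustration of that idea (one mass tethered to a fixed termination, integrated with semi-implicit Euler); it is not Synth-A-Modeler's object set, file format, or solver, and the constants are chosen only for the example.

```cpp
#include <cstdio>
#include <vector>

// One point mass tied to a rigid termination by a spring/damper link,
// integrated per audio sample with semi-implicit (symplectic) Euler.
int main() {
    const double fs = 44100.0;          // audio sample rate in Hz
    const double dt = 1.0 / fs;
    const double mass = 0.001;          // kg
    const double stiffness = 4000.0;    // N/m   (sets the resonant pitch)
    const double damping = 0.02;        // N*s/m (sets the decay time)

    double position = 0.0;              // m, displacement from rest
    double velocity = 1.0;              // m/s, an initial "pluck"

    std::vector<double> out(256);       // a short buffer of audio output
    for (double& sample : out) {
        const double force = -stiffness * position - damping * velocity;
        velocity += (force / mass) * dt;    // a = F/m, update velocity first
        position += velocity * dt;          // then position (keeps it stable)
        sample = position;                  // "audioout": listen to the mass
    }
    std::printf("first samples: %.6f %.6f %.6f\n", out[0], out[1], out[2]);
}
```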
Understanding and Mitigating the Security Risks of Apple ZeroConf
2016 IEEE Symposium on Security and Privacy. Staying Secure and Unprepared: Understanding and Mitigating the Security Risks of Apple ZeroConf. Xiaolong Bai (1,*), Luyi Xing (2,*), Nan Zhang (2), XiaoFeng Wang (2), Xiaojing Liao (3), Tongxin Li (4), Shi-Min Hu (1). 1: TNList, Tsinghua University, Beijing; 2: Indiana University Bloomington; 3: Georgia Institute of Technology; 4: Peking University.
Abstract: With the popularity of today's usability-oriented designs, dubbed Zero Configuration or ZeroConf, unclear are the security implications of these automatic service discovery, "plug-and-play" techniques. In this paper, we report the first systematic study on this issue, focusing on the security features of the systems related to Apple, the major proponent of ZeroConf techniques. Our research brings to light a disturbing lack of security consideration in these systems' designs: major ZeroConf frameworks on the Apple platforms, including the Core Bluetooth Framework, Multipeer Connectivity and [...]
[...] tend to build their systems in a "plug-and-play" fashion, using techniques dubbed zero-configuration (ZeroConf). For example, the AirDrop service on iPhone, once activated, automatically detects another Apple device nearby running the service to transfer documents. Such ZeroConf services are characterized by automatic IP selection, host name resolving and target service discovery. Prominent examples include Apple's Bonjour [3], and the Link-Local Multicast [...]
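The service discovery described in the excerpt (Bonjour's piece of ZeroConf) is exposed to applications through the DNS-SD C API. The sketch below is a minimal browse loop using that API (dns_sd.h, as shipped with Apple's Bonjour/mDNSResponder); it is meant only as an illustration of the discovery step the paper analyses, not of the paper's attacks or mitigations. Error handling is trimmed, the Windows calling-convention macro (DNSSD_API) is omitted from the callback for brevity, and the service type "_airplay._tcp" is just an example.

```cpp
#include <cstdio>
#include <dns_sd.h>   // Bonjour / DNS-SD C API (part of mDNSResponder)

// Called once per discovered (or removed) service instance.
static void browseReply(DNSServiceRef, DNSServiceFlags flags, uint32_t,
                        DNSServiceErrorType err, const char* name,
                        const char* type, const char* domain, void*) {
    if (err != kDNSServiceErr_NoError) return;
    const bool added = (flags & kDNSServiceFlagsAdd) != 0;
    std::printf("%s service: %s.%s%s\n", added ? "found" : "lost",
                name, type, domain);
}

int main() {
    DNSServiceRef ref = nullptr;
    // Browse the local network for AirPlay receivers; any service type
    // ("_http._tcp", "_ipp._tcp", ...) can be substituted.
    DNSServiceErrorType err =
        DNSServiceBrowse(&ref, 0, 0, "_airplay._tcp", nullptr, browseReply, nullptr);
    if (err != kDNSServiceErr_NoError) return 1;

    // Each call blocks until one reply arrives and dispatches it to the callback.
    for (int i = 0; i < 10; ++i)
        DNSServiceProcessResult(ref);

    DNSServiceRefDeallocate(ref);
}
```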
Porting PortAudio API on ASIO
GRAME - Computer Music Research Lab. Technical Note 01-11-06. Stéphane Letz, November 2001. Grame - Computer Music Research Laboratory, 9 rue du Garet, BP 1185, 69202 Lyon Cedex 01, France.
Abstract: This document describes a port of the PortAudio API using the ASIO API on Macintosh and Windows. It explains the technical choices used, the buffer size adaptation techniques that guarantee minimal additional latency, and the results and limitations.
1 The ASIO API. ASIO (Audio Streaming Input Output) is an API defined and proposed by Steinberg. It addresses the area of efficient audio processing, synchronization, low latency and extensibility on the hardware side. ASIO allows the handling of multi-channel professional audio cards, different sample rates (32 kHz to 96 kHz) and different sample formats (16, 24 or 32 bit integer, or 32/64 bit floating point). ASIO is available on MacOS and Windows.
2 The PortAudio API. PortAudio is a library that provides streaming audio input and output. It is a cross-platform API that works on Windows, Macintosh, Linux, SGI, FreeBSD and BeOS. This means that programs that need to process or generate an audio signal can run on several different computers just by recompiling the source code. PortAudio is intended to promote the exchange of audio synthesis software between developers on different platforms.
3 Technical choices. Porting PortAudio to ASIO means that some technical choices have to be made. The life cycle of the ASIO driver must be "mapped" to the life cycle of a PortAudio application. Each PortAudio function will be implemented using one or more ASIO functions.
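For context on the PortAudio side of the port, the library's central abstraction is a stream with a user-supplied callback that the host API (ASIO in this note) invokes once per buffer. The sketch below shows that callback model with today's PortAudio V19 C API (the 2001-era V18 API the note targets differs in details such as the callback signature); it simply plays a sine tone and is not the ASIO adaptation code itself.

```cpp
#include <cmath>
#include <cstdio>
#include "portaudio.h"

// Minimal PortAudio usage: the host API (ASIO, CoreAudio, ...) calls the
// stream callback once per buffer, and the callback fills `output` with
// `frameCount` frames of audio.
struct SineState { double phase = 0.0, freq = 440.0, rate = 44100.0; };

static int sineCallback(const void* /*input*/, void* output,
                        unsigned long frameCount,
                        const PaStreamCallbackTimeInfo* /*timeInfo*/,
                        PaStreamCallbackFlags /*statusFlags*/, void* userData) {
    const double kTwoPi = 6.283185307179586;
    auto* state = static_cast<SineState*>(userData);
    auto* out = static_cast<float*>(output);
    for (unsigned long i = 0; i < frameCount; ++i) {
        out[i] = static_cast<float>(0.2 * std::sin(state->phase));
        state->phase += kTwoPi * state->freq / state->rate;
    }
    return paContinue;
}

int main() {
    SineState state;
    if (Pa_Initialize() != paNoError) return 1;

    PaStream* stream = nullptr;
    // 0 input channels, 1 output channel, 32-bit float samples; let PortAudio
    // choose the buffer size (paFramesPerBufferUnspecified).
    PaError err = Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, state.rate,
                                       paFramesPerBufferUnspecified,
                                       sineCallback, &state);
    if (err == paNoError) {
        Pa_StartStream(stream);
        Pa_Sleep(2000);                      // play for two seconds
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
    } else {
        std::printf("PortAudio error: %s\n", Pa_GetErrorText(err));
    }
    Pa_Terminate();
    return 0;
}
```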
python-sounddevice Release 0.4.2
python-sounddevice, Release 0.4.2. Matthias Geier, 2021-07-18.
Contents:
1 Installation
2 Usage: Playback; Recording; Simultaneous Playback and Recording; Device Selection; Callback Streams; Blocking Read/Write Streams
3 Example Programs: Play a Sound File; Play a Very Long Sound File; Play a Very Long Sound File without Using NumPy; Play a Web Stream; Play a Sine Signal; Input to Output Pass-Through; Plot Microphone Signal(s) in Real-Time; Real-Time Text-Mode Spectrogram; Recording with Arbitrary Duration; Using a stream in an asyncio coroutine; Creating an asyncio generator for audio blocks
4 Contributing: Reporting Problems; Development Installation; Building the Documentation
5 API Documentation
Foundations for Music-Based Games
The approved original version of this diploma or master thesis is available at the main library of the Vienna University of Technology (http://www.ub.tuwien.ac.at/englweb/).
Master's thesis (Masterarbeit): Foundations for Music-Based Games. Carried out at the Institut für Gestaltungs- und Wirkungsforschung of the Technische Universität Wien under the supervision of Ao.Univ.Prof. Dipl.-Ing. Dr.techn. Peter Purgathofer and Univ.Ass. Dipl.-Ing. Dr.techn. Martin Pichlmair, by Marc-Oliver Marschner, Arndtstrasse 60/5a, A-1120 Wien, 01.02.2008.
Abstract: The goal of this document is to establish a foundation for the creation of music-based computer and video games. The first part is intended to give an overview of sound in video and computer games. It starts with a summary of the history of game sound, beginning with the arguably first documented game, Tennis for Two, and leading up to current developments in the field. Next I present a short introduction to audio, including descriptions of the basic properties of sound waves, as well as of the special characteristics of digital audio. I continue with a presentation of the possibilities of storing digital audio and a summary of the methods used to play back sound, with an emphasis on the recreation of realistic environments and the positioning of sound sources in three dimensional space. The chapter is concluded with an overview of possible categorizations of game audio, including a method to differentiate between music-based games.
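One of the playback topics the abstract lists, positioning of sound sources, reduces in its simplest stereo form to a pan law. The sketch below is a generic constant-power panning example, included only as an illustration of that idea; it is not taken from the thesis.

```cpp
#include <cmath>
#include <cstdio>

// Constant-power stereo panning: the simplest form of source positioning.
// pan in [-1, 1] maps to an angle in [0, pi/2]; the left/right gains are the
// cosine/sine of that angle, so total power stays constant across the pan.
void panGains(float pan, float& left, float& right) {
    const float kPi = 3.14159265f;
    const float angle = (pan + 1.0f) * 0.25f * kPi;  // -1 -> 0, +1 -> pi/2
    left = std::cos(angle);
    right = std::sin(angle);
}

int main() {
    float l, r;
    panGains(0.0f, l, r);                              // centered source
    std::printf("center: L=%.3f R=%.3f\n", l, r);      // both ~0.707
    panGains(-1.0f, l, r);                             // hard left
    std::printf("left:   L=%.3f R=%.3f\n", l, r);      // 1.000 / 0.000
}
```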
Audio Software Developer (F/M/X)
Impulse Audio Lab is an audio company based in the heart of Munich. We are an interdisciplinary team of experts in interactive sound design and audio software development, working on innovative and pioneering projects focused on sound. We are looking for a Software Developer (f/m/x) for innovative audio projects.
You are passionate about audio and DSP algorithms and want to work on new products and solutions. The combination of audio processing and e-mobility sounds exciting to you. You want to solve complex problems through state-of-the-art software solutions and apply your knowledge and expertise to all aspects of the software development lifecycle. What are you waiting for?
These could be your future responsibilities:
• Handling complex audio software projects, from conception and architecture to implementation
• Working on both client-oriented projects as well as our own growing product portfolio
• Development, implementation and optimization of new algorithms for processing and generating audio
• Sharing your experiences and knowledge with your colleagues
This should be you:
• You are educated to degree level or have equivalent experience in relevant fields such as Electrical Engineering, Media Technology or Computer Science
• You have multiple years of experience programming audio software in C/C++, using frameworks such as Qt or JUCE and Audio Plug-in APIs
• You are a team player and are also comfortable taking responsibility for your own projects
• You are able to work collaboratively, from requirements to acceptance and in the corresponding tool landscape (Git, unit testing, build systems, automation, debugging, etc.)
This could be a plus:
• You have programming experience on embedded architectures (e.g. [...]
Cooper (Interaction Design)
Cooper ( Interaction Design Inspiration The Myth of Metaphor Cooper books by Alan Cooper, Chairman & Founder Articles June 1995 Newsletter Originally Published in Visual Basic Programmer's Journal Concept projects Book reviews Software designers often speak of "finding the right metaphor" upon which to base their interface design. They imagine that rendering their user interface in images of familiar objects from the real world will provide a pipeline to automatic learning by their users. So they render their user interface as an office filled with desks, file cabinets, telephones and address books, or as a pad of paper or a street of buildings in the hope of creating a program with breakthrough ease-of- learning. And if you search for that magic metaphor, you will be in august company. Some of the best and brightest designers in the interface world put metaphor selection as one of their first and most important tasks. But by searching for that magic metaphor you will be making one of the biggest mistakes in user interface design. Searching for that guiding metaphor is like searching for the correct steam engine to power your airplane, or searching for a good dinosaur on which to ride to work. I think basing a user interface design on a metaphor is not only unhelpful but can often be quite harmful. The idea that good user interface design is based on metaphors is one of the most insidious of the many myths that permeate the software community. Metaphors offer a tiny boost in learnability to first time users at http://www.cooper.com/articles/art_myth_of_metaphor.htm (1 of 8) [1/16/2002 2:21:34 PM] Cooper ( Interaction Design tremendous cost. -
The Open Master Hearing Aid (openMHA) Application Engineers' Manual
The Open Master Hearing Aid (openMHA) 4.16.1, Application Engineers' Manual. © 2005-2021 by HörTech gGmbH, Marie-Curie-Str. 2, D-26129 Oldenburg, Germany.
License agreement: This file is part of the HörTech Open Master Hearing Aid (openMHA). Copyright © 2005-2010, 2012-2016 HörTech gGmbH; Copyright © 2017-2021 HörTech gGmbH. openMHA is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, version 3 of the License. openMHA is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License, version 3 for more details. You should have received a copy of the GNU Affero General Public License, version 3 along with openMHA. If not, see <http://www.gnu.org/licenses/>.
Contents:
1 Introduction: Structure; Platform Services and Conventions
2 The openMHA configuration language: Structure of the openMHA configuration language; Communication between openMHA Plugins
3 The openMHA host application: Invocation of 'mha'; Configuration variables of the openMHA host application; States of the openMHA host application; Audio abstraction layer
4 GNU Octave/MATLAB tools: "mhactl_wrapper" - openMHA control interface for GNU Octave and MATLAB