Game Audio via OpenAL

Summary

In the Graphics for Games module, you learnt how to use OpenGL to create complex graphical scenes, improving your programming skills along the way, and learning about data structures such as scene graphs. In this workshop, you'll see how to add sounds to your game worlds using the OpenAL sound library.

New Concepts

Sound in games, OpenAL, PCM audio, binary file formats, FourCC codes, WAV files, limited resource management.

Introduction

Audio has played an important part in gaming for almost as long as there have been games to play - even Pong, back in 1972, had simple sound effects. We've moved a long way since then: the 80s brought dedicated audio hardware such as the Commodore 64's SID chip, which could play 3 simultaneous synthesised sounds, and later the Amiga brought the ability to play audio samples to the home gaming market. On modern gaming hardware, we can expect to hear many simultaneous sounds and music tracks, often in surround sound. Game developers now employ dedicated sound engineers who carefully adjust the sounds in each game release to create an immersive aural experience - making sure that each individual sound is uniquely identifiable and correctly equalised, and that every music track suits the situation it will be played in.

At the heart of a game's audio experience is the code that plays back the game sounds and calculates which speakers they should use - the sound system. Although we can't hope to compete with the complex sound systems of AAA games, we should still be able to make a robust, simple system for adding sound to our 3D games, and that's what this workshop will assist you in creating.

Audio Data

Before we even begin to explore how to add in-game audio to our games, it's worthwhile having a brief overview of what sound actually is. Sound is simply the propagation of pressure waves through a medium, such as water or air. If we were to look at a sound wave on an oscilloscope, we'd see a single continuous wave, with an amplitude (how large the peaks and troughs of the wave are - the larger the amplitude, the louder the sound) and a frequency (how often these peaks and troughs occur - the higher the frequency, the higher pitched the sound).

This poses a problem for computers. Sound waves are continuous, yet computers store discrete, digital values - so how do we store our game audio? The answer is Pulse-Code Modulation, or PCM for short. To digitally represent sound, the sound is regularly sampled, and its amplitude stored as a value. PCM data has two important components: the sample rate (measured in hertz, meaning cycles per second, just like a computer's CPU), which determines how often a sample of the sound is taken, and the bit depth, which determines the range of amplitude values that can be stored. As the bit depth is limited, and so only a finite number of values can be stored, the sampled values of a sound wave are quantised - rounded to the nearest value that the bit depth can hold.

Here's an example of our sound wave being transformed into PCM data (clockwise from top left: a section of the original analogue wave; low bit depth and frequency; medium bit depth and frequency; high bit depth and frequency). You should be able to see how both bit depth and sample rate determine the final quality of the digitised sound.
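To make sampling and quantisation a little more concrete, here's a minimal sketch (not part of the workshop code - the function name and constants are my own) that samples a sine wave at a chosen sample rate and quantises each sample to 16-bit signed values, the representation used by most WAV files:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Generate one second of a sine wave as 16-bit signed PCM samples.
// sampleRate controls how often the wave is measured; the 16-bit depth
// limits each measurement to one of 65,536 possible amplitude values.
std::vector<int16_t> GenerateSineWavePCM(float frequency, int sampleRate) {
    std::vector<int16_t> samples(sampleRate);          // 1 second of audio
    const float twoPi = 6.2831853f;

    for (int i = 0; i < sampleRate; ++i) {
        float t = static_cast<float>(i) / sampleRate;  // time of this sample, in seconds
        float amplitude = std::sin(twoPi * frequency * t);   // continuous value, -1..1

        // Quantisation: scale the continuous value into the 16-bit integer
        // range; the conversion to int16_t is what limits it to one of the
        // values the bit depth can hold.
        samples[i] = static_cast<int16_t>(amplitude * 32767.0f);
    }
    return samples;
}

// Example usage: a 440Hz tone sampled 44,100 times a second.
// std::vector<int16_t> tone = GenerateSineWavePCM(440.0f, 44100);
```

Lowering either the sample rate or the bit depth in this sketch makes the stored data a progressively worse approximation of the original wave, which is exactly the quality difference shown in the figure above.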
To give you an idea of the commonly used values for bit depth and sample rate, audio CDs encode their data at a sample rate of 44,100Hz (over 44,000 samples a second!) and a bit depth of 16 bits (65,536 unique values). The audio hardware in your computer can natively handle PCM data, and can turn it back into analogue data for output via speakers or headphones.

WAV Files

One of the most popular file formats for storing PCM data is IBM and Microsoft's WAV file. Unlike the text files you'll have been reading and writing so far, WAV is a 'binary' file format, meaning it contains no human-readable data. Well, almost. WAV files use something known as FourCCs, short for Four-Character Codes, to identify the start of data structures within the file, commonly known as 'chunks'. As the name suggests, a FourCC is a simple 4-byte identifier, with the byte values in the range that represents readable characters. If you were to open a WAV file in a text editor such as Notepad++, you'd see the phrase RIFF at the start of the file, and then WAVE a few characters later - FourCCs that identify the file as being of type RIFF (a container file format, meaning it could contain any sort of binary data), with WAVE data in it.

After the initial file header containing RIFF and WAVE, we have a number of chunks, each of which has a FourCC, a size value, and a number of bytes matching that size value - this structure allows the file to be read in correctly, and searched through if necessary. WAV files can have many types of chunk, containing information on MIDI playback, cue indicators within the file data, and even space for text notes. The two most important chunks, and the only two we'll be processing in the code later on, are the format chunk and the data chunk.

Like other WAV chunks, the format chunk begins with a FourCC and a size, before defining a number of important characteristics of the PCM data contained within the WAV. From it we can determine the number of channels the PCM data has (i.e. whether it is mono, stereo, 5.1, and so on), the sample rate, and the bit depth. WAV files can contain either uncompressed or compressed PCM data (although uncompressed is far more common), so a compression code is also defined. Format chunks sometimes also specify additional data to support the compression type, so another size value is defined, giving the length of this extra data.

There's much less to the data chunk - it's basically just a big lump of binary data, containing either compressed or uncompressed PCM data. All that's really worth pointing out is how multiple channels are handled within the PCM sample data: for each given 'time slice' of data (remember the 44,100 samples per second for a CD?), the PCM data for each channel is stored, one after the other, then the next samples, and so on. This interleaving of sample data per channel allows multichannel sound to be easily 'streamed' from the data source, without having to read data from different parts of the file.
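As a rough illustration of how this chunk structure is traversed, here's a minimal sketch of a WAV reader. It is not the workshop's loader - the LoadWAV name and its parameters are my own - and it assumes uncompressed PCM with a standard 16-byte format chunk; a production loader would need more error checking and should check the compression code.

```cpp
#include <cstdint>
#include <cstring>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical minimal WAV reader: checks the RIFF/WAVE header, then walks
// the chunk list, pulling fields out of the 'fmt ' chunk and copying the raw
// interleaved PCM bytes out of the 'data' chunk.
bool LoadWAV(const std::string& filename, std::vector<char>& pcmData,
             uint16_t& channels, uint32_t& sampleRate, uint16_t& bitsPerSample) {
    std::ifstream file(filename, std::ios::binary);
    if (!file) return false;

    char riff[4], wave[4];
    uint32_t riffSize = 0;
    file.read(riff, 4);                                   // "RIFF" FourCC
    file.read(reinterpret_cast<char*>(&riffSize), 4);     // size of the remaining data
    file.read(wave, 4);                                   // "WAVE" FourCC
    if (std::memcmp(riff, "RIFF", 4) != 0 || std::memcmp(wave, "WAVE", 4) != 0) {
        return false;                                     // not a WAV file
    }

    // Every chunk: 4-byte FourCC, 4-byte size, then 'size' bytes of payload.
    while (file) {
        char fourCC[4];
        uint32_t chunkSize = 0;
        if (!file.read(fourCC, 4)) break;
        if (!file.read(reinterpret_cast<char*>(&chunkSize), 4)) break;

        if (std::memcmp(fourCC, "fmt ", 4) == 0) {
            uint16_t compressionCode = 0;
            file.read(reinterpret_cast<char*>(&compressionCode), 2);
            file.read(reinterpret_cast<char*>(&channels), 2);
            file.read(reinterpret_cast<char*>(&sampleRate), 4);
            file.seekg(6, std::ios::cur);                 // skip byte rate and block align
            file.read(reinterpret_cast<char*>(&bitsPerSample), 2);
            file.seekg(chunkSize - 16, std::ios::cur);    // skip any extra format bytes
        } else if (std::memcmp(fourCC, "data", 4) == 0) {
            pcmData.resize(chunkSize);                    // interleaved PCM samples
            file.read(pcmData.data(), chunkSize);
        } else {
            file.seekg(chunkSize, std::ios::cur);         // skip chunks we don't need
        }
    }
    return !pcmData.empty();
}
```

Note how the size value stored after each FourCC is what lets the reader skip over chunks it doesn't understand - this is exactly why the format is so easy to search through.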
Audio Hardware

Much as with graphical output, sound is usually handled by dedicated hardware, which may have a wide variety of capabilities. Inexpensive 'on board' sound chips might be little more than a digital to analogue signal converter connected to some audio jacks, while more expensive hardware may have support for natively decoding compressed audio, and contain sophisticated signal processors to add effects such as reverb.

Higher-end sound hardware will have a number of discrete channels, or voices, that it can support (that is, the number of concurrent PCM sounds it can process into analogue data and output at one time, a value that is generally independent of whether the PCM data is mono, stereo and so on), while lower-end hardware might perform all PCM data mixing in software, outputting to a single combined channel. Depending on the exact audio features of the hardware, there might even be some amount of dedicated RAM, used to buffer sound samples for higher performance.

OpenAL

As with graphics cards, the majority of the varying hardware features of sound devices are hidden behind an API, which will in turn target the vendor-provided drivers of the hardware. There are a few of these APIs around: DirectSound for Windows, XAudio2 for Windows and Xbox 360, and ALSA for Linux. One of the most pervasive audio APIs is OpenAL, currently hosted and maintained by Creative Labs. As its name suggests, OpenAL aims to be for audio what OpenGL is for graphics - a cross-platform, easy to use API that supports both hardware acceleration and software fallbacks where necessary. Unlike OpenGL, which has an Architecture Review Board to discuss features and improvements to the language (currently handled by the Khronos Group), no such collaborative entity currently exists for OpenAL, with all recent development being undertaken by Creative Labs, developers of the Sound Blaster series of sound cards.

Like OpenGL, OpenAL is a giant 'state machine', with its API functions adding commands to an internal stack, which are popped off one after the other as they are processed. Implementations of OpenAL exist for Windows, Apple Mac and iOS devices, Android, and the Xbox 360. Sony's PlayStation 3 has an OpenAL-like sound API called x, making OpenAL a popular choice for cross-platform programs, and a firm understanding of how to use it an advantage for a budding game developer.

OpenAL allows a number of sound sources to be played in 3D space, with all of the processing required to determine the volume of sound required in each speaker to generate the spatial effect handled automatically. It supports a number of parameters to determine how samples are played back, and how distance from a sound source affects sound playback volume.
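To give a flavour of how this looks in code, here's a minimal sketch of OpenAL initialisation and playback of a mono 16-bit PCM buffer at a position in 3D space. It uses only core OpenAL and ALC calls; the sound system you'll build in this workshop will wrap calls like these in something more robust, with proper error checking via alGetError and reuse of sources and buffers.

```cpp
#include <AL/al.h>
#include <AL/alc.h>
#include <cstdint>
#include <vector>

// Minimal sketch: play a mono 16-bit PCM sound at a point in 3D space.
// Error checking (alGetError / alcGetError) is omitted for brevity.
void PlaySoundAt(const std::vector<int16_t>& pcm, int sampleRate,
                 float x, float y, float z) {
    // Open the default audio device and create a context - OpenAL's 'state'.
    ALCdevice*  device  = alcOpenDevice(nullptr);
    ALCcontext* context = alcCreateContext(device, nullptr);
    alcMakeContextCurrent(context);

    // A buffer holds the raw PCM data for the OpenAL implementation to use.
    ALuint buffer = 0;
    alGenBuffers(1, &buffer);
    alBufferData(buffer, AL_FORMAT_MONO16, pcm.data(),
                 static_cast<ALsizei>(pcm.size() * sizeof(int16_t)), sampleRate);

    // A source is an emitter in 3D space that plays back a buffer; OpenAL
    // works out the per-speaker volumes for the spatial effect itself.
    ALuint source = 0;
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, buffer);
    alSource3f(source, AL_POSITION, x, y, z);
    alSourcef(source, AL_GAIN, 1.0f);           // playback volume
    alSourcePlay(source);

    // Wait for playback to finish (a real sound system would poll this once
    // per frame rather than stalling the game loop like this).
    ALint state = AL_PLAYING;
    while (state == AL_PLAYING) {
        alGetSourcei(source, AL_SOURCE_STATE, &state);
    }

    // Sources and buffers are limited resources, so free them when done.
    alDeleteSources(1, &source);
    alDeleteBuffers(1, &buffer);
    alcMakeContextCurrent(nullptr);
    alcDestroyContext(context);
    alcCloseDevice(device);
}
```

OpenAL also maintains a single listener, whose position and orientation (set with the alListener family of functions) represent the player's ears; the distance between each source and the listener is what drives the automatic volume attenuation described above.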