Audio Coding for Digital Broadcasting
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
DVB DASH Webinar, 13 June 2018: DVB DASH: An Overview (Simon Waller, Samsung); DVB Codecs and DVB DASH (Chris Poole, BBC); DVB DASH in the Wild (Martin Schmalohr, IRT)
DVB DASH: An overview
• Quick ABR refresher
• Why DVB DASH?
• What does DVB DASH include?
• Relationship with HbbTV
• Where next?

ABR refresher and DASH nomenclature
[Diagram: an encoder produces an MPD containing an AdaptationSet; within it there is one Representation per bitrate (Bitrate 1, 2, 3), and each Representation is a sequence of Segments.]

MPEG DASH vs DVB DASH
• MPEG DASH is a large, complicated specification
• DVB has defined a profile of MPEG DASH to help make services and players interoperable
  – This profile includes constraints, requirements, limitations, additions (e.g. A/V codec profiles), etc.

What does DVB DASH cover?

MPD and content constraints
• Profiles to identify features for players (DVB 2014 URN and the new DVB 2017 URN)
  – New 2017 profile required for some of the latest features
• MPD construction
  – Required elements and attributes
  – Maximum number of some elements
• Segment construction
  – E.g. min and max segment durations
• Live vs on demand

Profiled A/V codecs
• Video codecs: AVC; HEVC
• Audio codecs: AC-3; AC-4 parts 1 and 2; AAC (including HE-AAC, HE-AACv2 and AAC-LC); MPEG-H; MPEG Surround; DTS, DTS-HD, DTS-LBR

Subtitles
• DVB DASH defines the carriage of XML-based subtitles, as per EBU-TT-D
• Downloadable fonts are supported
  – Particularly useful for non-Latin-based languages

Content protection
• DVB does not specify a DRM but does reference MPEG Common Encryption, which defines how content is encrypted and how license metadata can be carried.
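As a rough illustration of the DASH nomenclature summarised above (not taken from the webinar itself), the sketch below walks a minimal MPD with Python's standard xml.etree.ElementTree and lists each AdaptationSet's Representations. The inline MPD string, its values, and the DVB-DASH 2014 profile URN check are assumptions for illustration only.

```python
# Minimal sketch (not from the webinar): walk a DASH MPD and list
# AdaptationSets / Representations, flagging a DVB-DASH 2014 profile URN.
# The inline MPD and the URN value are illustrative assumptions.
import xml.etree.ElementTree as ET

MPD_NS = "{urn:mpeg:dash:schema:mpd:2011}"
DVB_2014_PROFILE = "urn:dvb:dash:profile:dvb-dash:2014"  # assumed URN string

mpd_xml = """<?xml version="1.0"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011"
     profiles="urn:dvb:dash:profile:dvb-dash:2014" type="static">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <Representation id="v1" bandwidth="1500000" codecs="avc1.640028"/>
      <Representation id="v2" bandwidth="4000000" codecs="hev1.1.6.L120.90"/>
    </AdaptationSet>
    <AdaptationSet mimeType="audio/mp4">
      <Representation id="a1" bandwidth="128000" codecs="mp4a.40.2"/>
    </AdaptationSet>
  </Period>
</MPD>"""

root = ET.fromstring(mpd_xml)
profiles = root.get("profiles", "")
print("DVB-DASH 2014 profile signalled:", DVB_2014_PROFILE in profiles)

for period in root.findall(f"{MPD_NS}Period"):
    for aset in period.findall(f"{MPD_NS}AdaptationSet"):
        print("AdaptationSet:", aset.get("mimeType"))
        for rep in aset.findall(f"{MPD_NS}Representation"):
            print("  Representation", rep.get("id"),
                  "bandwidth:", rep.get("bandwidth"),
                  "codecs:", rep.get("codecs"))
```

A real player would go on to resolve each Representation's segment URLs; this sketch stops at the MPD structure that the webinar slides name.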
ETSI TS 103 190-2 V1.2.1 (2018-02)
Technical Specification: Digital Audio Compression (AC-4) Standard; Part 2: Immersive and personalized audio.
Reference: RTS/JTC-043-2. Keywords: audio, broadcasting, codec, content, digital, distribution, object audio, personalization.
ETSI, 650 Route des Lucioles, F-06921 Sophia Antipolis Cedex, FRANCE. Tel.: +33 4 92 94 42 00, Fax: +33 4 93 65 47 16. Siret N° 348 623 562 00017 - NAF 742 C. Association à but non lucratif enregistrée à la Sous-Préfecture de Grasse (06) N° 7803/88.
Important notice: The present document can be downloaded from http://www.etsi.org/standards-search. The present document may be made available in electronic versions and/or in print. The content of any electronic and/or print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any existing or perceived difference in contents between such versions and/or in print, the only prevailing document is the print of the Portable Document Format (PDF) version kept on a specific network drive within ETSI Secretariat. Users of the present document should be aware that the document may be subject to revision or change of status. Information on the current status of this and other ETSI documents is available at https://portal.etsi.org/TB/ETSIDeliverableStatus.aspx. If you find errors in the present document, please send your comment to one of the following services: https://portal.etsi.org/People/CommiteeSupportStaff.aspx
Copyright notification: No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and microfilm, except as authorized by written permission of ETSI.
Dolby AC-4, Please Visit Dolby.Com/AC-4
For more information on Dolby AC-4, please visit Dolby.com/AC-4.

Integrating Personalized and Immersive Programming into Broadcast Operations
The diagram below outlines a representative end-to-end signal flow for live and post-produced content through the broadcast ecosystem.
[Diagram: end-to-end AC-4 signal flow. Production (outside broadcast): object audio processor, channel audio and object metadata, metadata analysis/generation, audio monitoring, ED2 or AC-4 encoding of PCM and metadata. Distribution (broadcaster and pay-TV operator): ingest/playout/ad servers, master control, mezzanine contribution encoding, emission and streaming encoders, mux/splicer, modulator, CDN and antenna paths. In the home: set-top box, TV, and tablet/smartphone with AC-4 decoding, headphone rendering and HDMI® formatting to an AVR/soundbar, speakers or headphones. Interchange: PCM, ITU-R BWAV ADM (MXF, IMF) packaging, ED2 encoding.]

Immersiveness
Dolby AC-4 natively supports immersive audio to move sound around the audience in three-dimensional space. Both channel-based and object-based immersive audio can be delivered to the home and played back on speakers or headphones.
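As a purely illustrative aside (not Dolby's API or bitstream format), the sketch below models the channel-based vs object-based distinction as simple data structures: a channel bed carries a fixed speaker layout, while an audio object carries its own positional metadata for the renderer to map onto whatever speakers or headphones are present. All class names and values are assumptions.

```python
# Conceptual sketch (not Dolby's API or format): channel-based audio is tied to
# a fixed speaker layout, while object-based audio pairs a mono source with
# positional metadata that the playback device renders to the local setup.
# All names and values here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ChannelBed:
    layout: str                       # e.g. "5.1": a fixed channel layout
    channels: List[str]               # one waveform per named speaker position

@dataclass
class AudioObject:
    name: str
    position: Tuple[float, float, float]  # (x, y, z) in the room, may vary over time
    gain_db: float = 0.0

@dataclass
class ImmersiveProgram:
    bed: ChannelBed
    objects: List[AudioObject] = field(default_factory=list)

program = ImmersiveProgram(
    bed=ChannelBed(layout="5.1", channels=["L", "R", "C", "LFE", "Ls", "Rs"]),
    objects=[
        AudioObject("dialogue", position=(0.0, 1.0, 0.0)),
        AudioObject("flyover", position=(-0.5, 0.5, 1.0), gain_db=-3.0),
    ],
)

# A renderer would map the bed to the available speakers and pan each object
# from its metadata; this only shows what conceptually travels with the program.
print(program.bed.layout, [o.name for o in program.objects])
```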
Digital Speech Processing— Lecture 17
Speech Coding Methods Based on Speech Models

Waveform Coding versus Block Processing
• Waveform coding: sample-by-sample matching of waveforms; coding quality measured using SNR
• Source modeling (block processing): block processing of the signal produces a vector of outputs for every block, using overlapped blocks (Block 1, Block 2, Block 3)

Model-Based Speech Coding
• We have carried waveform coding based on optimizing and maximizing SNR about as far as possible: bit-rate reductions on the order of 4:1 (i.e., from 128 kbps PCM to 32 kbps ADPCM) while achieving toll-quality SNR for telephone-bandwidth speech
• To lower the bit rate further without reducing speech quality, we need to exploit features of the speech production model, including source modeling, spectrum modeling, and the use of codebook methods for coding efficiency
• We also need a new way of comparing the performance of different waveform and model-based coding methods: an objective measure like SNR is not an appropriate measure for model-based coders, since they operate on blocks of speech and do not follow the waveform on a sample-by-sample basis; new subjective measures are needed that capture user-perceived quality, intelligibility, and robustness to multiple factors

Topics Covered in this Lecture
• Enhancements for ADPCM coders: pitch prediction, noise shaping
• Analysis-by-synthesis speech coders: multipulse linear prediction coder (MPLPC), code-excited linear prediction (CELP)
• Open-loop speech coders: two-state excitation
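The lecture measures waveform-coder quality with SNR. As a small hedged illustration (my own sketch, not from the lecture notes), here is how that objective measure behaves for a uniformly quantized test signal at a few bit depths; the test tone and bit depths are assumptions standing in for a real speech segment.

```python
# Minimal sketch (not from the lecture): SNR of a uniform quantizer, the
# objective quality measure used for waveform coders such as PCM/ADPCM.
# The test signal and bit depths are illustrative assumptions.
import numpy as np

def uniform_quantize(x, n_bits):
    """Uniform quantizer over the signal's own amplitude range."""
    levels = 2 ** n_bits
    x_max = np.max(np.abs(x))
    step = 2 * x_max / levels
    return np.clip(np.round(x / step) * step, -x_max, x_max)

def snr_db(x, x_hat):
    """Signal-to-noise ratio in dB between the original and coded signal."""
    noise = x - x_hat
    return 10 * np.log10(np.sum(x ** 2) / np.sum(noise ** 2))

fs = 8000                                 # telephone-bandwidth sampling rate
t = np.arange(fs) / fs
x = 0.6 * np.sin(2 * np.pi * 440 * t)     # stand-in for a speech segment

for bits in (8, 6, 4):
    print(f"{bits}-bit uniform PCM: SNR = {snr_db(x, uniform_quantize(x, bits)):.1f} dB")
```

Each extra bit adds roughly 6 dB of SNR here, which is exactly the sample-by-sample behaviour the lecture argues is the wrong yardstick for model-based coders.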
Overview of Technologies for Immersive Visual Experiences: Capture, Processing, Compression, Standardization and Display
Marek Domański, Poznań University of Technology, Chair of Multimedia Telecommunications and Microelectronics, Poznań, Poland.

Immersive visual experiences
• Arbitrary direction of viewing: 3DoF systems
• Arbitrary location of the viewer (virtual navigation, free-viewpoint television)
• Both: 6DoF systems, with a virtual viewer

Immersive video content
• Computer-generated
• Natural content: multiple cameras around a scene (also depth cameras, light-field cameras), or camera(s) located in the center of a scene (360-degree cameras)
• Mixed, e.g. wearable cameras

Video capture: common technical problems
• Synchronization of cameras: camera hardware needs to enable synchronization; shutter release error must be smaller than the exposure interval
• Frame rate: high frame rates are needed for head-mounted devices

Common task for immersive video capture: calibration
Camera parameter estimation:
• Intrinsic: the parameters of individual cameras; they remain unchanged by camera motion
• Extrinsic: related to camera locations in the real world; they change with camera motion (even slight motion!)
Out of scope of standardization; improved methods have been developed.

Depth estimation
• By video analysis, from at least 2 video sequences: computationally heavy, with huge progress recently
• By depth cameras (diverse products): infrared illumination of the scene; mostly limited resolution
Out of scope of standardization.

MPEG test video sequences
• Large collection of video sequences with depth information
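To make the intrinsic/extrinsic distinction in the calibration step concrete, here is a small illustrative sketch (not part of the cited overview; all matrix values are made up): a pinhole projection in which the intrinsic matrix K belongs to the individual camera and stays fixed, while the extrinsic pose (R, t) changes as soon as the camera moves.

```python
# Minimal sketch (not from the cited overview): pinhole projection of a 3-D
# point. K (intrinsics) is a property of the individual camera; R and t
# (extrinsics) encode its pose and change whenever the camera moves.
# All numeric values are illustrative assumptions.
import numpy as np

K = np.array([[1000.0,    0.0, 960.0],    # fx, skew, cx
              [   0.0, 1000.0, 540.0],    # fy, cy
              [   0.0,    0.0,   1.0]])

def project(point_world, R, t):
    """Project a world-space point to pixel coordinates."""
    p_cam = R @ point_world + t            # world -> camera frame (extrinsics)
    uvw = K @ p_cam                        # camera frame -> image plane (intrinsics)
    return uvw[:2] / uvw[2]                # perspective divide

point = np.array([0.5, 0.2, 5.0])          # a point 5 m in front of the rig

R0, t0 = np.eye(3), np.zeros(3)                 # camera at the origin
R1, t1 = np.eye(3), np.array([-0.1, 0.0, 0.0])  # same camera shifted by 10 cm

print("pixel in view 0:", project(point, R0, t0))
print("pixel in view 1:", project(point, R1, t1))  # K unchanged, extrinsics changed
```

Calibration, as described above, is the task of estimating K once per camera and re-estimating (R, t) whenever the rig moves.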
Communications/Information
Volume 7 — November 2008. Issue date: November 7, 2008.
Info Update is published by the Canadian Standards Association (CSA) eight times a year. It contains important information about new and existing standards, e.g., recently published standards, and withdrawn standards. It also gives you highlights of other activities and services. CSA offers a free online service called Keep Me Informed that will notify registered users when each new issue of Info Update is published. To register go to http://www.csa-intl.org/onlinestore/KeepMeInformed/PleaseIdentifyYourself.asp?Language=EN. To view the complete issue of Info Update visit http://standardsactivities.csa.ca/standardsactivities/default.asp?language=en.

Completed Projects / Projets terminés: New Standards, New Editions, Special Publications
Please note: The following standards were developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), and have been adopted by the Canadian Standards Association. These standards are available in Portable Document Format (PDF) only.
• CAN/CSA-ISO/IEC 7812-2:08, 2nd edition: Identification cards — Identification of issuers — Part 2: Application and registration procedures (Adopted ISO/IEC 7812-2:2007). $110
• CAN/CSA-ISO/IEC 7816-2:08, 1st edition: Identification cards — Integrated circuit cards — Part 2: Cards with contacts — Dimensions and location of the contacts (Adopted ISO/IEC 7816-2:2007). $60
• CAN/CSA-ISO/IEC 7816-13:08, 1st edition: Identification cards — Integrated circuit cards — Part 13: Commands for application management in a multi-application environment (Adopted ISO/IEC 7816-13:2007). $110
• CAN/CSA-ISO/IEC 8484:08, 1st edition: Information technology — Magnetic stripes on savingsbooks (Adopted ISO/IEC 8484:2007)
MPEG Surround Audio Coding on TI Floating-Point DSPs
Matthias Neusinger, Fraunhofer IIS, [email protected]

Agenda
• The Fraunhofer Institute for Integrated Circuits
• The MPEG Surround coding algorithm
• Audio coding for TI floating-point processors
• Conclusions

The "Fraunhofer Gesellschaft" (FhG)
• Largest private research organization in Europe
• Non-profit organization, founded in 1949
• 56 institutes in 40 locations in Germany (including Itzehoe, Rostock, Bremen, Hannover, Berlin, Braunschweig, Golm, Oberhausen, Magdeburg, Duisburg, Dortmund, Aachen, Schmallenberg, St. Augustin, Dresden, Euskirchen, Jena, Ilmenau, Chemnitz, Darmstadt, Würzburg, Kaiserslautern, St. Ingbert, Erlangen, Saarbrücken, Nuremberg, Karlsruhe, Pfinztal, Stuttgart, Freising, München, Freiburg, Holzkirchen)
• Offices in Europe, USA and Asia
• Permanent staff of 12,500, mainly scientists and engineers

Fraunhofer IIS, Institute for Integrated Circuits
• Founded in 1985, "home of mp3", staff of ~450 in 4 locations
• Audio and Multimedia cluster: more than 100 engineers in Erlangen; MPEG audio encoders and decoders; MPEG-4 Advanced Audio Coding (AAC, HE-AAC v2); MPEG Layer-3 (mp3); MPEG-4 video coding, AV streaming, virtual acoustics; digital rights management

The MPEG Surround Coding Algorithm
• Overview
• MPEG standardization
• Applications
• Technical description
• Features and modes
• Profiles and levels
• Summary

MPEG Surround overview
• Efficient
PXC 550 Wireless Headphones
Instruction Manual

Contents
• Important safety instructions
• The PXC 550 Wireless headphones
• Package includes
• Product overview
  – Overview of the headphones
  – Overview of LED indicators
  – Overview of buttons and switches
  – Overview of gesture controls
  – Overview of CapTune
• Getting started
  – Charging basics
  – Installing CapTune
  – Pairing the headphones
Surround Sound Processed by Opus Codec: a Perceptual Quality Assessment
28. Konferenz Elektronische Sprachsignalverarbeitung 2017, Saarbrücken. Franziska Trojahn, Martin Meszaros, Michael Maruschke and Oliver Jokisch, Hochschule für Telekommunikation Leipzig, Germany. [email protected]

Abstract: The article describes the first perceptual quality study of 5.1 surround sound that has been processed by the Opus codec standardised by the Internet Engineering Task Force (IETF). All listening sessions with up to five subjects took place in a slightly sound-absorbing laboratory, simulating living-room conditions. For the assessment we conducted a Degradation Category Rating (DCR) listening-opinion test according to the ITU-T P.800 recommendation, with stimuli for six channels at total bitrates between 96 kbit/s and 192 kbit/s as well as hidden references. A group of 27 naive listeners compared a total of 20 sound samples. The differences between uncompressed and degraded sound samples were rated on a five-point degradation category scale, resulting in a Degradation Mean Opinion Score (DMOS). The overall results show that the average quality correlates with the bitrate. The quality diverges for the individual test stimuli depending on the music characteristics. Under most circumstances, a bitrate of 128 kbit/s is sufficient to achieve acceptable quality.

1 Introduction
Nowadays, a high number of different speech and audio codecs are implemented in several kinds of multimedia applications, including audio/video entertainment, broadcasting and gaming. In recent years the demand for low-delay and high-quality audio applications, such as remote real-time jamming and cloud gaming, has been increasing. Therefore, current research objectives do not only include close-to-natural speech or audio quality, but also the requirements of low bitrates and minimum latency.
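As a hedged sketch of how DCR ratings on the five-point degradation scale turn into DMOS values, the snippet below averages per-condition scores and adds a normal-approximation confidence interval. The listener scores and condition names are invented for illustration; they are not the paper's data.

```python
# Minimal sketch (not the paper's data): averaging Degradation Category Rating
# (DCR) scores on the 5-point scale into a Degradation Mean Opinion Score
# (DMOS) per condition, with a normal-approximation 95% confidence interval.
import math

# Hypothetical ratings from a few listeners per coded condition
# (1 = very annoying ... 5 = degradation inaudible); values are illustrative only.
ratings = {
    "opus_96kbps":  [3, 4, 3, 3, 4, 3, 2, 4],
    "opus_128kbps": [4, 4, 5, 4, 3, 4, 4, 5],
    "opus_192kbps": [5, 4, 5, 5, 4, 5, 4, 5],
}

for condition, scores in ratings.items():
    n = len(scores)
    dmos = sum(scores) / n
    var = sum((s - dmos) ** 2 for s in scores) / (n - 1)   # sample variance
    ci95 = 1.96 * math.sqrt(var / n)
    print(f"{condition}: DMOS = {dmos:.2f} +/- {ci95:.2f} (n = {n})")
```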
(A/V Codecs) REDCODE RAW (.R3D) ARRIRAW
What is a Codec?
Codec is a portmanteau of either "Compressor-Decompressor" or "Coder-Decoder," which describes a device or program capable of performing transformations on a data stream or signal. Codecs encode a stream or signal for transmission, storage or encryption and decode it for viewing or editing. Codecs are often used in videoconferencing and streaming media solutions.
A video codec converts analog video signals from a video camera into digital signals for transmission. It then converts the digital signals back to analog for display. An audio codec converts analog audio signals from a microphone into digital signals for transmission. It then converts the digital signals back to analog for playing.
The raw encoded form of audio and video data is often called essence, to distinguish it from the metadata information that together make up the information content of the stream and any "wrapper" data that is then added to aid access to or improve the robustness of the stream.
Most codecs are lossy, in order to get a reasonably small file size. There are lossless codecs as well, but for most purposes the almost imperceptible increase in quality is not worth the considerable increase in data size. The main exception is if the data will undergo more processing in the future, in which case the repeated lossy encoding would damage the eventual quality too much.
Many multimedia data streams need to contain both audio and video data, and often some form of metadata that permits synchronization of the audio and video. Each of these three streams may be handled by different programs, processes, or hardware; but for the multimedia data stream to be useful in stored or transmitted form, they must be encapsulated together in a container format.
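To illustrate the lossy encode/decode round trip described above, here is a toy sketch: a simplified mu-law-style companding of an audio block to 8 bits, written for illustration rather than standards compliance. It shows that the decoded samples come back close to, but not bit-identical with, the originals, which is the essence of lossy coding.

```python
# Toy illustration (not a production codec): a lossy encode/decode round trip
# using mu-law-style companding to 8 bits. The decoded signal is close to,
# but not identical to, the original -- quality is traded for size.
import numpy as np

MU = 255.0

def encode(x):
    """Compress samples in [-1, 1] to 8-bit companded codes."""
    companded = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return np.round((companded + 1) / 2 * 255).astype(np.uint8)

def decode(codes):
    """Expand 8-bit companded codes back to floating-point samples."""
    companded = codes.astype(np.float64) / 255 * 2 - 1
    return np.sign(companded) * np.expm1(np.abs(companded) * np.log1p(MU)) / MU

x = 0.8 * np.sin(2 * np.pi * np.arange(480) / 48)   # toy audio block
x_hat = decode(encode(x))

print("max reconstruction error:", float(np.max(np.abs(x - x_hat))))  # small but non-zero
```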
IEC 62481-2, Edition 2.0 (2013-09)
International Standard: Digital living network alliance (DLNA) home networked device interoperability guidelines — Part 2: DLNA media formats. International Electrotechnical Commission.
Price code XH. ICS 35.100.05; 35.110; 33.160. ISBN 978-2-8322-0937-0.
Warning! Make sure that you obtained this publication from an authorized distributor. ® Registered trademark of the International Electrotechnical Commission.

Contents
• Foreword
• Introduction
• 1 Scope
• 2 Normative references
• 3 Terms, definitions and abbreviated terms
  – 3.1 Terms and definitions
  – 3.2 Abbreviated terms
  – 3.4 Conventions
A Multi-Frame PCA-Based Stereo Audio Coding Method
Applied Sciences (article). Jing Wang *, Xiaohan Zhao, Xiang Xie and Jingming Kuang. School of Information and Electronics, Beijing Institute of Technology, 100081 Beijing, China; [email protected] (X.Z.); [email protected] (X.X.); [email protected] (J.K.). * Correspondence: [email protected]; Tel.: +86-138-1015-0086. Received: 18 April 2018; Accepted: 9 June 2018; Published: 12 June 2018.

Abstract: With the increasing demand for high-quality audio, stereo audio coding has become more and more important. In this paper, a multi-frame coding method based on Principal Component Analysis (PCA) is proposed for the compression of audio signals, including both mono and stereo signals. The PCA-based method turns the input audio spectral coefficients into eigenvectors of covariance matrices and reduces the coding bitrate by grouping such eigenvectors into a smaller number of vectors. The multi-frame joint technique makes the PCA-based method more efficient and feasible. This paper also proposes a quantization method that utilizes Pyramid Vector Quantization (PVQ) to quantize the PCA matrices proposed in this paper with few bits. Parametric coding algorithms are also employed with PCA to ensure the high efficiency of the proposed audio codec. Subjective listening tests with Multiple Stimuli with Hidden Reference and Anchor (MUSHRA) have shown that the proposed PCA-based coding method is efficient at processing stereo audio.

Keywords: stereo audio coding; Principal Component Analysis (PCA); multi-frame; Pyramid Vector Quantization (PVQ)

1 Introduction
The goal of audio coding is to represent audio in digital form with as few bits as possible while maintaining the intelligibility and quality required for particular applications [1].
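As a rough sketch of the general idea named in the abstract (not the authors' actual algorithm, and without the PVQ quantization or parametric stages), the Python below groups several frames of spectral coefficients, computes the principal components of their covariance, and keeps only the leading components as the data that would be coded. Frame sizes and the synthetic input are assumptions.

```python
# Minimal sketch of the general idea (not the authors' algorithm): stack
# several frames of spectral coefficients, compute principal components of
# their covariance, and keep only the leading components to reduce the data
# that has to be coded. Frame sizes and the synthetic input are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_bins = 16, 64                      # a multi-frame group of spectra
frames = rng.standard_normal((n_frames, n_bins)) @ rng.standard_normal((n_bins, n_bins)) * 0.1

mean = frames.mean(axis=0)
centered = frames - mean
cov = centered.T @ centered / (n_frames - 1)   # covariance of the spectral bins

eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
basis = eigvecs[:, order[:8]]                  # keep the 8 leading components

coeffs = centered @ basis                      # what would be quantized and coded
reconstructed = coeffs @ basis.T + mean

err = np.linalg.norm(frames - reconstructed) / np.linalg.norm(frames)
print(f"kept {basis.shape[1]}/{n_bins} components, relative error = {err:.3f}")
```

In the paper's setting the retained coefficients and the basis would then be quantized with PVQ; this sketch stops at the dimensionality-reduction step.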