ITU-T G.719: A New Low-Complexity Full-Band (20 kHz) Audio Coding Standard for High-Quality Conversational Applications


2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, October 18-21, 2009, New Paltz, NY

Minjie Xie* and Peter Chu
Polycom, Inc.
10 Mall Road, #110, Burlington, MA 01803, USA
[email protected]

Anisse Taleb and Manuel Briand
Ericsson Research – Multimedia Technologies
Färögatan 6, Kista, Sweden
{anisse.taleb, manuel.briand}@ericsson.com

* Minjie Xie was with Polycom, Inc., when this work was done. He is now with Huawei Technologies (USA).

ABSTRACT

This paper describes a new low-complexity full-band (20 kHz) audio coding algorithm which has recently been standardized by ITU-T as Recommendation G.719. The algorithm is designed to provide 20 Hz - 20 kHz audio bandwidth using a 48 kHz sample rate, operating at 32 - 128 kbps. The codec features very high audio quality and low computational complexity and is suitable for use in applications such as videoconferencing, teleconferencing, and streaming audio over the Internet. Subjective test results from the Optimization/Characterization phase of G.719 are also presented in the paper.

Index Terms— Audio coding, full-band, low complexity, adaptive time-frequency transform, fast lattice vector quantization

1. INTRODUCTION

In the hands-free videoconferencing and teleconferencing markets, there is strong and increasing demand for audio coding that provides the full human auditory bandwidth of 20 Hz to 20 kHz. This is because:

• Conferencing systems are increasingly used for more elaborate presentations, often including music and sound effects (e.g. animal sounds, musical instruments, vehicles, nature sounds) which occupy a wider audio band than speech. Presentations involve remote music education, playback of audio and video from DVDs and VCRs, audio/video clips from PCs, and elaborate audio-visual presentations from, for example, PowerPoint.

• Users perceive the bandwidth of 20 Hz to 20 kHz as representing the ultimate goal for audio bandwidth. The resulting market pressures are causing a shift in this direction, now that sufficient IP bitrate and audio coding technology are available to deliver this.

As with any audio codec for hands-free videoconferencing use, the requirements include:

• Low latency (to support natural conversation)
• Low complexity (to free cycles for video processing and other audio processing, and to reduce cost)
• High quality on all signal types

To meet the market need for such a full-band audio coding standard, ITU-T Q10/SG16 launched the standardization of a Low-Complexity Full-Band Audio Coding Extension to G.722.1 (G.722.1FB) in November 2006. The ITU-T G.722.1FB Qualification phase was completed in June 2007, and two candidates, submitted respectively by Ericsson AB and Polycom, Inc., were qualified. In September 2007, the two proponents announced their collaboration in an Optimization/Characterization phase and agreed to jointly develop a candidate codec. The joint candidate codec met all the requirements in the subjective quality tests for the G.722.1FB Optimization/Characterization phase in April 2008. In June 2008, the joint codec was adopted as ITU-T Recommendation G.719, which is the first full-band audio coding standard of ITU-T [1].

In this paper we first give an overview of the ITU-T G.719 codec, followed by a description of two important modules of the codec: the adaptive time-frequency transform and the quantization of transform coefficients. We also present an evaluation of the computational complexity and memory requirements of the codec. Finally, subjective test results from the Optimization/Characterization phase are summarized.

2. OVERVIEW OF THE G.719 CODEC

The G.719 codec is a low-complexity transform-based audio codec that provides an audio bandwidth of 20 Hz to 20 kHz at 32 - 128 kbps. The codec operates on frames of 20 ms, corresponding to 960 samples at a sampling rate of 48 kHz, and has an algorithmic delay of 40 ms. The codec features very high audio quality and extremely low computational complexity compared to other state-of-the-art audio coding algorithms.
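To make these figures concrete, the short sketch below works through the implied frame arithmetic. It is illustrative only; the constant names are this article's, not taken from the Recommendation.

    # Illustrative arithmetic for the frame configuration described above;
    # names are this article's, not taken from the G.719 specification.
    SAMPLE_RATE_HZ = 48_000
    FRAME_MS = 20

    frame_samples = SAMPLE_RATE_HZ * FRAME_MS // 1000   # 960 samples per 20 ms frame
    for bitrate_kbps in (32, 64, 128):
        bits_per_frame = bitrate_kbps * FRAME_MS         # kbit/s * ms = bits
        print(f"{bitrate_kbps} kbps -> {bits_per_frame} bits per {FRAME_MS} ms frame")
    # 32 kbps -> 640 bits, 64 kbps -> 1280 bits, 128 kbps -> 2560 bits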
2.1 Encoder

Figure 1: Block diagram of the G.719 encoder. (Its blocks include the transient detector, the adaptive transform, norm estimation, norm quantization and coding, norm adjustment, bit allocation, spectrum normalization, fast lattice vector quantization (FLVQ) and coding, noise level adjustment, and multiplexing of the transient signaling, norm indices, Huffman/FLVQ code indices, and noise level; the input signal is sampled at Fs = 48 kHz.)

A block diagram of the G.719 encoder is shown in Figure 1. The input signal, sampled at 48 kHz, is processed through a transient detector. Depending on the detection of a transient, indicated by a flag IsTransient, a high frequency resolution or a low frequency resolution transform is applied to the input frame. The adaptive transform is based on a modified discrete cosine transform (MDCT) [2] in the case of stationary frames. For transient frames, the MDCT is modified to obtain a higher temporal resolution without the need for additional delay and with very little overhead in complexity. Transient frames have a temporal resolution equivalent to 5 ms frames.

The obtained transform coefficients are grouped into bands of unequal lengths of 8, 16, 24, or 32 coefficients. As the bandwidth is 20 kHz, only 800 transform coefficients are used; the 160 transform coefficients representing frequencies above 20 kHz are ignored. The norm, or power, of each band is estimated, and the resulting spectral envelope, consisting of the norms of all bands, is scalar quantized and encoded. The coefficients are then normalized by the quantized norms.
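To illustrate this envelope/normalization stage, the following sketch groups a spectrum into bands of unequal length, estimates each band's norm as its RMS value, scalar-quantizes the norms in the log domain, and normalizes the coefficients by the quantized norms. The band layout, the quantizer step, and all names are assumptions of this article for illustration, not the normative G.719 values.

    import numpy as np

    # Simplified illustration of spectral-envelope estimation and coefficient
    # normalization; band sizes and the quantizer step are NOT the G.719 values.
    def split_into_bands(coeffs, band_sizes):
        """Group transform coefficients into bands of unequal length."""
        edges = np.cumsum([0] + list(band_sizes))
        return [coeffs[a:b] for a, b in zip(edges[:-1], edges[1:])]

    def encode_envelope(coeffs, band_sizes, step_db=3.0):
        """Estimate, quantize, and apply per-band norms (RMS used as the norm here)."""
        bands = split_into_bands(coeffs, band_sizes)
        norms = np.array([np.sqrt(np.mean(b ** 2) + 1e-12) for b in bands])
        indices = np.round(20.0 * np.log10(norms) / step_db).astype(int)  # log-domain scalar quantization
        q_norms = 10.0 ** (indices * step_db / 20.0)                      # quantized norms, as seen by the decoder
        normalized = np.concatenate([b / q for b, q in zip(bands, q_norms)])
        return indices, q_norms, normalized

    # 800 coded coefficients split into illustrative bands of 8, 16, 24, and 32.
    band_sizes = [8] * 16 + [16] * 12 + [24] * 8 + [32] * 9   # sums to 800
    coeffs = np.random.default_rng(0).standard_normal(800)    # stand-in spectrum
    indices, q_norms, normalized = encode_envelope(coeffs, band_sizes)
    print(len(indices), "band norms;", normalized.size, "normalized coefficients")

Because the encoder normalizes by the quantized (decoded) norms rather than the exact ones, the decoder can reproduce the same envelope and, as described below, the same bit allocation bit-exactly.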
The quantized norms are further adjusted based on adaptive spectral weighting and used as input for bit allocation. An adaptive bit-allocation scheme based on the quantized norms of the bands is used to assign the available bits in a frame among the bands. The number of bits assigned to each transform coefficient can be as large as 9 bits, depending on the input signal. The normalized transform coefficients are lattice vector quantized according to the bits allocated to each band. Huffman coding is applied to the quantization indices of both the coded norms and the transform coefficients. The norm of the non-coded transform coefficients is estimated, coded, and transmitted to the decoder.

2.2 Decoder

Figure 2: Block diagram of the G.719 decoder. (Its blocks include the demultiplexer recovering the transient signaling, norm indices, FLVQ indices, and noise level, followed by norm and FLVQ decoding, a spectral-fill generator, envelope shaping, and the inverse transform producing the audio signal at Fs = 48 kHz.)

A block diagram of the decoder is shown in Figure 2. The transient flag is decoded first; it indicates the frame configuration, i.e. stationary or transient. The spectral envelope is then decoded, and the same, bit-exact, norm adjustment and bit-allocation algorithms are used at the decoder to re-compute the bit allocation, which is essential for decoding the quantization indices of the normalized transform coefficients. After decoding the transform coefficients, the non-coded coefficients and the regenerated transform coefficients are mixed to form the normalized spectrum. The decoded spectral envelope is applied to the decoded full-band spectrum. Finally, the inverse transform is applied to recover the time-domain decoded signal. This is performed by applying either the inverse MDCT for stationary mode, or the inverse of the higher temporal resolution transform for transient mode.

3. ADAPTIVE TIME-FREQUENCY TRANSFORM

The adaptive time-frequency transform is based on the detection of a transient. In the case of stationary signals, the transform has a high frequency resolution which is able to efficiently represent stationary sounds. In the case of transient sounds, the time-frequency transform increases its time resolution and allows a better representation of the rapid changes in the input signal characteristics. These two modes of operation share a common buffering and windowing module, and the switching from one mode of operation to the other is instantaneous. Thus no additional look-ahead is needed for the transient detection. This allows the codec to offer selectable time resolution at low complexity and with zero additional delay.

If the input signal is detected as stationary, a type IV discrete cosine transform (DCT-IV) [2] is applied to the output x̃(n) of the time domain aliasing (TDA) operation. The DCT-IV of the time-aliased signal is defined by the following equation:

    y(k) = Σ_{n=0}^{959} x̃(n) cos[ (π/960) (n + 1/2)(k + 1/2) ],   k = 0, 1, …, 959.   (1)

The resulting signal y(k) represents the transform coefficients of the input frame. It should be noted that in stationary mode, the cascade of windowing, TDA, and DCT-IV is equivalent to applying the modulated lapped transform (MLT) [2].

If the signal is detected as a transient, a further re-ordering of the time-domain aliased signal is performed. Without this re-ordering, the basis functions of the resulting filter bank would have incoherent time and frequency responses. The re-ordering operation consists of shuffling the upper and lower halves of the TDA output signal x̃(n) as follows:

    v(n) = x̃(959 − n),   n = 0, 1, …, 959.   (2)

This re-ordering is only conceptual, and in reality no computations are involved. Higher time resolution is obtained by zero-padding the signal v(n) and dividing the resulting signal into four overlapped sub-frames of equal length. The amount of zero-padding is equal to 120 samples on each side of the signal. The segments are 50% overlapped and each segment has a length of 480. The two inner segments are post-windowed using a sine window of length 480, while the windows for the outer segments are constructed using half a sine window. Each resulting post-windowed segment is further processed by applying the MDCT.
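To make the two transform paths concrete, the sketch below implements equation (1) as a direct DCT-IV and the transient-mode preparation of equation (2): reordering, zero-padding by 120 samples per side, and slicing into four 50%-overlapped 480-sample segments. It is a structural illustration only; the O(N²) transform and the exact shape chosen here for the outer-segment windows are simplifications made by this article, not the optimized, normative G.719 processing.

    import numpy as np

    N = 960  # frame length after the time domain aliasing (TDA) operation

    def dct_iv(x):
        """Direct DCT-IV of equation (1): y(k) = sum_n x(n) cos[(pi/N)(n+1/2)(k+1/2)].
        O(N^2) reference form, not the fast transform used in the codec."""
        n = np.arange(len(x))
        k = n[:, None]
        return np.cos(np.pi / len(x) * (n + 0.5) * (k + 0.5)) @ x

    def transient_segments(x_tda):
        """Reorder per equation (2), zero-pad by 120 on each side, and split into
        four 50%-overlapped segments of length 480."""
        v = x_tda[::-1]                                               # v(n) = x_tda(959 - n)
        padded = np.concatenate([np.zeros(120), v, np.zeros(120)])    # length 1200
        seg_len, hop = 480, 240
        segments = [padded[i * hop : i * hop + seg_len] for i in range(4)]
        sine = np.sin(np.pi * (np.arange(seg_len) + 0.5) / seg_len)   # full sine window
        # Inner segments get the full sine window; the outer segments use half a
        # sine window. The flat half assumed here for the outer windows is this
        # article's simplification, not the normative definition.
        windows = [np.concatenate([np.ones(hop), sine[hop:]]), sine, sine,
                   np.concatenate([sine[:hop], np.ones(hop)])]
        return [s * w for s, w in zip(segments, windows)]

    x_tda = np.random.default_rng(1).standard_normal(N)   # stand-in for the TDA output
    y_stationary = dct_iv(x_tda)                           # stationary-mode coefficients
    segs = transient_segments(x_tda)                       # four post-windowed 480-sample segments
    print(y_stationary.shape, [s.shape for s in segs])

Note how the segment layout works out exactly: 960 samples plus 2 × 120 zeros give 1200 samples, which four 480-sample segments with a 240-sample hop cover without remainder.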