ITU-T G.711.1: Extending G.711 to Higher-Quality Wideband Speech


ITU-T STANDARDS

Yusuke Hiwasaki and Hitoshi Ohmuro, NTT Corporation

IEEE Communications Magazine • October 2009 (0163-6804/09/$25.00 © 2009 IEEE)

ABSTRACT

In March 2008 the ITU-T approved a new wideband speech codec called ITU-T G.711.1. This Recommendation extends G.711, the most widely deployed speech codec, to 7 kHz audio bandwidth and is optimized for voice over IP applications. The most important feature of this codec is that the G.711.1 bitstream can be transcoded into a G.711 bitstream by simple truncation. G.711.1 operates at 64, 80, and 96 kb/s, and is designed to achieve very short delay and low complexity. ITU-T evaluation results show that the codec fulfils all the requirements defined in the terms of reference. This article presents the codec requirements and design constraints, describes how standardization was conducted, and reports on the codec performance and its initial deployment.

INTRODUCTION

In the early years of voice communications, transmission bandwidth was limited, and the main technological focus at the time was to transmit voice at the best achievable quality given the bandwidth constraint. This led to using speech limited to a frequency range of 300 Hz to 3.4 kHz, today called narrowband speech. However, with the exponential growth of transmission bandwidth for both wired and wireless communications, broadband connections are now widely available. For fixed lines, the trend is to transport all information and services, including voice, video, and other data, in packet-based networks. One advantage of a packet network such as an Internet Protocol (IP)-based one is that it can adapt to various bit rates, which means that we are no longer bound to constant bit rates and band-limited audio. This means that the new generation of terminals can support richer services and functionalities. Consequently, speech coding algorithms can be designed with emphasis on factors such as low delay, low complexity, and, in particular, wider audio frequency bandwidth. We have seen the emergence of coders such as International Telecommunication Union — Telecommunication Standardization Sector (ITU-T) G.722, G.722.1, AMR-WB (also known as ITU-T G.722.2), G.729.1, and G.718.¹ Those coders encode conversational speech signals in the frequency range of 50 Hz to 7 kHz, called wideband speech, which is equivalent to audio signals conveyed in AM radio broadcasts. One of the most popular applications of voice over IP (VoIP) is remote audio-visual conferencing, where hands-free terminals are often used. In that case intelligibility becomes more important than when using handsets, because participants usually sit around a terminal at some distance from a loudspeaker. This is where wideband speech coders, which can reproduce speech at high fidelity and intelligibility, are particularly favored.

Today, the majority of fixed-line digital telecommunications terminals are equipped with ITU-T G.711 (log-compressed pulse code modulation [PCM]) capability. In fact, for communication using the Real-Time Transport Protocol (RTP) over IP networks, G.711 support is mandatory. Until wideband speech terminals completely replace narrowband ones, these two types of terminals will continue to coexist, meaning that the wideband ones must be capable of interoperating with those that carry only G.711. In an ordinary telecommunications scenario, the codec used during a session is negotiated between the terminals as the call is set up. However, there are cases where this may not be possible, for example, in call transfers and multipoint conferencing. Therefore, transcoding² between different types of bitstream must be performed by a bitstream translator at a gateway or a signal mixer at a multipoint control unit (MCU). This is problematic when those devices must accommodate a large number of lines, because transcoding usually requires high computational complexity and is likely to introduce quality degradation and additional delay. This can be an obstacle to the wider use of wideband voice-communication services.

To overcome this obstacle, the ITU-T has standardized a new speech-coding algorithm, G.711.1 [1]. This codec, approved in March 2008, is an extension of ITU-T G.711. It was initially studied under the name G.711-WB (wideband extension). The coding algorithm is designed to provide low-delay, low-complexity, high-quality wideband speech, addressing the transcoding problems with legacy (narrowband) terminals at a modest bit rate trade-off. The main feature of this extension is to give G.711 wideband scalability.³ It aims to achieve high-quality speech services over broadband networks, particularly for IP phone and multipoint speech conferencing, while enabling seamless interoperability with conventional terminals and systems equipped only with G.711.

In the next section one of the key applications of G.711.1, partial mixing, is described; the design constraints of G.711.1 are then detailed in the section after. The succeeding section describes how the standardization progressed, and then a brief overview of the codec algorithm is presented. The characterization results of G.711.1, the speech quality of the codec, are presented next. Finally, new extensions to G.711.1 and the status of the codec's deployment are discussed.

PARTIAL MIXING

One of the primary applications of G.711.1 is audio conferencing, and partial mixing [2] is a solution to contain its growing complexity. For such conferences there are two possible configurations, as shown in Fig. 1: one is a mesh connection, where all endpoints are connected to all others; the other is a star connection centered on a multipoint control unit (MCU). Mesh connections, such as those used in Skype, do not require a server but are restricted to conferences with a few endpoints. In a large-scale n-point conference, each endpoint needs to transmit and receive 2(n – 1) media bitstreams (counting both inbound and outbound streams). This is undesirable because n – 1 decoding computations would operate simultaneously, whereas only one decoder is required for star connections. In addition, each endpoint needs a sufficiently large transmission bandwidth, whereas in star connections only two (inbound and outbound) media bitstreams are required. Thus, mesh connections are suitable only for conferences involving a few endpoints. However, in star connections the computation now occurs in the MCU.⁴ The MCU has to decode all the bitstreams from the endpoints, mix the obtained signals, and then re-encode the mixed signal; for an n-point conference this implies n decoding and encoding operations at the MCU, in addition to the summation of the signals and the transmitting and receiving of 2n media streams. The number of mixed endpoints n might be limited to m (m < n, e.g., 3), selected on the active channels, but this would still require considerable computational complexity.

Figure 1. Configurations for connecting remote conferences: a mesh connection among locations A-D, and a star connection through an MCU.

Another issue contributing significantly to complexity is transcoding. Terminals with different coding capabilities require transcoding because various coding algorithms are used in VoIP systems. Usually this capability is implemented by decoding a bitstream to a linear PCM signal and then re-encoding it with another encoder. For interconnection between wideband and narrowband coders, transcoding requires a further intermediate step, down-/up-sampling, and this would increase the required computational complexity at MCUs even more. Another, less significant downside of transcoding is the accumulation of algorithmic delays caused by re-encoding and possibly down-/up-sampling; this can be problematic when using codecs working on a certain frame length.

These problems can be overcome by taking advantage of a subband scalable bitstream structure, because a signal can be reconstructed by decoding only part of the bitstream. In the partial mixing method, only the core bitstream (usually the lower band) is decoded and mixed, and the enhancement layers are not decoded, hence the name partial. Instead, one active endpoint is selected from all the endpoints, and its enhancement layers are redistributed to the other endpoints. To implement this hybrid approach, which combines redistribution and mixing, the mixer must judge which endpoint to select by detecting voice activity and/or detecting the endpoint with the largest signal power.

Figure 2 illustrates how a partial mixer works. There are three locations connected to an MCU; assume that location A is talking, and locations B and C are listening in this instance. The partial mixer performs conventional core (G.711) layer mixing, but for enhancement layers, it detects the speaker location (A) and selects a set of enhance-

Notes:

1. These ITU-T Recommendations can be freely downloaded: http://itu.int/rec/T-REC-G/
2. Transcoding is the conversion processing of one encoding format to another.
3. Scalability enables the best quality of service to be provided as the system load varies.
4. Note that there is another type of MCU, a forwarding bridge, that also sends media bitstreams in a star configuration, but multiplexes media bit-
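The partial-mixing procedure described above (mix only the decoded core layers, select one active endpoint, and forward its enhancement layers undecoded) can be sketched as follows. This is a minimal illustration, not the G.711.1 reference implementation: the power-based talker selection, the endpoint names, and the treatment of enhancement layers as opaque tokens are assumptions made for the example; a real mixer would also decode/re-encode the G.711 core and guard against overflow when summing.

```python
# Minimal sketch of partial mixing at an MCU. Core layers arrive here as
# decoded linear-PCM sample lists; enhancement layers stay opaque and are
# never decoded by the mixer.

def partial_mix(endpoints):
    """endpoints: {name: (core_samples, enh_layers)}.
    Returns {name: (mixed_core_for_that_listener, forwarded_enh_layers)}."""
    # 1. Select the active endpoint, e.g. by largest core-signal power.
    power = {name: sum(s * s for s in core)
             for name, (core, _) in endpoints.items()}
    active = max(power, key=power.get)

    out = {}
    for name in endpoints:
        # 2. Mix only the core layers of the *other* endpoints ("partial").
        others = [endpoints[o][0] for o in endpoints if o != name]
        mixed_core = [sum(col) for col in zip(*others)]
        # 3. Redistribute the active endpoint's enhancement layers untouched;
        #    the talker itself receives no enhancement layers back.
        enh = endpoints[active][1] if name != active else None
        out[name] = (mixed_core, enh)
    return out

# Location A talks loudly; B and C are nearly silent.
streams = {
    "A": ([90, -90, 90, -90], ["A-enh1", "A-enh2"]),
    "B": ([1, 0, -1, 0], ["B-enh1", "B-enh2"]),
    "C": ([0, 1, 0, -1], ["C-enh1", "C-enh2"]),
}
mixed = partial_mix(streams)
assert mixed["B"][1] == ["A-enh1", "A-enh2"]  # listeners get A's layers
assert mixed["A"][1] is None                  # the talker gets core only
assert mixed["B"][0] == [90, -89, 90, -91]    # A.core + C.core, sample-wise
```

The design point the sketch captures is that the MCU's per-conference cost drops from n wideband decode/encode cycles to n narrowband core operations plus a cheap forwarding decision.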
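The scalability property highlighted in the abstract, transcoding G.711.1 to G.711 by simple truncation, can be made concrete with the frame sizes implied by the stated bit rates. G.711.1 processes 5 ms frames, so the 64, 80, and 96 kb/s modes correspond to 40-, 50-, and 60-byte frames, with the first 40 bytes forming the G.711-compatible core. The layer layout below is a simplified sketch for illustration, not the normative bitstream format.

```python
# Sketch of G.711.1's layered-bitstream property: the 64 kb/s core layer is
# a G.711 (log-PCM) payload, so "transcoding" down to G.711 is pure
# truncation. Sizes assume the 5 ms frame; each enhancement layer adds 16 kb/s.

FRAME_MS = 5
CORE_BYTES = 40   # 64 kb/s * 5 ms = 320 bits
ENH_BYTES = 10    # 16 kb/s * 5 ms = 80 bits per enhancement layer

def bitrate_kbps(frame: bytes) -> int:
    """Bit rate implied by one 5 ms frame."""
    return len(frame) * 8 // FRAME_MS

def to_g711(frame: bytes) -> bytes:
    """Transcode a G.711.1 frame to plain G.711 by truncation."""
    return frame[:CORE_BYTES]

# A 96 kb/s G.711.1 frame: core plus two enhancement layers.
frame_96 = bytes(CORE_BYTES) + bytes(ENH_BYTES) + bytes(ENH_BYTES)
assert bitrate_kbps(frame_96) == 96
assert bitrate_kbps(to_g711(frame_96)) == 64  # a valid G.711 payload rate
```

Because no decoding or re-encoding takes place, a gateway facing a legacy terminal adds essentially no computational cost and no algorithmic delay, which is exactly the transcoding bottleneck the article describes.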
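The stream-count arithmetic behind the mesh-versus-star comparison is also easy to check. The helpers below (illustrative only; the function names are not from the article) reproduce the figures given in the text: 2(n - 1) streams per endpoint in a mesh, two per endpoint in a star, and 2n streams terminated at the MCU.

```python
# Media-stream counts for an n-point conference, following the mesh vs. star
# comparison in the text.

def mesh_streams_per_endpoint(n: int) -> int:
    # One inbound and one outbound stream to each of the n - 1 peers.
    return 2 * (n - 1)

def star_streams_per_endpoint(n: int) -> int:
    # Only one inbound and one outbound stream, both via the MCU.
    return 2

def star_streams_at_mcu(n: int) -> int:
    # The MCU sends one stream to and receives one stream from each endpoint.
    return 2 * n

# For a 10-point conference: a mesh endpoint handles 18 streams and must run
# 9 decoders simultaneously; a star endpoint handles just 2 streams, at the
# cost of concentrating the mixing work in the MCU.
assert mesh_streams_per_endpoint(10) == 18
assert star_streams_per_endpoint(10) == 2
assert star_streams_at_mcu(10) == 20
```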