
Cairo University Faculty of Engineering Electronics and Electrical Communications Department

Professional Masters Program – Major

Part 1: Voice Over IP (VoIP)

Dr. Wagdy Anis Aziz, Adjunct Doctor
Senior Manager, Core Network Support, Mobinil
[email protected] | +201222201073

Mobile Networks Evolution

EVOLUTION OF PLATFORMS TOWARD SMART COMMUNICATIONS

Mobile Networks Architecture Evolution

Mobile Networks Evolution – 3GPP R99, R4 and R5

Mobile Networks Evolution – 3GPP R7 and R8

EPS – 3GPP Architecture Domains

• From an operator’s perspective, in order to provide a data-only mobile broadband service, the infrastructure must be upgraded to EPS.

• EPS provides components that allow existing 2G/3G access networks to utilize EPC components.

• For those incumbent 2G/3G operators, the existing CS network can provide access to voice calls in the short term, but the deployment of IMS in conjunction with EPS would provide an All-IP network with access to services.

• Our focus here is how EPS is supporting voice services.

• There are two fundamentally different ways that voice services can be realized for LTE users; using circuit-switched or IP Multimedia Subsystem (IMS) technologies.

• Voice services based on circuit-switched technology: Circuit-Switched Fallback (CSFB)

• Voice services with IMS technology: Single-Radio Voice Call Continuity (SRVCC)

Moving to a Full IP Network

Why IP Networks?

1. Cost reduction - There can be real savings in long-distance telephone costs, which is extremely important to most companies, particularly those with international markets.

2. Simplification - An integrated voice/data network allows more standardization and reduces total equipment needs.

3. Consolidation - The ability to eliminate points of failure, consolidate accounting systems and combine operations is obviously more efficient.

4. Advanced Applications - The long-run benefits of IP include support for multimedia and multilevel applications, something today's telephone system cannot compete with.

Migration of all 2G/3G network interfaces to IP

Interface migration status per vendor, with main benefits:

3G interfaces:

• IuCS (RNC >> CS core)
  - Huawei: Validated; NSN: Validated in lab, live trial in May; ALU: trial planned in June
  - Important for the implementation of the Iu-Flex feature, which is needed for the completion of the MSC pool project.

• IuPS (RNC >> PS core)
  - Huawei: Validated and deployed; NSN: Validated and deployed; ALU: Validated and deployed
  - Saving on the transport side due to better utilization of IuPS pooled BW.
  - Saving on the PaCo side due to better SGSN resource utilization.

• Iur (RNC >> RNC)
  - Huawei: Validated and deployed; NSN: Validated and deployed; ALU: No need
  - Saving on the TX side due to better utilization of pooled BW.

• Iub Dual Stack (NodeB >> RNC)
  - Huawei: Validated and deployed; NSN: Validated and deployed; ALU: Validated
  - Saving on the TX side due to higher transmission efficiency.
  - Allows reaching higher throughput per site by using the IP BW offered by MBH.

• Iub Full IP (NodeB >> RNC)
  - Huawei: trial planned in May; NSN: trial planned in June; ALU: trial planned in June
  - More savings on the TX side.
  - Voice and data sharing the same pool of transmission resources allows higher throughput during low voice traffic periods.

2G interfaces:

• A (BSC >> CS core)
  - Huawei: live trial ongoing; NSN: trials planned in May; ALU: trial planned after B12 upgrade
  - Saving (CAPEX & OPEX) on the BSS side due to removal of the TC.
  - Better voice quality after removing transcoding and enabling TrFO calls.
  - Important for the implementation of the A-Flex feature, which is needed for the completion of the MSC pool project.

• Gb (BSC >> PS core)
  - Huawei: live trial ongoing; NSN: trial planned in May; ALU: Validated
  - Offers higher data throughput by better utilization of the pooled resources of the packet core and transport networks.

• Abis (BTS >> BSC)
  - Huawei: No plans yet; NSN: No plans yet; ALU: No plans yet
  - Saving on the TX side due to statistical multiplexing (but needs the introduction of new synchronization methods).

Voice Over Internet Protocol (VoIP)

Topics

• What is VoIP?
• Why is VoIP Attractive?
• How Does it Work?
• Voice Coding
• VoIP Signaling Standards
• QoS Impairment Factors
• QoS Measurement Methods
• Design of VoIP Network Using SIP

VoIP Main Factors

• VoIP Codecs – G series, AMR; vocoder (VOice enCODER).

• VoIP Signaling Protocols – H.323, SIP, H.248, Megaco, SIGTRAN, BICC.

• VoIP QoS Impairment Parameters – Delay, Packet Loss, Jitter.

• VoIP QoS Measurement Methods – Subjective Methods (MOS) and Objective Methods (PESQ, E-Model).

What is VoIP?

• “VoIP” stands for Voice over Internet Protocol.

• VoIP is a technique that encodes voice at low bit rates and routes the relatively low-bandwidth signals as packetized "data" over dedicated transmission facilities or the "Internet" using the Internet Protocol.

• VoIP is the ability to make telephone calls and send faxes over IP-based data networks with a suitable quality of service (QoS) and superior cost/benefit.

Why is VoIP Attractive?

1. Cost reduction - As described, there can be real savings in long-distance telephone costs, which is extremely important to most companies, particularly those with international markets.

2. Simplification - An integrated voice/data network allows more standardization and reduces total equipment needs.

3. Consolidation - The ability to eliminate points of failure, consolidate accounting systems and combine operations is obviously more efficient.

4. Advanced Applications - The long-run benefits of VoIP include support for multimedia and multilevel applications, something today's telephone system cannot compete with.

Introduction to VoIP Codecs

• In VoIP applications, the voice call is the mandatory service even when a video session is enabled. A VoIP tool (e.g., Skype) normally provides many voice codecs, which can be selected or updated manually or automatically.

• Typical voice codecs used in VoIP include ITU-T standards such as 64 kb/s G.711 PCM, 8 kb/s G.729 and 5.3/6.3 kb/s G.723.1; 3GPP standards such as AMR; open-source codecs such as iLBC; and proprietary codecs such as Skype’s SILK codec, which has variable bit rates in the range of 6 to 40 kb/s.

• Some codecs can only operate at a fixed bit rate, whereas many advanced codecs can have variable bit rates which may be used for adaptive VoIP applications to improve voice quality.

• Voice codecs or speech codecs are based on different speech compression techniques which aim to remove redundancy from the speech signal to achieve compression and to reduce transmission and storage costs.

• In practice, speech compression codecs are normally compared with the 64 kb/s PCM codec, which is regarded as the reference for all speech codecs. Speech codecs with the lowest data rates (e.g., 2.4 or 1.2 kb/s vocoders) are used mainly in secure communications.

• In general, the higher the speech bit rate, the higher the speech quality and the greater the bandwidth and storage requirements.

• In practice, it is always a trade-off between bandwidth utilization and speech quality.
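Since 64 kb/s PCM is the reference point, a codec's compression ratio follows directly from its bit rate. A minimal sketch (the function name is illustrative, not from the lecture):

```python
# Compression ratio of a speech codec relative to the 64 kb/s PCM reference.
PCM_REFERENCE_KBPS = 64.0

def compression_ratio(codec_kbps: float) -> float:
    """Ratio of the PCM reference rate to the codec's bit rate."""
    return PCM_REFERENCE_KBPS / codec_kbps

# G.729 at 8 kb/s compresses 8x; an 0.8 kb/s LPC vocoder reaches 80x.
print(compression_ratio(8))    # 8.0
print(compression_ratio(0.8))  # 80.0
```

The 80x figure matches the LPC example discussed later in this section.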

Speech Compression and Coding Techniques

• Speech compression aims to remove redundancy in speech representation to reduce transmission bandwidth and storage space (and further to reduce cost).

There are in general three basic speech compression techniques, which are:

1- Waveform-Based Coding

2- Parametric-Based Coding

3- Hybrid Coding Techniques.

Speech Compression and Coding Techniques (1)

1- Waveform-Based Coding

• As the name implies, waveform-based speech compression mainly removes redundancy in the speech waveform and reconstructs the waveform at the decoder side as closely as possible to the original speech waveform.

• Waveform-based speech compression techniques are simple and normally low in implementation complexity, but their compression ratios are also low. The typical bit rate range for waveform-based speech compression coding is from 64 kb/s down to 16 kb/s. At bit rates lower than 16 kb/s, the quantization error for waveform-based speech compression coding is too high, and this results in lower speech quality. Typical waveform-based speech compression codecs are PCM and ADPCM (Adaptive Differential PCM).

Speech Compression and Coding Techniques (1)

1- Waveform-Based Coding

• Typical ones are Pulse Code Modulation (PCM) and Adaptive Differential PCM (ADPCM).

• PCM uses non-uniform quantization, with finer quantization steps for small speech signals and coarser quantization steps for large speech signals (logarithmic compression). Statistics have shown that small speech signals account for a higher percentage of overall speech. Smaller quantization steps give lower quantization error, and thus a better Signal-to-Noise Ratio (SNR) for PCM coding.

• There are two PCM codecs, namely PCM μ-law which is standardized for use in North America and Japan, and PCM A-law for use in Europe and the rest of the world. ITU-T G.711 was standardized by ITU-T for PCM codecs in 1988.

• For both PCM A-law and μ-law, each sample is coded using 8 bits (compressed from 16-bit linear PCM data per sample), this yields the PCM transmission rate of 64 kb/s when 8 kHz sample rate is applied (8000 samples/s × 8 bits/sample = 64 kb/s). 64 kb/s PCM is normally used as a reference point for all other speech compression codecs.
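The logarithmic companding described above can be sketched with the continuous μ-law formula. This is a simplified illustration, assuming the textbook continuous curve; real G.711 μ-law uses a segmented 8-bit approximation of it:

```python
import math

MU = 255  # mu-law parameter (G.711 mu-law, North America/Japan)

def mulaw_compress(x: float) -> float:
    """Logarithmic companding of a normalized sample x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_encode(x: float) -> int:
    """Uniformly quantize the companded value to an 8-bit code (0..255)."""
    return int(round((mulaw_compress(x) + 1) / 2 * 255))

# Small signals occupy a disproportionately large part of the companded
# range, so they get finer quantization steps than large signals.
print(mulaw_compress(0.01))  # a 1% amplitude input maps to ~23% of full scale
```

At 8 bits per companded sample and 8000 samples/s, this reproduces the 64 kb/s rate computed above.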

• ADPCM, proposed by Jayant in 1974 at Bell Labs, was developed to further compress the PCM codec based on the correlation between adjacent speech samples. The codec consists of an adaptive quantizer and an adaptive predictor.

Speech Compression and Coding Techniques (1)

1- Waveform-Based Coding

• If an ADPCM sample is coded into 4 bits, the produced ADPCM bit rate is 4 × 8 = 32 kb/s. This means that one PCM channel (at 64 kb/s) can transmit two ADPCM channels at 32 kb/s each. If an ADPCM sample is coded into 2 bits, then the ADPCM bit rate is 2 × 8 = 16 kb/s, and one PCM channel can transmit four ADPCM channels at 16 kb/s each. ITU-T G.726 defines ADPCM bit rates of 40, 32, 24 and 16 kb/s, which correspond to 5, 4, 3 and 2 bits of coding for each ADPCM sample.
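The bits-per-sample arithmetic above is easy to check in code (a trivial sketch; names are illustrative):

```python
SAMPLE_RATE_HZ = 8000  # telephony sampling rate

def adpcm_bit_rate_kbps(bits_per_sample: int) -> float:
    """ADPCM bit rate = bits per sample x 8000 samples/s."""
    return bits_per_sample * SAMPLE_RATE_HZ / 1000

# G.726 modes: 5, 4, 3, 2 bits/sample -> 40, 32, 24, 16 kb/s.
for bits in (5, 4, 3, 2):
    print(bits, "bits/sample ->", adpcm_bit_rate_kbps(bits), "kb/s")
```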

• The higher the ADPCM bit rate, the higher the number of quantization levels, the lower the quantization error, and thus the better the voice quality. This is why the quality of 40 kb/s ADPCM is better than that of 32 kb/s, and the quality of 24 kb/s ADPCM is better than that of 16 kb/s.

Speech Compression and Coding Techniques (2)

2- Parametric-Based Coding

• Parametric-based coding is based on the principles of how speech is produced.

• Parametric compression only sends the relevant parameters of speech production to the receiver side and reconstructs the speech from the speech production model; thus, a high compression ratio can be achieved. The most typical example of parametric compression is Linear Prediction Coding (LPC), proposed by Atal in 1971 at Bell Labs. It was designed to emulate the human speech production mechanism, and the compression can reach bit rates as low as 800 bit/s (the compression ratio reaches 80 when compared to 64 kb/s PCM). It normally operates at bit rates from 4.8 to 1.2 kb/s. LPC-based speech codecs can achieve a high compression rate; however, the voice quality is also low.

• Compared to waveform-based codecs, parametric-based codecs are higher in implementation complexity, but can achieve better compression ratio.

• A typical parametric codec is the Linear Prediction Coding (LPC) vocoder, which has a bit rate from 1.2 to 4.8 kb/s and is normally used in secure wireless communications systems when transmission bandwidth is very limited.

Speech Compression and Coding Techniques (3)

3- Hybrid Coding Techniques

• Hybrid Coding Techniques were proposed to combine the features of both waveform- based and parametric-based coding (and hence the name of hybrid coding). It keeps the nature of parametric coding which includes vocal tract filter and pitch period analysis, and voiced/unvoiced decision.

• Instead of using an impulse period train to represent the excitation signal for voiced speech segments, it uses a waveform-like excitation signal for voiced, unvoiced or transition (containing both voiced and unvoiced) speech segments.

• Many different techniques are explored to represent waveform-based excitation signals such as multi-pulse excitation, codebook excitation and vector quantization.

• The most well-known one, Code-Excited Linear Prediction (CELP), has created huge success for hybrid speech codecs in the range of 4.8 kb/s to 16 kb/s for mobile/wireless/satellite communications, achieving toll quality (MOS over 4.0) or communications quality (MOS over 3.5).

• Almost all modern speech codecs (such as G.729, G.723.1, AMR, iLBC and SILK) belong to hybrid compression coding, with the majority of them based on CELP techniques.

Speech Compression and Coding Techniques (3)

The typical hybrid compression codecs include the following, from several standardization bodies such as the International Telecommunication Union, Telecommunication Standardization Sector (ITU-T) and the European Telecommunications Standards Institute (ETSI):

• LD-CELP: Low Delay CELP, used in ITU-T G.728 at 16 kb/s.

• CS-ACELP: Conjugate-Structure Algebraic-Code-Excited Linear Prediction, used in ITU-T G.729 at 8 kb/s.

• RPE/LTP: Regular Pulse Excitation/Long Term Prediction, used in ETSI GSM Full-Rate (FR) at 13 kb/s.

• VSELP: Vector Sum Excited Linear Prediction: ETSI GSM Half-Rate (HR) at 5.6 kb/s.

• ACELP: Algebraic CELP, used in ETSI GSM Enhanced Full-Rate (EFR) at 12.2 kb/s and ETSI AMR from 4.75 to 12.2 kb/s.

• ACELP/MP-MLQ: Algebraic CELP/Multi-Pulse Maximum Likelihood Quantization, used in ITU-T G.723.1 at 5.3/6.3 kb/s.

Speech Analysis & Synthesis

Two disciplines play a role:

• Speech analysis: is that portion of voice processing that converts speech to digital forms suitable for storage on computer systems and transmission on digital (data or telecommunications) networks. Speech analysis processes are also called digital speech encoding (or coding).

• Speech synthesis: is that portion of voice processing that reconverts speech data from a digital form to a form suitable for human usage. These functions are essentially the inverse of speech analysis. Speech synthesis is also called speech decoding.

Major Techniques Used in Speech Coding

 Waveform Coding (Analysis/Synthesis) in the Time Domain: G.711 PCM and G.726/G.727 ADPCM

 Vocoding (Analysis/Synthesis) in the Frequency Domain

 Hybrid Coders that combine the characteristics of the two main types: G.728 LD-CELP / G.729(a) CS-ACELP / G.723.1 ACELP

Voice quality and bit rate for the three types of coder

Major Parameters of Standard Codecs

Codec       | Type           | Bit rate (kb/s) | Frame (ms) | Look-ahead (ms) | Algor. delay (ms) | Origin
G.711       | PCM            | 64              | 0.125      | 0               | 0.125             | ITU-T
G.726/G.727 | ADPCM          | 16/24/32/40     | 0.125      | 0               | 0.125             | ITU-T
G.728       | LD-CELP        | 16              | 0.625      | 0               | 0.625             | ITU-T
G.729(a)    | CS-ACELP       | 8               | 10         | 5               | 15                | ITU-T
G.723.1     | ACELP / MP-MLQ | 5.3 / 6.3       | 30         | 7.5             | 37.5              | ITU-T
GSM-FR      | RPE-LTP        | 13              | 20         | 0               | 20                | ETSI
GSM-HR      | VSELP          | 5.6             | 20         | 0               | 20                | ETSI

Major Parameters of Standard Codecs (AMR)

AMR rates from 3GPP: 4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2 and 12.2 kbit/s.

• Sampling frequency 8 kHz/13-bit (160 samples for 20 ms frames).
• AMR utilizes Voice Activity Detection (VAD) and Comfort Noise Generation (CNG) to reduce bandwidth usage during silence periods.
• Algorithmic delay is 20 ms per frame.
• The complexity of the algorithm is rated at 5, using a relative scale where G.711 is 1 and G.729a is 15.
• PSQM testing under ideal conditions yields Mean Opinion Scores of 4.14 for AMR (12.2 kbit/s), compared to 4.45 for G.711 (µ-law).
• PSQM testing under network stress yields Mean Opinion Scores of 3.79 for AMR (12.2 kbit/s), compared to 4.13 for G.711 (µ-law).
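With a 20 ms frame, each AMR mode maps directly to a fixed number of speech bits per frame. A quick sketch (mode list taken from the slide; names are illustrative):

```python
FRAME_MS = 20  # AMR algorithmic frame length

AMR_MODES_KBPS = (4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2, 12.2)

def bits_per_frame(rate_kbps: float, frame_ms: float = FRAME_MS) -> float:
    """Speech bits in one frame: rate (kbit/s) x frame length (ms)."""
    return rate_kbps * frame_ms

# e.g. the 12.2 kbit/s mode carries 244 speech bits per 20 ms frame.
for rate in AMR_MODES_KBPS:
    print(f"{rate} kbit/s -> {bits_per_frame(rate):g} bits/frame")
```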

Narrowband to Fullband Speech Audio Compression

• In the last sections, we mainly discussed Narrowband (NB) speech compression, aimed at the speech spectrum from 0 to 4 kHz. Not only used in VoIP systems, this 0 to 4 kHz narrowband speech, covering the traditional telephone speech frequency range of 300 Hz to 3400 Hz, has also been used in digital telephony in the Public Switched Telephone Network (PSTN).

• In VoIP and mobile applications, there is a trend in recent years to use Wideband (WB) speech to provide high fidelity speech transmission quality. For WB speech, the speech spectrum is expanded to 0–7 kHz, with sampling rate at 16 kHz.
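The band/sampling-rate pairs follow the Nyquist criterion (sampling rate at least twice the audio bandwidth). A small sketch; the NB and WB figures come from the text, while the Super-wideband and Fullband figures are assumed from common ITU-T usage since the summary table is not reproduced here:

```python
# Nyquist: sampling rate must be at least twice the audio bandwidth.
BANDS_KHZ = {          # band name -> (audio bandwidth, sampling rate) in kHz
    "Narrowband": (4, 8),
    "Wideband": (7, 16),
    "Super-wideband": (14, 32),   # assumed, common ITU-T figure
    "Fullband": (20, 48),         # assumed, common ITU-T figure
}

for name, (bandwidth, fs) in BANDS_KHZ.items():
    assert fs >= 2 * bandwidth    # Nyquist criterion holds for each band
    print(f"{name}: {bandwidth} kHz audio -> {fs} kHz sampling")
```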

• Compared to 0–4 kHz narrowband speech, wideband speech has more high-frequency components and higher speech quality.

• There are currently three wideband speech compression methods which have been used in different wideband speech codecs standardized by ITU-T or ETSI. They are:

1. Waveform compression based on sub-band (SB) ADPCM: such as ITU-T G.722.

2. Hybrid compression based on CELP: such as AMR-WB or ITU-T G.722.2.

3. Transform compression coding: such as ITU-T G.722.1.

Narrowband to Fullband Speech Audio Compression

• The above table summarizes the Narrowband, Wideband, Super-wideband and Fullband speech/audio compression coding basic information, including signal bandwidth, sampling rate, typical bit rate range and standards examples.

• Example: WB-AMR is now used in 3G mobile networks to enhance voice quality.

Narrowband to Fullband Speech Audio Compression - Summary

• We have discussed key narrowband to fullband speech compression codecs standardized by ITU-T, ETSI and IETF. We now summarize them in the following table, which includes each codec’s basic information: the standardization body involved, the year it was standardized, codec type (Narrowband (NB), Wideband (WB), Super-wideband (SWB) or Fullband (FB)), bit rate (kb/s), length of speech frame (ms), coded bits per sample or per frame, look-ahead time (ms), and the coding’s algorithmic delay (ms).

• From this table, you should be able to see the historic development of speech compression coding standards (from 64 kb/s, 32 kb/s, 16 kb/s, 8 kb/s to 6.4/5.3 kb/s) for achieving high compression efficiency, the mobile codecs' development from GSM to AMR for 2G and 3G applications, the development from single-rate codec, dual-rate codec and 8-mode codec to variable-rate codec for achieving high application flexibility, and the trend from narrowband (NB) codecs to wideband (WB) codecs for achieving high speech quality (even for High Definition voice).

Narrowband to Fullband Speech Audio Compression - Summary

• This development has made speech compression codecs more efficient and more flexible for many different applications including VoIP.

• In the table, the columns on coded bits per sample/frame and speech frame length for each codec will help you to understand payload size and to calculate VoIP bandwidth, which will be covered with the RTP transport protocol.

• The columns on look-ahead time and the codec’s algorithmic delay will help to understand codec delay and VoIP end-to-end delay, a key QoS metric, which will be discussed in detail in the next part.

• It has to be mentioned that many VoIP phones (hardphones or softphones) have incorporated many different NB and even WB codecs. How to negotiate which codec to use at each VoIP terminal, and how to change the codec/mode/bit rate on the fly during a VoIP session, will be discussed with SIP/SDP signaling.

How does it work?

• VoIP digitizes voice and shoots it across the Internet.

• A conversation is digitized and encapsulated into packets.

• Packets are sent across the Internet, re-assembled at the destination and reconverted back into audio.

• Across the Internet packets can use many different routes (Packet Switching), whereas in a traditional telephone call a single dedicated circuit is required for each call (Circuit Switching).

• The routers that route traffic on the Internet are a fraction of the cost of switches on traditional long-distance phone networks. All this means cheaper phone calls.

Voice To/From IP

Sender side (analog voice >> IP network): CODEC (analog-to-digital conversion), compress, create voice datagram, add headers (RTP, UDP, IP, etc.).

Receiver side (IP network >> analog voice): process header, re-sequence, decompress, CODEC (digital-to-analog conversion).

Speech Codecs (Vocoders)

A VoIP telephone call

The most common standardized encoding algorithms, with their coding rates and speech quality.

Vocoder Attributes

• Vocoder speech quality is a function of bit rate, complexity, and processing delay. There usually is a strong interdependence between all these attributes and they may have to be traded off against each other.

• For example, low-bit-rate vocoders tend to have more delay than higher-bit-rate vocoders. Low-bit-rate vocoders also require higher VLSI complexity to implement. As might be expected, low-bit-rate vocoders often have lower speech quality than the higher-bit-rate vocoders.

Vocoder Attributes

1. Bit Rate:
• Most vocoders operate at a fixed bit rate regardless of the input signal characteristics; however, the goal is to make the vocoder variable-rate. For simultaneous voice and data applications, a compromise is to create a silence compression algorithm, as shown in the table below.

2. Delay: The delay in a speech coding system usually consists of two major components:
• Frame delay
• Speech processing delay

Vocoder Attributes

3. Vocoder Complexity:

• Vocoders are often implemented on Digital Signal Processor (DSP) hardware. Complexity can be measured in terms of computing speed in Million Instructions Per Second (MIPS), Random Access Memory (RAM) usage, and Read-Only Memory (ROM) usage. Complexity determines cost.

4. Quality :

• The measure used in comparisons is how good the speech sounds under ideal conditions: namely, clean speech, no transmission errors, and only one encoding (note, however, that in the real world these ideal conditions are often not met, because there can be large amounts of background noise such as street noise, office noise, air-conditioning noise, etc.). Quality is measured by different methods, but it is finally mapped to a Mean Opinion Score (MOS) value, ranked from 1 to 5, to represent the degree of voice quality.

The Used Codec Parameters

Codec   | Coding Rate (Kbps) | Speech quality (MOS) | Complexity (MIPS) | Frame Size (ms) | Algorithmic Delay (ms) | Look Ahead (ms)
G.723.1 | 5.3 or 6.3         | 3.8                  | Highest (20)      | 30              | 37.5                   | 7.5
G.729   | 8                  | 4                    | High (14-20)      | 10              | 15                     | 5
G.711   | 64                 | 4.4                  | Lowest (<15)      | 0.125           | 0.125                  | 0

Packet Encapsulation

A VoIP packet contains a voice frame and various protocol headers.

VoIP Per Call Bandwidth

• The header consists of 40 bytes: IP (20 bytes) + User Datagram Protocol (UDP) (8 bytes) + Real-Time Transport Protocol (RTP) (12 bytes) headers.

• 6 bytes for Multilink Point-to-Point Protocol (MP) or Frame Relay Forum (FRF) (L2) header.

• 1 byte for the end-of-frame flag on MP and Frame Relay frames.

• 18 bytes for Ethernet L2 headers, including 4 bytes of Frame Check Sequence (FCS) or Cyclic Redundancy Check (CRC).

Bandwidth Calculation Formulas

The following calculations are used:

• Total packet size = (L2 header: MP or FR or Ethernet) + (IP/UDP/RTP header) + (voice payload size)

• PPS (Packets Per Second) = (codec bit rate) / (voice payload size)

• Bandwidth = total packet size x PPS

Sample Calculation (1)

For example, the required bandwidth for a G.729 call (8 Kbps codec bit rate) with RTP, MP and the default 20 bytes of voice payload is:

• Total packet size (bytes) = (MP header of 7 bytes) + (IP/UDP/RTP header of 40 bytes) + (voice payload of 20 bytes) = 67 bytes

• Total packet size (bits) = (67 bytes) x 8 bits per byte = 536 bits

• PPS = (8 Kbps codec bit rate) / (160 bits) = 50 pps

Note: 160 bits = 20 bytes (default voice payload) x 8 bits per byte

• Bandwidth per call = voice packet size (536 bits) x 50 pps = 26.8 Kbps

Sample Calculation (2)

• Another example: the required bandwidth for a G.729 call (8 Kbps codec bit rate) with Compressed RTP (cRTP), MP and the default 20 bytes of voice payload is:

• Total packet size (bytes) = (MP header of 7 bytes) + (compressed IP/UDP/RTP header of 2 bytes) + (voice payload of 20 bytes) = 29 bytes

• Total packet size (bits) = (29 bytes) x 8 bits per byte = 232 bits

• PPS = (8 Kbps codec bit rate) / (160 bits) = 50 pps

Note: 160 bits = 20 bytes (default voice payload) x 8 bits per byte

• Bandwidth per call = voice packet size (232 bits) x 50 pps = 11.6 Kbps
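Both sample calculations can be reproduced with one small function implementing the three formulas (a sketch; the function name and defaults are illustrative, not from the lecture):

```python
def voip_bandwidth_kbps(codec_kbps: float, payload_bytes: int,
                        l2_header_bytes: int, ip_udp_rtp_bytes: int = 40) -> float:
    """Per-call bandwidth = total packet size (bits) x packets per second."""
    total_packet_bits = (l2_header_bytes + ip_udp_rtp_bytes + payload_bytes) * 8
    pps = codec_kbps * 1000 / (payload_bytes * 8)  # packets per second
    return total_packet_bits * pps / 1000          # result in Kbps

# G.729 (8 Kbps), 20-byte default payload, 7-byte MP header:
print(voip_bandwidth_kbps(8, 20, 7))                      # 26.8 (plain RTP)
print(voip_bandwidth_kbps(8, 20, 7, ip_udp_rtp_bytes=2))  # 11.6 (cRTP)
```

Note that the PPS term cancels the payload size, so shrinking the headers (cRTP) is what cuts the per-call bandwidth.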

Codec Calculation and Terms Explanation

RTP Header Compression

• All VoIP packets are made up of two components: voice samples and IP/UDP/RTP headers. The voice samples are compressed by the Digital Signal Processor (DSP) and may vary in size based on the codec used.

• These headers are a constant 40 bytes in length. Compared to the 20 bytes of voice samples in a default G.729 call, these headers make up a considerable amount of overhead. Using cRTP, these headers can be compressed to two or four bytes, which offers significant VoIP bandwidth savings. For example, a default G.729 VoIP call consumes 24 Kbps without cRTP, but only 12 Kbps with cRTP enabled.

Voice Activity Detection (VAD)

• With circuit-switched voice networks, all voice calls use 64 Kbps fixed-bandwidth links regardless of how much of the conversation is speech and how much is silence. With VoIP networks, all conversation and silence is packetized. Using Voice Activity Detection (VAD), packets of silence can be suppressed.

• Over time, and as an average over a volume of more than 24 calls, VAD may provide up to 35 percent bandwidth savings. The savings are not realized on every individual voice call, or at any specific point measurement.

• For the purposes of network design and bandwidth engineering, VAD should not be taken into account, especially on links that carry fewer than 24 voice calls simultaneously.

• Various features such as music on hold and fax render VAD ineffective. When the network is engineered for the full voice call bandwidth, all savings provided by VAD are available to data applications.

• VAD also provides Comfort Noise Generation (CNG). Because silence can be mistaken for a disconnected call, CNG provides locally generated white noise so the call appears normally connected to both parties.

Assignment

- Write a report about your previous experience with Voice over Internet Protocol (VoIP) in your company, and your company's approach to dealing with the explosion of VoIP applications. How will it handle the high QoS requirements for VoIP, and how will it deal with the VoIP QoS impairment parameters (Delay, Jitter, Packet Loss, etc.)?

- Deadline: Monday March 16th.