6.003: Signal Processing

Fourier-Based Audio Compression

• Review of the Discrete Cosine Transform (DCT)
• Brief Introduction to MDCT
• Additional Considerations for Audio Encoding

2 May 2019

Today: Lossy Compression

As opposed to “lossless” compression (LZW, Huffman, gzip, xzip, ...), “lossy” compression achieves a decrease in file size by throwing away information from the original signal.

Goal: convey the “important” parts of the signal using as few bits as possible.

Lossy Compression

Key idea: throw away the “unimportant” bits (i.e., bits that won’t be noticed). Doing this involves knowing something about what it means for something to be noticeable.

Many aspects of human perception are frequency based → many lossy formats use frequency-based methods (along with models of human perception).

Lossy Compression: High-level View

To Encode:
• Split signal into “frames”
• Transform each frame into a Fourier representation
• Throw away (or attenuate) some coefficients
• Additional lossless compression (LZW, RLE, Huffman, etc.)

To Decode:
• Undo lossless compression
• Transform each frame into its time/spatial representation

This is a pretty general recipe! Both JPEG and MP3, for example, work roughly this way.
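
To make the recipe concrete, here is a minimal sketch of such a frame-based encoder/decoder in Python (numpy assumed available). The frame length and threshold are arbitrary illustrative values, and a real codec would follow the encoder with lossless entropy coding:

```python
import numpy as np

def lossy_encode(x, frame_len=32, threshold=0.05):
    """Toy frame-based lossy encoder: frame -> frequency transform ->
    zero out small coefficients. Constants are illustrative only."""
    frames = x[: len(x) // frame_len * frame_len].reshape(-1, frame_len)
    coeffs = np.fft.rfft(frames, axis=1)     # Fourier representation of each frame
    coeffs[np.abs(coeffs) < threshold] = 0   # throw away "unimportant" coefficients
    return coeffs

def lossy_decode(coeffs, frame_len=32):
    """Toy decoder: inverse-transform each frame and re-concatenate."""
    return np.fft.irfft(coeffs, n=frame_len, axis=1).ravel()
```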

Given this, one goal is to get the “important” information in a signal into relatively few coefficients in FD (“energy compaction”).

Energy Compaction

One goal is to get the “important” information in a signal into relatively few coefficients in FD (“energy compaction”). It turns out the DFT has some problems in this regard. Consider the following signal, broken into 8-sample-long frames:

[Figure: original signal, with one 8-sample “frame” highlighted]

Why is the DFT undesirable in this case, given our goal of compression?

Discrete Cosine Transform

It is much more common to use the DCT (Discrete Cosine Transform) in compression applications. The DCT (or variants thereof) is used in JPEG, AAC, WMA, MP3, ....

The DCT (more formally, the DCT-II) is defined by:

$$X_C[k] = \frac{1}{N}\sum_{n=0}^{N-1} x[n]\cos\left(\frac{\pi}{N}\left(n+\frac{1}{2}\right)k\right)$$
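
As a sanity check, this definition can be implemented directly as an O(N²) matrix multiply in numpy; this is just the formula above transcribed, not an efficient DCT:

```python
import numpy as np

def dct(x):
    """DCT-II with the 1/N normalization used above (direct O(N^2) evaluation).
    Equal, up to a scale factor of 2N, to scipy.fft.dct(x, type=2)."""
    N = len(x)
    n = np.arange(N)
    k = np.arange(N).reshape(-1, 1)
    return (np.cos(np.pi / N * (n + 0.5) * k) @ x) / N
```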

DCT: Relationship to DFT

$$\begin{aligned}
X_C[k] &= \frac{1}{N}\sum_{n=0}^{N-1} x[n]\cos\left(\frac{\pi}{N}\left(n+\tfrac{1}{2}\right)k\right)\\
&= \frac{1}{2N}\sum_{n=0}^{N-1} x[n]\left(e^{j\frac{\pi}{N}(n+\frac{1}{2})k} + e^{-j\frac{\pi}{N}(n+\frac{1}{2})k}\right)\\
&= \frac{1}{2N}\,e^{-j\frac{\pi k}{2N}}\sum_{n=0}^{N-1}\left(x[n]\,e^{j\frac{\pi}{N}(n+1)k} + x[n]\,e^{-j\frac{\pi}{N}nk}\right)\\
&= \frac{1}{2N}\,e^{-j\frac{\pi k}{2N}}\left(\sum_{n=0}^{N-1} x[n]\,e^{-j\frac{2\pi}{2N}(-n-1)k} + \sum_{n=0}^{N-1} x[n]\,e^{-j\frac{2\pi}{2N}nk}\right)\\
&= \frac{1}{2N}\,e^{-j\frac{\pi k}{2N}}\sum_{n=-N}^{N-1}\tilde{x}[n]\,e^{-j\frac{2\pi}{2N}nk} \;=\; e^{-j\frac{\pi k}{2N}}\,\tilde{X}[k]
\end{aligned}$$

where $\tilde{x}[\cdot]$ is given by the following, and the DFT coefficients $\tilde{X}[\cdot]$ are computed with an analysis window of length $2N$:

$$\tilde{x}[n] = \tilde{x}[n+2N] = \begin{cases} x[n] & \text{if } 0 \le n < N \\ x[-n-1] & \text{if } -N \le n < 0 \end{cases}$$
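
The derivation above suggests a way to compute the DCT with an FFT: mirror the frame, take a 2N-point DFT, and undo the half-sample shift. A small numerical check of that identity (numpy assumed):

```python
import numpy as np

def dct_via_dft(x):
    """DCT computed from the mirrored 2N-sample extension, per the derivation above."""
    N = len(x)
    x_tilde = np.concatenate([x, x[::-1]])        # one period of the mirrored signal
    X_tilde = np.fft.fft(x_tilde) / (2 * N)       # 2N-point DFT with 1/(2N) normalization
    k = np.arange(N)
    return (np.exp(-1j * np.pi * k / (2 * N)) * X_tilde[:N]).real

# compare against the direct definition for a random test frame
x = np.random.default_rng(0).standard_normal(8)
n = np.arange(8)
k = np.arange(8).reshape(-1, 1)
direct = (np.cos(np.pi / 8 * (n + 0.5) * k) @ x) / 8
assert np.allclose(dct_via_dft(x), direct)
```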

Discrete Cosine Transform

The DCT is commonly used in compression applications. We can think about computing the DCT by first putting a mirrored copy of a windowed signal next to itself, and then computing the DFT of that new signal (shifted by 1/2 sample):

[Figure: an 8-sample “frame” and the corresponding 16-sample shifted, mirrored frame]

Why is the DCT more appropriate, given our goals? How does this approach fix the issue(s) we saw with the DFT?

The Discrete Cosine Transform

$$X_C[k] = \frac{1}{N}\sum_{n=0}^{N-1} x[n]\cos\left(\frac{\pi k\left(n+\frac{1}{2}\right)}{N}\right)$$

[Figure: for k = 0, 1, ..., 7, the real part Re{e^{j2πkn/N}} and imaginary part Im{e^{j2πkn/N}} of the DFT basis functions, alongside the DCT basis functions cos(πk(n + 1/2)/N)]

Energy Compaction Example: Ramp

For many authentic signals (photographs, etc), the DCT has good “energy compaction”: most of the energy in the signal is represented by relatively few coefficients.

Consider DFT vs DCT of a “ramp:”

[Figure: a 16-sample ramp signal x[n], for n = 0, 1, ..., 15]


[Figure: DFT magnitudes |X[k]| and DCT magnitudes |X_C[k]| of the ramp, for k = 0, 1, ..., 15]
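
A quick numerical version of this comparison (numpy assumed; normalizations follow the lecture's 1/N conventions). It reports what fraction of the total energy lands in the few largest-magnitude coefficients for each transform:

```python
import numpy as np

N = 16
x = np.arange(N, dtype=float)                         # the ramp

X_dft = np.fft.fft(x) / N                             # DFT coefficients
n = np.arange(N)
k = np.arange(N).reshape(-1, 1)
X_dct = (np.cos(np.pi / N * k * (n + 0.5)) @ x) / N   # DCT coefficients

def energy_fraction(X, m):
    """Fraction of total energy captured by the m largest-magnitude coefficients."""
    mags = np.sort(np.abs(X))[::-1]
    return np.sum(mags[:m] ** 2) / np.sum(mags ** 2)

for m in (1, 2, 4):
    print(m, energy_fraction(X_dft, m), energy_fraction(X_dct, m))
```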

Audio Compression

Last time, we looked at something akin to JPEG compression for images. At a high level, what we did was:
• 2D-DCT of 8-by-8 blocks of greyscale images
• In each block, zero out coefficients that are below some threshold

Let’s try the same approach with audio.
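
A minimal sketch of that idea for a 1-D audio signal, assuming scipy is available (the frame length and threshold here are illustrative, not the values used in lecture):

```python
import numpy as np
from scipy.fft import dct, idct

def compress_frames(audio, frame_len=8, threshold=0.01):
    """Per-frame DCT, zero out coefficients below the threshold, then invert."""
    frames = audio[: len(audio) // frame_len * frame_len].reshape(-1, frame_len)
    C = dct(frames, type=2, norm='ortho', axis=1)
    C[np.abs(C) < threshold] = 0.0                # discard "small" coefficients
    return idct(C, type=2, norm='ortho', axis=1).ravel()
```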

Audio Compression

That didn’t sound very good, really... :(

What were the most noticeable artifacts in the reconstructed version? Where did they come from? How did this compare to what we saw with JPEG?

Audio Compression v2

Let’s try a different approach:

Rather than zeroing out coefficients below the threshold, let’s quantize them differently (for example, use 8 bits for each coefficient below the threshold and 16 bits for each coefficient above the threshold).
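
One way to sketch this in code (numpy assumed): quantize every coefficient with a uniform quantizer, but use fewer bits when the coefficient is below the threshold. The bit counts and quantizer range here are illustrative:

```python
import numpy as np

def quantize_two_level(C, threshold, coarse_bits=8, fine_bits=16):
    """Quantize (and immediately dequantize) DCT coefficients C:
    coarse quantization below the threshold, fine quantization above it."""
    out = np.empty_like(C)
    full_scale = max(np.max(np.abs(C)), 1e-12)     # range of the uniform quantizer
    for bits, mask in ((coarse_bits, np.abs(C) < threshold),
                       (fine_bits, np.abs(C) >= threshold)):
        step = 2 * full_scale / 2 ** bits          # step size for this bit depth
        out[mask] = np.round(C[mask] / step) * step
    return out
```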

How does this compare? What artifacts remain? How can we explain them?

MDCT

The biggest issue with the last scheme was artifacts at the frame boundaries.

Many modern audio compression schemes (MP3, AAC, WMA, Vorbis, ...) don’t use the DCT directly, but rather a related transform called the MDCT (Modified Discrete Cosine Transform), which mitigates these issues.

This is a lapped transform: 2N time-domain samples turn into N frequency-domain samples. By taking the transforms of overlapping windows and summing, we can reconstruct the original sequence exactly (similar to the overlap-add method we saw with the DFT). This principle is referred to as time-domain aliasing cancellation.

MDCT

[Figure: an input signal x[n] is split into two overlapping windows; each windowed segment is passed through the MDCT and reconstructed; summing the overlapping reconstructions recovers the original signal]

MDCT

Formally, the MDCT is defined by:

$$X_M[k] = \frac{1}{2N}\sum_{n=0}^{2N-1} x[n]\cos\left(\frac{\pi}{N}\left(n+\frac{1}{2}+\frac{N}{2}\right)\left(k+\frac{1}{2}\right)\right)$$

$$y[n] = \sum_{k=0}^{N-1} X_M[k]\cos\left(\frac{\pi}{N}\left(n+\frac{1}{2}+\frac{N}{2}\right)\left(k+\frac{1}{2}\right)\right)$$

Including a window function on both x[·] and y[·] can avoid discontinuities at the endpoints. The MDCT is similar to the DCT in terms of energy compaction, but avoids issues with discontinuities at frame boundaries.
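
A minimal numerical sketch of the MDCT, inverse MDCT, and time-domain aliasing cancellation (numpy assumed). The forward transform uses the 1/(2N) scaling above; the factor of 2 in the inverse is an assumption chosen so that plain overlap-add reconstruction comes out exact, since different references distribute these constants differently:

```python
import numpy as np

def mdct(x, N):
    """MDCT of one 2N-sample block -> N coefficients."""
    n = np.arange(2 * N)
    k = np.arange(N).reshape(-1, 1)
    basis = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return (basis @ x) / (2 * N)

def imdct(X, N):
    """Inverse MDCT: N coefficients -> 2N samples (still containing aliasing)."""
    n = np.arange(2 * N)
    k = np.arange(N).reshape(-1, 1)
    basis = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return 2 * (basis.T @ X)

# Time-domain aliasing cancellation: 50%-overlapping blocks, overlap-added.
N = 8
x = np.random.default_rng(0).standard_normal(4 * N)
padded = np.concatenate([np.zeros(N), x, np.zeros(N)])    # pad so every sample of x is covered twice
recon = np.zeros_like(padded)
for start in range(0, len(padded) - 2 * N + 1, N):        # hop size N (50% overlap)
    block = padded[start:start + 2 * N]
    recon[start:start + 2 * N] += imdct(mdct(block, N), N)
assert np.allclose(recon[N:-N], x)                        # the aliasing cancels; x is recovered
```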

Audio Compression v3

Let’s look at a compression scheme that uses the MDCT.

What Else is There?

We have been able to achieve decent compression rates, but nothing close to MP3, for example. MP3 can achieve around a 6:1 compression ratio before expert listeners are able to distinguish between compressed and original audio.

This approach is actually somewhat similar to MP3, but we’re not quite there, so what are we missing?

Psychoacoustic Modeling

Importantly, our goal is ultimately to throw away information that is perceptually unimportant. To this end, MP3 includes a model of human perception of audio, including:

• Threshold of hearing: how loud must a signal be in order to hear it?
• Frequency masking: a loud component at a particular frequency “masks” nearby frequencies (see the sketch after this list).
• Temporal masking: when two tones are close together in time, one can mask the other.
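
As a cartoon of the frequency-masking idea only (not MP3's actual psychoacoustic model), one could drop a frequency-domain coefficient whenever a nearby coefficient is much louder; the 20 dB margin and ±2-bin neighborhood below are made-up illustrative numbers:

```python
import numpy as np

def apply_frequency_masking(coeffs, margin_db=20.0, neighborhood=2):
    """Zero out coefficients that are masked by a much louder nearby coefficient."""
    mags = np.abs(coeffs)
    keep = mags > 0                                   # silent bins need no bits anyway
    for i in np.flatnonzero(keep):
        lo, hi = max(0, i - neighborhood), min(len(coeffs), i + neighborhood + 1)
        neighbors = np.delete(mags[lo:hi], i - lo)    # magnitudes of the nearby bins
        if neighbors.size and 20 * np.log10(neighbors.max() / mags[i]) > margin_db:
            keep[i] = False                           # masked: a neighbor is much louder
    return np.where(keep, coeffs, 0)
```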

High-level overview

The MP3 encoding process, broken down into steps:
1. Filter the signal into frequency sub-bands
2. Determine the amount of masking for each band caused by nearby bands (in time and in frequency) using the psychoacoustic model
3. If the signal in a band is too small (or if it is “masked” by nearby frequencies), don’t encode it
4. Otherwise, determine the number of bits needed to represent it such that the noise introduced by quantization is not audible (below the masking effect); a sketch of this idea follows below
5. Put these bits together into the proper file format
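
Steps 3 and 4 can be sketched as a tiny bit-allocation rule (this is only the basic idea; MP3's real allocation works on whole sub-bands and iterates against the psychoacoustic model). A component is skipped when it is below its masking threshold; otherwise we pick the smallest number of bits whose quantization error stays under that threshold:

```python
def bits_needed(value, mask_threshold, max_bits=16):
    """Smallest bit count for a uniform quantizer whose worst-case error
    (half a quantization step) stays below the masking threshold."""
    magnitude = abs(value)
    if magnitude <= mask_threshold:          # step 3: masked or too small -> don't encode
        return 0
    for bits in range(1, max_bits + 1):
        step = 2 * magnitude / 2 ** bits     # step of a quantizer spanning [-|value|, |value|]
        if step / 2 <= mask_threshold:       # step 4: quantization noise below the mask
            return bits
    return max_bits
```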

Other Concerns

Other domain-specific codecs may use other strategies; for example, some audio codecs designed to compress speech (as opposed to music, etc.) will use something like LPC (discussed in the lecture on speech). They can then use a small number of bits to represent the parameters of the model, and use some additional bits to represent differences from that prediction.