Computer Mathematics

Week 5: Source coding and compression

Department of Mechanical and Electrical System Engineering

last week

[figure: system block diagram (CPU, buses, memory, I/O) annotated with last week's topics]

• binary representations of signed numbers
  – sign-magnitude, biased
  – one’s complement, two’s complement
• signed binary arithmetic
  – negation
  – addition, subtraction
  – signed overflow detection
  – multiplication, division
• width conversion
  – sign extension
• floating-point numbers

this week

[figure: the same system block diagram, annotated with this week's topics]

• coding theory
  – source coding
• information theory concepts
  – information content
• binary codes
  – numbers
  – text: variable-length codes, UTF-8
• compression
  – Huffman’s algorithm

coding theory

coding theory studies
• the encoding (representation) of information as numbers

and how to make encodings more
• efficient (source coding)
• reliable (channel coding)
• secure (cryptography)

binary codes

a binary code assigns one or more bits to represent some piece of information
• a number
• a digit, character, or other written symbol
• a colour, or pixel value
• an audio sample, frequency, or amplitude
• etc.

codes can be
• arbitrary, or designed to have desirable properties
• fixed length, or variable length
• static, or generated dynamically for specific data

a code for numbers

[figure: a moving sensor and its encoded position]

why a code for numbers?
• binary is ambiguous while a value is changing
• e.g., an electro-mechanical sensor moves from position 7 (0111) to position 8 (1000)
• there is no guarantee that all bits change simultaneously, so spurious
  intermediate values may be read during the transition

a better code would have
• only one bit of difference between adjacent positions

such codes are also known as minimum-change codes

reflected binary code

begin with a single bit encoding two positions (0 and 1)
• this has the property we need:
  – only one bit changes between adjacent positions

to double the size of the code...
• repeat the bits by reflecting them
  – the two central positions now have the same code
• prefix the first half with 0, and the second half with 1
• this still has the property we need:
  – only the newly-added first bit changes between the two central positions

  1-bit: 0, 1
  2-bit: 00, 01, 11, 10
  3-bit: 000, 001, 011, 010, 110, 111, 101, 100

this is Gray code, named after its inventor
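a minimal Python sketch of the reflect-and-prefix construction just described (the function name and structure are mine, not from the slides):

```python
def gray_code(n):
    """Generate the n-bit reflected binary (Gray) code as a list of bit strings."""
    codes = ["0", "1"]                 # the 1-bit code for two positions
    for _ in range(n - 1):             # each pass doubles the size of the code
        reflected = list(reversed(codes))
        codes = ["0" + c for c in codes] + ["1" + c for c in reflected]
    return codes

codes = gray_code(3)
print(codes)   # ['000', '001', '011', '010', '110', '111', '101', '100']

# adjacent positions (including the wrap-around) differ in exactly one bit
for a, b in zip(codes, codes[1:] + codes[:1]):
    assert sum(x != y for x, y in zip(a, b)) == 1
```

for comparison, plain binary flips all four bits between 0111 (7) and 1000 (8); Gray code flips exactly one bit at every step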

Gray code and binary

conversion from binary to Gray code
• add (modulo 2) each input digit to the input digit on its left
  (there is an implicit 0 on the far left)

  binary = 0 1 0 0 0 1 0 0 0
  Gray   = 0 1 1 0 0 1 1 0 0

conversion from Gray code to binary
• add (modulo 2) each input digit to the output digit on its left
  (there is an implicit 0 on the far left)

  Gray   = 0 1 1 0 0 1 1 0 0
  binary = 0 1 0 0 0 1 0 0 0
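a sketch of both conversions in Python (names are mine); note the only difference between the two is whether the digit carried to the left is the input or the output:

```python
def binary_to_gray(bits):
    """XOR each bit with the *input* bit to its left (implicit 0 at the far left)."""
    out, left = [], 0
    for b in bits:
        out.append(b ^ left)
        left = b
    return out

def gray_to_binary(bits):
    """XOR each bit with the *output* bit to its left (implicit 0 at the far left)."""
    out, left = [], 0
    for g in bits:
        left = g ^ left
        out.append(left)
    return out

b = [0, 1, 0, 0, 0, 1, 0, 0, 0]
g = binary_to_gray(b)
print(g)                       # [0, 1, 1, 0, 0, 1, 1, 0, 0]
assert gray_to_binary(g) == b  # the conversions are inverses
```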

binary and Gray code rotary encoders

[figure: binary and Gray code rotary encoder disc patterns]

high-resolution Gray code rotary encoder

[figure: a high-resolution Gray code rotary encoder disc]

fixed length codes for text

Baudot code (5 bits) → International Telegraph Alphabet (ITA) No. 2 (5 bits) → ASCII (7+1 bits)

fixed length codes for text: ASCII

ASCII (American Standard Code for Information Interchange)
• latin alphabet, arabic digits
• common western punctuation and mathematical symbols

the ASCII codes, in hexadecimal (row = most significant digit, column = least significant digit):

       0   1   2   3   4   5   6   7   8   9   A   B   C   D   E   F
  0   NUL SOH STX ETX EOT ENQ ACK BEL BS  HT  NL  VT  NP  CR  SO  SI
  1   DLE DC1 DC2 DC3 DC4 NAK SYN ETB CAN EM  SUB ESC FS  GS  RS  US
  2   SP  !   "   #   $   %   &   ’   (   )   *   +   ,   -   .   /
  3   0   1   2   3   4   5   6   7   8   9   :   ;   <   =   >   ?
  4   @   A   B   C   D   E   F   G   H   I   J   K   L   M   N   O
  5   P   Q   R   S   T   U   V   W   X   Y   Z   [   \   ]   ^   _
  6   `   a   b   c   d   e   f   g   h   i   j   k   l   m   n   o
  7   p   q   r   s   t   u   v   w   x   y   z   {   |   }   ~   DEL

  text:         H  e  l  l  o  ␣  t  h  e  r  e  !
  hex encoding: 48 65 6C 6C 6F 20 74 68 65 72 65 21
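a one-line check of that encoding with Python's built-in ASCII codec (the `hex()` separator argument assumes Python 3.8 or later):

```python
print("Hello there!".encode("ascii").hex(" ").upper())
# 48 65 6C 6C 6F 20 74 68 65 72 65 21
```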

fixed length codes for text: UTF-32

Unicode is a mapping of all written symbols to numbers
• modern languages, ancient languages, math/music/art symbols
  (and far too many useless emoticons)
• 21-bit numbering, usually written ‘U+hexadecimal-code’
• the first 128 characters are the same as ASCII

  name            symbol   Unicode    ASCII (hexadecimal)
  asterisk        *        U+2A       2A
  star            ★        U+2605     n/a
  shooting star   🌠       U+1F320    n/a

UTF (Unicode Transformation Format) is an encoding of Unicode in binary

UTF-32 is a fixed length encoding
• each character’s number is represented directly as a 32-bit integer
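a small illustration using Python's built-in codec (big-endian byte order chosen so the 32-bit integers read naturally):

```python
for ch in "*★":
    print(f"U+{ord(ch):04X} -> {ch.encode('utf-32-be').hex()}")
# U+002A -> 0000002a
# U+2605 -> 00002605
```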

variable length codes for text: UTF-8

UTF-32 is wasteful
• most western text uses the 7-bit ASCII subset U+20 – U+7F
• most other modern text uses characters with numbers < 2¹⁶
  – kana are in the range U+3000 – U+30FF
  – kanji are in the range U+4E00 – U+9FAF

UTF-8 is a variable length encoding of Unicode
• characters are between 1 and 4 bytes long
• the first byte encodes the length of the character
• the most common Unicode characters are encoded as one, two, or three UTF-8 bytes
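the exact byte layouts are tabulated just below; first, a hand-rolled sketch of the scheme (for illustration only; in practice you would simply call Python's built-in `str.encode("utf-8")`):

```python
def utf8_encode(code_point):
    """Encode a single Unicode code point into UTF-8 bytes, per the table below."""
    if code_point < 0x80:                    # 7 bits: 0xxxxxxx
        return bytes([code_point])
    if code_point < 0x800:                   # 11 bits: 110xxxxx 10xxxxxx
        return bytes([0xC0 | code_point >> 6,
                      0x80 | code_point & 0x3F])
    if code_point < 0x10000:                 # 16 bits: 1110xxxx 10xxxxxx 10xxxxxx
        return bytes([0xE0 | code_point >> 12,
                      0x80 | code_point >> 6 & 0x3F,
                      0x80 | code_point & 0x3F])
    return bytes([0xF0 | code_point >> 18,   # 21 bits: 11110xxx 10xxxxxx ×3
                  0x80 | code_point >> 12 & 0x3F,
                  0x80 | code_point >> 6 & 0x3F,
                  0x80 | code_point & 0x3F])

print(utf8_encode(0x2605).hex())                 # e29885
assert utf8_encode(0x2605) == "★".encode("utf-8")
```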

  number of bits       code points            byte 1       byte 2      byte 3      byte 4
  7                    00₁₆ – 7F₁₆            0xxxxxxx₂
  11 (5 + 6)           0080₁₆ – 07FF₁₆        110xxxxx₂    10xxxxxx₂
  16 (4 + 6 + 6)       0800₁₆ – FFFF₁₆        1110xxxx₂    10xxxxxx₂   10xxxxxx₂
  21 (3 + 6 + 6 + 6)   010000₁₆ – 10FFFF₁₆    11110xxx₂    10xxxxxx₂   10xxxxxx₂   10xxxxxx₂

the number of leading ‘1’s in the first byte encodes the length of the character in bytes

  U+2605 (‘★’) = 0010 011000 000101₂
               = 11100010₂ 10011000₂ 10000101₂
               = E2 98 85₁₆

information theory

how much information is carried by a message?
• information is conveyed when you learn something new

if the message is predictable (e.g., a series of ‘1’s)
• minimum information content
• zero bits of information per bit transmitted

if the message is unpredictable (e.g., a series of random bits)
• maximum information content
• one bit of information per bit transmitted

if the probability of receiving a given bit (or symbol, or message) ‘x’ is Px, then

  information content of x = log₂(1/Px) bits

variable length encoding of known messages

messages often use only a few symbols from a code

e.g., in typical English text
• 12.702% of letters are ‘e’
• 0.074% of letters are ‘z’

considering their probability in English text
• ‘e’ conveys few bits of information (log₂(1/0.12702) = 2.98 bits)
• ‘z’ conveys many bits of information (log₂(1/0.00074) = 10.4 bits)

and yet they are both encoded using 8 bits of UTF-8 (or ASCII)

if we know the message in advance we could construct an encoding optimised for it
• make high-probability (low information) symbols use few bits
• make low-probability (high information) symbols use more bits

this is the goal of source coding, or ‘compression’
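a quick check of those figures using the formula from the previous slide:

```python
from math import log2

def information_content(p):
    """Bits of information conveyed by a symbol received with probability p."""
    return log2(1 / p)

print(f"e: {information_content(0.12702):.2f} bits")  # e: 2.98 bits
print(f"z: {information_content(0.00074):.2f} bits")  # z: 10.40 bits
```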

binary encodings as decision trees

follow the path from the root to the symbol in the decision tree
• a 0 means ‘go left’ and a 1 means ‘go right’

[figure: decision tree over the ASCII/UTF-8 code space; the first bit separates
multibyte UTF-8 characters (80–F7) from ASCII, and later bits narrow down through
digits and punctuation (00–3F), capital letters (40–5F), and lowercase letters
(60–7F) to a single character]

  ‘e’ = 01100101 (65₁₆)
  ‘z’ = 01111010 (7A₁₆)

for English, we really want ‘e’ to have a shorter path (be closer to the root) than ‘z’

Huffman coding

Huffman codes are variable-length codes that are optimal for a known message

the message is analysed to find the frequency (probability) of each symbol

a binary encoding is then constructed, from least to most probable symbols
• create a list of ‘subtrees’, each containing just one symbol labelled with its probability
• repeatedly
  – remove the two lowest-probability subtrees s, t
  – combine them into a new subtree p
  – label p with the sum of the probabilities of s and t
  – insert p into the list
• until only one subtree remains

the final remaining subtree in the list is an optimal binary encoding for the message
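a minimal Python sketch of this construction, using the standard-library heap (the function and variable names are mine; with this tie-breaking it happens to reproduce the code table of the worked example on the next slide, but in general ties may be broken either way):

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build an optimal prefix code table for a known message."""
    # one single-symbol 'subtree' per symbol, labelled with its frequency;
    # the counter i breaks ties so heapq never has to compare trees
    counts = Counter(message)
    heap = [(n, i, {"symbol": s}) for i, (s, n) in enumerate(counts.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        # remove the two lowest-probability subtrees and combine them
        n1, _, t1 = heapq.heappop(heap)
        n2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (n1 + n2, i, {"left": t1, "right": t2}))
        i += 1
    # walk the final tree: left edges are 0, right edges are 1
    table = {}
    def walk(tree, path):
        if "symbol" in tree:
            table[tree["symbol"]] = path or "0"  # degenerate one-symbol message
        else:
            walk(tree["left"], path + "0")
            walk(tree["right"], path + "1")
    walk(heap[0][2], "")
    return table

table = huffman_code("googling")
encoded = "".join(table[s] for s in "googling")
print(table)         # {'g': '11', 'o': '01', 'l': '100', 'i': '101', 'n': '00'}
print(len(encoded))  # 18
```

whatever way ties are resolved, any optimal tree for ‘googling’ gives the same total encoded length of 18 bits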

Huffman coding example

known message ‘googling’:

  f   symbol    P(s)
  1   l, i, n   1/8 each (3.00 bits of information)
  2   o         2/8 (2.00 bits)
  3   g         3/8 (1.42 bits)

repeatedly combine the two lowest-probability subtrees

[figure: five construction steps: l and i combine (weight 2), n and o combine
(weight 3), (l i) and g combine (weight 5), and the final merge yields a single
tree of weight 8]

code table:

  symbol   encoding
  n        00
  o        01
  l        100
  i        101
  g        11

encoded message:  g  o  o  g  l   i   n  g
                  11 01 01 11 100 101 00 11

message length: 64 bits (UTF-8) vs. 18 bits (encoded)
compression ratio: 64/18 = 3.6
actual information: 3 × 3.00 + 2 × 2.00 + 3 × 1.42 = 17.26 bits
is the encoding optimal? ⌈17.26⌉ = 18 ✓

common source coding algorithms

zip
• Huffman coding with a dynamic encoding tree
• the tree is updated to keep the most recent common byte sequences efficiently encoded

video
• mp2, or MPEG-2 (Moving Picture Experts Group layer 2), for DVDs
• mp4, or MPEG-4, for web video
• avi (Audio Video Interleave), wmv (Windows Media Video), flv (Flash Video)

lossy audio (decoded audio has degraded quality; some information is permanently lost)
• mp3, or MPEG Audio Layer III
• AAC (Advanced Audio Coding), wma (Windows Media Audio)

lossless audio (decoded audio is identical to the original)
• ALAC (Apple Lossless Audio Codec)
• FLAC (Free Lossless Audio Codec)

(CODEC = coder-decoder, a program that both encodes and decodes data)

next week

[figure: the same system block diagram, annotated with next week's topics]

• coding theory
  – channel coding
• information theory concepts
  – Hamming distance
• error detection
  – motivation
  – parity
  – cyclic redundancy checks
• error correction
  – motivation
  – block parity
  – Hamming codes

homework

practice converting between binary and Gray code
• write a Python program to do it

practice encoding and decoding some UTF-8 characters
• write a Python program to do it

practice constructing Huffman codes
• write a Python program to do it

ask about anything you do not understand
• it will be too late for you to try to catch up later!
• I am always happy to explain things differently and to practice examples with you

glossary

ASCII — American Standard Code for Information Interchange. A 7-bit encoding of the western alphabet, digits, and common punctuation symbols. Almost always stored one character per 8-bit byte.

binary code — a code that assigns a pattern of binary digits to represent each distinct piece of information, such as each symbol in an alphabet.

channel coding — the theory of information encoding for the purpose of protecting it against damage.

compression — an efficient encoding of information that reduces its size when stored or transmitted.

cryptography — an encoding of information for the purpose of protecting it from unauthorised access.

encoding — a mapping of symbols or messages onto patterns of bits.

fixed length — an encoding in which every symbol is encoded using the same number of bits.

Gray code — an encoding of numbers with the property that adjacent values differ by only one bit.

information content — the theoretical number of bits of information associated with a symbol or message.

minimum-change — an encoding of symbols in which adjacent codes differ by only one bit.

path — a route through a tree from the root to some given node.

root — the topmost node in a tree.

source coding — the theory of information encoding for the purpose of reducing its size.

tree — a hierarchical structure representing parent-child relationships between nodes.

Unicode — a numbering of all symbols used in modern and ancient written languages, and other written systems of representation such as music.

UTF — Unicode Transformation Format. A family of binary encodings for Unicode characters.

UTF-32 — a fixed length encoding of Unicode in which each character’s number is encoded directly in a 32-bit integer.

UTF-8 — a variable length encoding of Unicode in which each character’s number is encoded in one, two, three, or four bytes.

variable length — an encoding in which different symbols are encoded using different numbers of bits.
