Index
Theory and Applications of OFDM and CDMA, Henrik Schulze and Christian Lüders, © 2005 John Wiley & Sons, Ltd

ES/N0, 40, 159
Eb/N0, 40, 159
1X mode, 389
3GPP, 361
3X mode, 389
3rd Generation Partnership Project, 361
Active set, 298, 374, 386
Adaptive antennas, 301
Alamouti scheme, 87, 235
Antenna diversity, 235
Antipodal modulation, 36
ASK, 40
Autocorrelation function, 58
Autocorrelation matrix, 200
AWGN, 25
Barker code, 282, 352
Base station, 269
Base station controller, 269
Baseband, 18
BCJR algorithm, 125
Bessel function, 58
BICM, 214
Block code, 93
Block interleaver, 194
BPSK, 209
BS, 269
C/A code, 356
Capacity, 290, 295
CDMA, 1, 265
cdma2000, 386
cdmaOne, 380
Cell search, 371, 378, 384, 391
Channel coding, 93, 245
Channel coding for CDMA, 294
Channel diversity, 192
Channel estimation, 181, 227
  in frequency direction, 189
  in time and frequency direction, 192
  in time direction, 186
Channel simulation, 67
Chip, 305
  duration, 306
  period, 306
  pulse, 305
Closed-loop power control, 286, 374, 379
Closed-loop transmit diversity, 368
Cluster size, 274, 290
Coarse/acquisition code, 356
Code allocation, 364, 376, 381
Code rate, 94
Code tree, 277, 282
Code word, 93
Coherence bandwidth, 53, 194
Complementary code keying, 353
Complex baseband, 18
Complex baseband stochastic processes, 27
Compressed mode, 375, 392
Concatenated coding, 103
Convolutional code, 114, 209, 367, 382, 389
Convolutional interleaver, 196
Correlation coefficient, 305
Correlation frequency, 53
Correlation matrix, 308
Correlation time, 53
Coverage probability, 273
Cyclic prefix, 154
DAB, 176, 208, 209, 242
DAMPS, 358
DCH, 367
Decision threshold, 34
Decoding delay, 193, 195, 198
Decorrelating receiver, 328
Dedicated channel, 367
Dedicated physical control channel, 367
Dedicated physical data channel, 367
Delay power spectrum, 61
Delay spread, 53, 60
Detector, 7, 8
Digital American Mobile Phone System, 358
Discontinuous transmission, 293
Discrete AWGN channel, 29
Discrete fading channel, 73, 74
Diversity, 78
Diversity branch spectrum, 202
Diversity degree
  of the channel, 199
Diversity spectrum, 199
DL, 269
Doppler frequency, 51
Doppler spectrum, 57
Downlink, 269
DPCCH, 367
DPDCH, 367
DPSK, 44
DQPSK, 250
DS-CDMA, 306, 362
DTX, 293
DVB, 208, 209, 251
DVB-C, 252
DVB-S, 252
DVB-T, 183, 213, 252
Dynamic channel allocation, 301
Effective spreading factor, 266
Energy
  per symbol ES, 39
  per useful bit Eb, 40
Equivalent independently fading channel, 201
Estimation error, 185
ETSI, 361
Euclidean distance, 5
Euclidean norm, 5
European Telecommunications Standards Institute, 361
Extrinsic LLR, 113
Fading, 51
Fading amplitude, 72
FDD, 361
FFT, 153
Forward link, 269
Forward-backward recursions, 127
Fourier base, 1
Fourier coefficients, 1
Frequency allocation, 290, 295, 363, 376, 380, 387
Frequency division duplex, 361
Frequency hopping, 352
Frequency interleaving, 249
Frequency selectivity, 51, 60
Fundamental channel, 389
Galois field, 133
Gaussian Nyquist pulse, 11
Gaussian random variable, 25
Generator, 115
  polynomial, 115, 280
Global Positioning System, 355
Global System for Mobile Communications, 358
Gold codes, 281, 365
3GPP2, 362
GPS, 355, 381, 388
GSM, 358
Guard period, 377
Hadamard matrices, 13
Hamming codes, 103
Hamming distance, 94, 199
Handover, 270, 374, 379, 386
Handover gain, 272, 274, 296
Handover hysteresis, 296, 298
Hard capacity planning, 293
Hard handover, 270, 374, 379, 386
Hierarchical cell structure, 294
Hierarchical networks, 363
Hilbert space, 5
IEEE 802.11, 352, 354
IMT-2000, 360
Inner-loop power control, 288
Institute of Electrical and Electronics Engineers, 352
Integral kernel, 63
Intelligent antennas, 301
Intercell interference, 274
Interference, 285, 290
Interference averaging, 292
Interference cancellation, 339, 365
Interference diversity, 292
Interleaver, 194
Interleaving, 192, 227
International Mobile Telecommunications, 360
International Telecommunications Union, 360
Intersymbol interference, 9, 54
Intracell interference, 274
IS-136, 358
IS-95, 358, 380
ISI, 154, 346
Isotropic Doppler spectrum, 58
ITU, 360
Jakes Doppler spectrum, 58
JD-CDMA, 375
Joint detection, 301, 375, 378
Kasami codes, 282
Linear CDMA receivers, 328
Linear code, 94
Linear modulation schemes, 38
LLR, 218
Local mean, 273
Log-likelihood ratio (LLR), 105
Long-term fading, 271
m-sequences, 279, 365
Macrodiversity, 274, 296
MAI, 316, 320, 321, 346
MAP receiver, 112, 321, 326
Matched filter, 8
Matched filter base, 308
Matrix block interleaver, 195
Maximum likelihood, 32
Maximum ratio combining (MRC), 79
maxlog, 219
maxlog MAP, 114
MC-CDMA, 306, 362
Metric, 216
Microdiversity, 296
Midamble, 377
Minimum mean square error, 337
MLSE, 32, 121
  for fading channels, 72–74
MLSE receiver, 321–323, 325, 326, 347
MMSE receiver, 337
Mobile radio channel, 51
Mobile reception
  of DVB-T, 257
Mobile station, 269
Mobile switching center, 269
Modulation schemes, 38
Monte-Carlo Simulations, 90
MRC, 80, 344
MS, 269
MUD receiver, 315
Multicarrier CDMA, 387, 389
Multicarrier transmission, 145, 149
Multipath diversity, 267, 297
Multipath propagation, 51
Multiple access interference, 316
Multiplexing, 247
Multiuser detection, 315
Near–far problem, 285, 315, 321
Network capacity, 292
Noise, 25
Normplot, 166
NSC encoder, 119
Null symbol, 176
Nyquist
  base, 9
  criterion, 10
  pulse, 9
OFDM, 1, 145
  and nonlinearities, 166
  symbol, 159
OFDM with convolutional coding, 208, 213
OFDM with QAM, 213
OFDM with QPSK, 208
Omni cell, 270
Open-loop power control, 286, 374, 379
Orthogonal detector base, 309
Orthogonal modulation, 37
Orthogonal signals, 1, 5
Orthogonal variable spreading factor codes, 277
Orthogonality factor, 290
Orthogonality principle, 184
Orthonormal base, 2
Orthonormal transmit bases, 13
Out-of-band radiation, 162
Outer-loop power control, 288
OVSF codes, 277, 364, 376
P code, 356
Paging channel, 382
Pairwise error probabilities, 34
  for diversity, 82
  for fading channels, 76
Parity check, 94
Parseval’s equation, 6
Passband, 18
PC, 285
Pilot channel, 369, 382, 383, 389
Pilot grid, 182, 183
Pilot symbols, 181, 368
PN, 279
Power budget handover, 270
Power control, 285, 374, 379, 386, 392
Power density spectrum, 152, 160, 161
Power efficiency, 38, 40, 44
Precision code, 356
Primary scrambling code, 365
Primary synchronization code word, 372, 378, 392
Primitive element, 133
Processing gain, 268
Projection theorem, 184
Propagation loss, 271
Pseudoinverse, 331
Pseudonoise sequences, 279
QAM, 40, 209, 213, 227, 252
QPSK, 208
Quadrature
  components, 20
  demodulator, 22
  modulator, 20
Radio network controller, 269
Raised cosine pulses, 11
RAKE receiver, 267, 342, 346
Random access, 373, 379, 385
Random access preamble, 373, 385
Rate matching, 367
Receive antenna diversity, 236
Receive space, 308
Reed–Solomon code, 254
Reed–Solomon codes, 131
Repetition codes, 101
Representation of CDMA signals, 304
Reverse link, 269
Rice factor, 67
Ricean channel, 67
Rolloff factor, 10
Root-raised cosine filter, 370
RSC encoder, 119
Sampling theorem, 6, 228, 232
Scalar product, 2
  of signals, 5
Scattering function, 62
Schwarz inequality, 6
Secondary scrambling code, 365
Secondary synchronization code word, 372, 378
Sector cell, 270
Sectorization, 271, 275, 302
Shift register, 279
Signal energy, 9
Signal space, 5
Signals of finite energy, 5
Signature pulses, 305
Signature vector, 305
Signature waveforms, 305
Simplex codes, 102
sinc base, 7
Single parity check codes, 101
Single-user matched filter (SUMF) receiver, 315, 316
SISO decoder, 113
Slotted mode, 299, 375
Smart antennas, 301, 369
Smoothing window, 165
SNR, 39, 40, 159
Soft capacity planning, 293
Soft handover, 270, 295, 374, 379, 386
Softer handover, 296
SOVA, 124
Spectrum mask, 162
Spread spectrum, 265
Spreading, 265
Spreading factor, 266, 364, 376, 381, 389
Spreading sequence, 265
SSS processes, 57
State diagram, 117
Successive interference cancellation, 339
Sufficient statistics, 30
SUMF receiver, 315, 316
Supplemental channel, 389
Switched beams, 302
Synchronization, 175, 371, 378, 381, 384, 391
Synchronization channel, 382
Synchronous transmission, 315
Synchronous transmission of CDMA, 307
Systematic code, 94
TD-CDMA, 361, 375
TD-SCDMA, 361, 375
TDD, 301, 361
TDD mode, 375
TDD-CDMA, 362
Threshold metric, 216
Time division CDMA, 375
Time division duplex, 301, 361
Time interleaving, 193, 247
  for DAB, 194
Time variance, 51, 54
Time-variant impulse response, 64
Time-variant transfer function, 64
Traffic channel, 382
Training sequence, 377
Transmission frame, 244
Transmit
  base, 8
  pulse, 8
  symbol, 8
Transmit antenna diversity, 86, 237
Transmit space, 307
Transmitter, 7
Transport format combination indicator, 369
Trellis, 116
Turbo code, 389
Turbo coding, 367
UL, 269
UMTS, 361
Universal Mobile Telecommunication System, 361
Universal Terrestrial Radio Access, 361
Uplink, 269
UTRA, 361
UTRA FDD, 363
UTRA TDD, 375
UWC-136, 362
Viterbi algorithm, 121
Viterbi equalizer, 347
Walsh
  functions, 13
  modulation, 37, 382
Walsh–Hadamard codes, 96, 101, 381, 388
Walsh–Hadamard matrices, 13
WCDMA, 361
White Gaussian noise, 25
Whitening matched filter, 308
Wideband CDMA, 342, 361, 362
Wideband multicarrier channel, 199
Wiener estimator, 183
Wiener filtering, 186
Wiener–Hopf equation, 185
Wireless LAN, 352
WLAN, 208, 209, 213
WSS processes, 57
WSSUS model, 63, 186
Recommended publications
  • Reed-Solomon Error Correction
    BBC Research & Development White Paper WHP 031, July 2002: Reed-Solomon Error Correction, C. K. P. Clarke. Abstract: Reed-Solomon error correction has several applications in broadcasting, in particular forming part of the specification for the ETSI digital terrestrial television standard, known as DVB-T. Hardware implementations of coders and decoders for Reed-Solomon error correction are complicated and require some knowledge of the theory of Galois fields on which they are based. This note describes the underlying mathematics and the algorithms used for coding and decoding, with particular emphasis on their realisation in logic circuits. Worked examples are provided to illustrate the processes involved. Key words: digital television, error-correcting codes, DVB-T, hardware implementation, Galois field arithmetic. © BBC 2002. All rights reserved. Contents (excerpt): 1 Introduction; 2 Background Theory; 2.1 Classification of Reed-Solomon codes; 2.2 Galois fields.
  • Linear Block Codes
    Linear block codes. An (n, k) block code maps k information bits into n encoded bits: the block coder produces an n-digit codeword made up of the k information digits and (n-k) redundant parity check digits. The rate or efficiency for this code is r = k/n = (number of information bits)/(total number of bits in a codeword). Note: unlike source coding, in which data is compressed, here redundancy is deliberately added to achieve error detection. Systematic block codes: a systematic block code consists of vectors whose first k elements (or last k elements) are identical to the message bits, the remaining (n-k) elements being check bits. A code vector then takes the form X = (m0, m1, ..., mk-1, c0, c1, ..., cn-k-1) or X = (c0, c1, ..., cn-k-1, m0, m1, ..., mk-1). In a systematic code the information digits are explicitly transmitted together with the parity check bits: the k information bits are transmitted contiguously as one block, with the parity check bits making up the rest of the code word as another contiguous block. A systematic linear block code will have a generator matrix of the form G = [P | Ik]. Systematic codewords are sometimes written so that the message bits occupy the left-hand portion of the codeword and the parity bits occupy the right-hand portion. Parity check matrix H: enables decoding of the received vectors. For each (k x n) generator matrix G there exists an (n-k) x n matrix H such that the rows of G are orthogonal to the rows of H, i.e., GH^T = 0, where H^T is the transpose of H.
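    A minimal sketch of these relations in Python, assuming the systematic form G = [P | Ik] described above; the particular 4 x 3 parity block P below is an illustrative choice of mine, not taken from the text:

      import numpy as np

      # Illustrative parity block P for a systematic (7, 4) code; any binary
      # k x (n-k) matrix works for checking the orthogonality relation G H^T = 0.
      P = np.array([[1, 1, 0],
                    [0, 1, 1],
                    [1, 1, 1],
                    [1, 0, 1]])
      k, r = P.shape                              # k info bits, r = n - k parity bits
      n = k + r

      G = np.hstack([P, np.eye(k, dtype=int)])    # G = [P | I_k], size k x n
      H = np.hstack([np.eye(r, dtype=int), P.T])  # H = [I_{n-k} | P^T], size (n-k) x n

      assert np.all((G @ H.T) % 2 == 0)           # rows of G orthogonal to rows of H over GF(2)

      m = np.array([1, 0, 1, 1])                  # k message bits
      x = m @ G % 2                               # systematic codeword: parity block, then message block
      print("codeword:", x, " rate:", k / n)

    With this form of G, the k message bits appear unchanged in one contiguous block of the codeword and the n-k parity bits in the other, exactly as in the systematic code vectors X given above.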
  • An Introduction to Coding Theory
    An introduction to coding theory, Adrish Banerjee, Department of Electrical Engineering, Indian Institute of Technology Kanpur, Feb. 6, 2017. Lecture #6A: Some simple linear block codes - I. Outline of the lecture: dual code; examples of linear block codes: repetition code, single parity check code, Hamming code. Dual code: two n-tuples u and v are orthogonal if their inner product (u, v) is zero, i.e., (u, v) = sum_{i=1}^{n} (u_i · v_i) = 0. For a binary linear (n, k) block code C, the (n, n-k) dual code C_d is defined as the set of all codewords v that are orthogonal to all the codewords u ∈ C.
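    As an illustration of this definition (a brute-force sketch of my own, not taken from the lecture), the dual of the (3, 1) repetition code can be computed by testing every 3-tuple for orthogonality to all codewords; the result is the (3, 2) single parity check code, two of the example codes named in the outline:

      from itertools import product

      def dual_code(C, n):
          # All n-tuples whose inner product with every codeword of C is 0 (mod 2)
          return [v for v in product((0, 1), repeat=n)
                  if all(sum(ui * vi for ui, vi in zip(u, v)) % 2 == 0 for u in C)]

      C = [(0, 0, 0), (1, 1, 1)]        # the (3, 1) repetition code
      print(dual_code(C, 3))            # [(0,0,0), (0,1,1), (1,0,1), (1,1,0)]
                                        # = the (3, 2) single parity check code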
  • Tcom 370 Notes 99-8 Error Control: Block Codes
    TCOM 370 NOTES 99-8, ERROR CONTROL: BLOCK CODES. The need for error control: the physical link is always subject to imperfections (noise/interference, limited bandwidth/distortion, timing errors), so that individual bits sent over the physical link cannot be received with zero error probability. A bit error rate (BER) of 10^-6, which may sound quite low and very good, actually leads on the average to an error every 1/10th of a second for transmission at 10 Mbps. Even with better links, say BER = 10^-7, one would make on the average one error in transferring a binary file of size 1.25 Mbytes. This is not acceptable for "reliable" data transmission. We need to provide in the data link control protocols (Layer 2 of the ISO 7-layer OSI protocol architecture) a means for obtaining better reliability than can be guaranteed by the physical link itself. Note: error control can be (and is) also incorporated at a higher layer, the transport layer. Error control techniques: Error Detection and Automatic Request for Retransmission (ARQ). This is a "feedback" mode of operation and depends on the receiver being able to detect that an error has occurred. (Error detection is easier than error correction at the receiver.) Upon detecting an error in a frame of transmitted bits, the receiver asks for a retransmission of the frame. This may happen at the data link layer or at the transport layer. The characteristics of ARQ techniques will be discussed in more detail in a subsequent set of notes, where the delays introduced by the ARQ process will be considered explicitly.
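    The two figures quoted above follow from simple expected-value arithmetic (expected number of errors = number of bits transmitted x BER); a short sketch of the calculation with the rates and file size used in the notes:

      bit_rate = 10e6                     # 10 Mbps
      ber = 1e-6
      errors_per_second = bit_rate * ber  # 10 errors per second on average
      print(1 / errors_per_second)        # 0.1 -> one error every 1/10th of a second

      file_bits = 1.25e6 * 8              # 1.25 Mbytes
      ber = 1e-7
      print(file_bits * ber)              # 1.0 -> about one error per file transfer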
  • An Introduction to Coding Theory
    An introduction to coding theory, Adrish Banerjee, Department of Electrical Engineering, Indian Institute of Technology Kanpur, Feb. 6, 2017. Lecture #7A: Bounds on the size of a code. Outline of the lecture: Hamming bound; perfect codes; Singleton bound; maximum distance separable codes; Plotkin bound; Gilbert-Varshamov bound. Bounds on the size of a code: the basic problem is to find the largest code of a given length n and minimum distance d.
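    A small sketch of the first two bounds named in the outline, using the standard sphere-packing (Hamming) and Singleton expressions for the largest possible size A_q(n, d) of a code of length n and minimum distance d (the example parameters are my own choice):

      from math import comb

      def hamming_bound(n, d, q=2):
          # A_q(n, d) <= q^n / sum_{i=0}^{t} C(n, i) (q-1)^i,  with t = floor((d-1)/2)
          t = (d - 1) // 2
          return q**n // sum(comb(n, i) * (q - 1)**i for i in range(t + 1))

      def singleton_bound(n, d, q=2):
          # A_q(n, d) <= q^(n - d + 1)
          return q**(n - d + 1)

      print(hamming_bound(7, 3), singleton_bound(7, 3))   # 16 and 32; the binary Hamming
                                                          # code of length 7 attains 16 (perfect)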
  • Coding Theory: Linear-Error Correcting Codes 1 Basic Definitions
    Anna Dovzhik, Coding Theory: Linear-Error Correcting Codes, Math 420: Advanced Linear Algebra, Spring 2014. Sharing data across channels, such as satellite, television, or compact disc, often comes at the risk of error due to noise. A well-known example is the task of relaying images of planets from space; given the incredible distance that this data must travel, it is to be expected that interference will occur. Since about 1948, coding theory has been utilized to help detect and correct corrupted messages such as these, by introducing redundancy into an encoded message, which provides a means by which to detect errors. Although non-linear codes exist, the focus here will be on algebraic coding, which is efficient and often used in practice. Basic definitions: the following build up a basic vocabulary of coding theory. Definition 1.1. If A = {a_1, a_2, ..., a_q}, then A is a code alphabet of size q and a_n ∈ A is a code symbol. For our purposes, A will be a finite field F_q. Definition 1.2. A q-ary word w of length n is a vector that has each of its components in the code alphabet. Definition 1.3. A q-ary block code is a set C over an alphabet A, where each element, or codeword, is a q-ary word of length n. Note that |C| is the size of C. A code of length n and size M is called an (n, M)-code. Example 1.1 [3, p.6]: C = {00, 01, 10, 11} is a binary (2, 4)-code taken over the code alphabet F_2 = {0, 1}.
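    A quick sketch of the (n, M) terminology applied to Example 1.1 (the helper function is my own):

      def code_parameters(C):
          # (n, M): word length and number of codewords of a block code
          n = len(next(iter(C)))
          M = len(C)
          return n, M

      C = {"00", "01", "10", "11"}      # binary code over F_2 = {0, 1}
      print(code_parameters(C))         # (2, 4) -> a (2, 4)-code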
  • Part I Basics of Coding Theory
    Part I: Basics of coding theory. Chapter 1: Basics of coding theory (abstract, September 24, 2014; prof. Jozef Gruska, IV054). Coding theory, the theory of error correcting codes, is one of the most interesting and applied parts of informatics. The goals of coding theory are to develop systems and methods that allow us to detect and correct errors caused when information is transmitted through noisy channels. All real communication systems that work with digitally represented data, such as CD players, TV, fax machines, the internet, satellites and mobiles, require the use of error correcting codes because all real channels are, to some extent, noisy, due to various interference or destruction caused by the environment. Coding theory problems are therefore among the very basic and most frequent problems of storage and transmission of information. Coding theory results allow the creation of reliable systems out of unreliable systems to store and/or to transmit information. Coding theory methods are often elegant applications of very basic concepts and methods of (abstract) algebra. This first chapter presents and illustrates the very basic problems, concepts, methods and results of coding theory. Coding - basic concepts: without coding theory and error-correcting codes there would be no deep-space travel and pictures, no satellite TV, no compact disc, and so on. Error-correcting codes are used to correct messages when they are (erroneously) transmitted through noisy channels. [Figure: error-correcting framework - a source message W is encoded into a codeword C(W), which passes through a noisy channel, arrives as C'(W) and is decoded for the user; in the example, the messages YES and NO are encoded as 00000 and 11111, and the received word 01001 is decoded as YES.] A code C over an alphabet Σ is a subset of Σ* (C ⊆ Σ*). A q-nary code is a code over an alphabet of q symbols.
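    The YES/NO example in the figure is a 5-fold repetition code; a minimal sketch of that framework with nearest-codeword (majority) decoding, the function names being mine:

      CODE = {"YES": "00000", "NO": "11111"}

      def encode(message):
          return CODE[message]

      def decode(received):
          # pick the message whose codeword differs from the received word in the fewest positions
          return min(CODE, key=lambda m: sum(a != b for a, b in zip(CODE[m], received)))

      print(encode("YES"))              # 00000
      print(decode("01001"))            # YES, as in the example above (two bits were flipped)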
  • Coding Theory: Introduction to Linear Codes and Applications
    InSight: Rivier Academic Journal, Volume 4, Number 2, Fall 2008. CODING THEORY: INTRODUCTION TO LINEAR CODES AND APPLICATIONS. Jay Grossman, Undergraduate Student, B.A. in Mathematics Program, Rivier College. Coding theory basics: coding theory is an important study which attempts to minimize data loss due to errors introduced in transmission from noise, interference or other forces. With a wide range of theoretical and practical applications from digital data transmission to modern medical research, coding theory has helped enable much of the growth in the 20th century. Data encoding is accomplished by adding additional information to each transmitted message to enable the message to be decoded even if errors occur. In 1948 the optimization of this redundant data was discussed by Claude Shannon from Bell Laboratories in the United States, but it wouldn't be until 1950 that Richard Hamming (also from Bell Labs) would publish his work describing a now famous group of optimized linear codes, the Hamming Codes. It is said he developed this code to help correct errors in punch tape. Around the same time John Leech from Cambridge was describing similar codes in his work on group theory. Notation and basic properties: to work more closely with coding theory it is important to define several important properties and notation elements. These elements will be used throughout the further exploration of coding theory and when discussing its applications. There are many ways to represent a code, but perhaps the simplest way to describe a given code is as a set of codewords, i.e. {000, 111}, or as a matrix with all the codewords forming the rows, such as [0 0 0; 1 1 1]. This example is a code with two codewords, 000 and 111, each with three characters per codeword.
  • Block Error Correction Codes and Convolution Codes
    S-72.333 Postgraduate Course in Radio Communications - Wireless Local Area Network (WLAN). Block Error Correction Codes and Convolution Codes, Mei Yen Cheong. Abstract: In wireless communications, error control is an important feature for compensating transmission impairments such as interference and multipath fading which cause high bit error rates in the received data. Forward Error Correction (FEC) is one of the data link layer protocols for error control. This paper gives an overview of error control coding employed in FEC, particularly block codes and convolutional codes. Finally, some considerations of code selection will be discussed. I. Introduction: The trend towards portable personal computers or workstations has been rising quickly in recent years. WLAN networks are becoming increasingly popular. Users of the network are demanding ever higher quality of service (QoS) and multi-variety services (not only data, but also packet voice, video, etc.). [...] can be transmitted across the channel at a rate less than the channel capacity with arbitrarily low error rate. Since the publication of this theory more than 40 years ago, coding theorists have been working towards discovering these codes. Among the error control codes found since then are some block codes, such as BCH and Reed-Solomon codes, and convolutional codes, which will be discussed in this paper. The paper is organized as follows. In Section II the concept of FEC will be introduced. Section III discusses first block codes in general and then some particular cyclic codes, namely BCH codes and Reed-Solomon codes. Convolutional codes are presented in Section IV and finally Section V discusses some considerations of code selection and some methods to enhance error control schemes.
  • MAS309 Coding Theory
    MAS309 Coding theory, Matthew Fayers, January-March 2008. This is a set of notes which is supposed to augment your own notes for the Coding Theory course. They were written by Matthew Fayers, and very lightly edited by me, Mark Jerrum, for 2008. I am very grateful to Matthew Fayers for permission to use this excellent material. If you find any mistakes, please e-mail me: [email protected]. Thanks to the following people who have already sent corrections: Nilmini Herath, Julian Wiseman, Dilara Azizova. Contents:
    1 Introduction and definitions (2): 1.1 Alphabets and codes (2); 1.2 Error detection and correction (2); 1.3 Equivalent codes (4)
    2 Good codes (6): 2.1 The main coding theory problem (6); 2.2 Spheres and the Hamming bound (8); 2.3 The Singleton bound (9); 2.4 Another bound (10); 2.5 The Plotkin bound (11)
    3 Error probabilities and nearest-neighbour decoding (13): 3.1 Noisy channels and decoding processes (13); 3.2 Rates of transmission and Shannon's Theorem (15)
    4 Linear codes (15): 4.1 Revision of linear algebra (16); 4.2 Finite fields and linear codes (18); 4.3 The minimum distance of a linear code (19); 4.4 Bases and generator matrices (20); 4.5 Equivalence of linear codes (21); 4.6 Decoding with a linear code (27)
    5 Dual codes and parity-check matrices (30): 5.1 The dual code (30); 5.2 Syndrome decoding (34)
    6 Some examples of linear codes (36): 6.1 Hamming codes (37); 6.2 Existence of codes and linear independence
  • Digital Communication Systems ECS 452 Asst
    Digital Communication Systems ECS 452, Asst. Prof. Dr. Prapun Suksompong, [email protected]. 5. Channel Coding (A Revisit). Office hours: BKD, 4th floor of Sirindhralai building; Monday 14:00-16:00, Thursday 10:30-11:30, Friday 12:00-13:00. Review of Section 3.6: we mentioned the general form of channel coding over a BSC. In particular, we looked at the general form of block codes. (n, k) codes: n-bit blocks are used to convey k-info-bit blocks over the BSC, assuming n > k; rate R = k/n. We showed that the minimum distance decoder is the same as the ML decoder. This chapter: less probability analysis, more on explicit codes. GF(2): the construction of the codes can be expressed in matrix form using the following definition of addition and multiplication of bits: these are modulo-2 addition and modulo-2 multiplication, respectively. The operations are the same as the exclusive-or (XOR) operation and the AND operation. We will simply call them addition and multiplication so that we can use a matrix formalism to define the code. The two-element set {0, 1} together with this definition of addition and multiplication is a number system called a finite field or a Galois field, and is denoted by the label GF(2). Note that x + 0 = x, x · 1 = x, and x + x = 0. The last property implies that -x = x: the element that, when added to x, gives 0 is x itself.
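    A tiny sketch of GF(2) arithmetic as described above (addition is XOR, multiplication is AND), together with the matrix formalism the notes allude to; the generator matrix G below is an arbitrary illustrative choice of mine, not one taken from the notes:

      import numpy as np

      def gf2_add(x, y):
          return x ^ y                  # modulo-2 addition = XOR

      def gf2_mul(x, y):
          return x & y                  # modulo-2 multiplication = AND

      for x in (0, 1):
          assert gf2_add(x, 0) == x     # x + 0 = x
          assert gf2_mul(x, 1) == x     # x * 1 = x
          assert gf2_add(x, x) == 0     # x + x = 0, i.e. -x = x

      G = np.array([[1, 0, 1, 1],       # encode a k-bit message with a generator
                    [0, 1, 0, 1]])      # matrix, doing all arithmetic mod 2
      m = np.array([1, 1])
      print(m @ G % 2)                  # codeword [1 1 1 0]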
  • Math 550 Coding and Cryptography Workbook J. Swarts 0121709 Ii Contents
    Math 550 Coding and Cryptography Workbook, J. Swarts. Contents:
    1 Introduction and Basic Ideas (3): 1.1 Introduction (3); 1.2 Terminology (4); 1.3 Basic Assumptions About the Channel (5)
    2 Detecting and Correcting Errors (7)
    3 Linear Codes (13): 3.1 Introduction (13); 3.2 The Generator and Parity Check Matrices (16); 3.3 Cosets (20); 3.3.1 Maximum Likelihood Decoding (MLD) of Linear Codes (21)
    4 Bounds for Codes (25)
    5 Hamming Codes (31): 5.1 Introduction (31); 5.2 Extended Codes (33)
    6 Golay Codes (37): 6.1 The Extended Golay Code C24 (37); 6.2 The Golay Code C23 (40)
    7 Reed-Muller Codes (43): 7.1 The Reed-Muller Codes RM(1, m) (46)
    8 Decimal Codes (49): 8.1 The ISBN Code (49); 8.2 A Single Error Correcting Decimal Code (51); 8.3 A Double Error Correcting Decimal Code (53); 8.4 The Universal Product Code (UPC) (56); 8.5 US Money Order (57); 8.6 US Postal Code (58)
    9 Hadamard Codes (61): 9.1 Background (61); 9.2 Definition of the Codes (66); 9.3 How good are the Hadamard Codes? (67)
    10 Introduction to Cryptography (71): 10.1 Basic Definitions (71); 10.2 Affine Cipher (73); 10.2.1 Cryptanalysis of the Affine Cipher (74); 10.3 Some Other Ciphers (76); 10.3.1 The Vigenère Cipher (76); 10.3.2 Cryptanalysis of the Vigenère Cipher: The Kasiski Examination (77); 10.3.3 The Vernam Cipher (79)
    11 Public Key Cryptography (81): 11.1 The Rivest-Shamir-Adleman (RSA) Cryptosystem