Chapter 3 Source Codes, Line Codes & Error Control


3.1 Primary Communication

• Information theory deals with the mathematical representation and analysis of a communication system, rather than with the physical sources and channels themselves.
• It is based on the application of probability theory, e.g. the calculation of the probability of error.

Discrete message

An information source is said to be discrete if it emits only one symbol at a time from a finite set of symbols or messages.

• Let the source generate symbols from a set of M alphabets:

  X = {x1, x2, x3, ..., xM}

• The information source generates any one alphabet from the set. The probabilities of the various symbols in X can be written as

  P(X = xk) = Pk,  k = 1, 2, ..., M

  ∑_{k=1}^{M} Pk = 1        (1)

Discrete memoryless source (DMS)

A source is said to be memoryless when each symbol it emits is statistically independent of all previously emitted symbols.

• Letter (or symbol, or character): any individual member of the alphabet set.
• Message (or word): a finite sequence of letters of the alphabet set.
• Length of a word: the number of letters in the word.
• Encoding: the process of converting a word into another (encoded) format.
• Decoding: the inverse process of converting a given encoded word back to its original format.

3.2 Block Diagram of Digital Communication System

Transmit side: Information source → Formatter → Source encoder → Channel encoder → Baseband modulator → Channel
Receive side: Channel → Baseband demodulator → Channel decoder → Source decoder → Deformatter → Destination

Fig. 3.1 Typical communication system (digital communication system)

Information source: It may be analog or digital. Example: voice, video.

Formatter: It converts an analog signal into a digital signal.

Source encoder
• It is used for efficient representation of the data generated by the source.
• It represents the digital signal with as few digits as possible, depending on the information content of the message.
That is, it minimizes the number of digits required.

Channel encoder
• Some redundancy is introduced into the message to combat the noise in the channel.

Baseband modulator
• The encoded signal is modulated here by a suitable modulation technique.

Channel
• The transmitted signal gets corrupted by random noise: thermal noise, shot noise, atmospheric noise.

Channel decoder
• It removes the redundancy bits using a channel-decoding algorithm.

Deformatter: It converts the digital data back into discrete or analog form.

• The above communication system is used to carry the information-bearing baseband signal from one place to another over a communication channel.
• The performance of the communication system is measured by the probability of error (Pe).
• The condition for error-free communication is

  entropy of the source < capacity of the channel

• Capacity of a channel: the ability of a channel to convey information.
• Entropy: the average information per symbol.

3.3 Amount of Information

• The amount of information is defined in terms of probability, i.e. I = f(1/P).
• If the probability of occurrence of an event is high, it carries very little information; if the probability of occurrence is low, it carries more information.
• Example: "A dog bites a man" has a high probability of occurrence, so it carries little information; "a man bites a dog" has a low probability of occurrence, hence it carries more information.

  I(xj) = f(1/P(xj))        (1)

where xj is the event, P(xj) the probability of the event, and I(xj) the amount of information.

Equation (1) can be rewritten as

  I(xj) = log2(1/P(xj)) bits,  or  Ik = log2(1/Pk) bits        (2)

Definition: The amount of information I(xj) is the logarithm of the inverse of the probability of occurrence of the event, P(xj).

3.4 Average Information or Entropy

Definition: The entropy of a source is the average information per message (or symbol) produced by the source in a particular interval.

Let m1, m2, m3, ..., mk be k different messages, with p1, p2, p3, ...,
pk being the corresponding probabilities of occurrence.

Example: Suppose the message generated by the source is

  ABCACBCABCAABC

with A, B, C → m1, m2, m3, so k = 3.

Then the number of m1 messages is m1 = P1·L, where L is the total number of messages generated by the source.

For m1 → A, L = 15:  m1 = 15·P1
Similarly, for m2 → B, L = 15:  m2 = 15·P2

The amount of information in message m1 is

  I1 = log2(1/P1)

The total amount of information due to the m1 messages is

  It1 = P1·L·log2(1/P1)

Similarly, the total amount of information due to the m2 messages is

  It2 = P2·L·log2(1/P2)

Thus the total amount of information due to all L messages is

  It = It1 + It2 + ... + Itk
     = P1·L·log2(1/P1) + P2·L·log2(1/P2) + ... + Pk·L·log2(1/Pk)

  Average information = Total information / Number of messages = It / L

The average information per message is nothing but the entropy H(X), or simply H:

  H = It / L
    = [P1·L·log2(1/P1) + P2·L·log2(1/P2) + ... + Pk·L·log2(1/Pk)] / L

  H = ∑_{k=1}^{M} Pk·log2(1/Pk)  bits/symbol        (3)

The entropy H of a discrete memoryless source is bounded as

  0 ≤ H ≤ log2 M

3.4.1 Properties of entropy

Property 1: H = 0 if Pk = 0 or 1

Entropy is zero when the event is either impossible or certain.

When Pk = 0:

  H = Pk·log2(1/Pk) = 0·log2(1/0) = 0

When Pk = 1:

  H = 1·log2(1/1) = 0

Property 2: All symbols equiprobable

  H = P1·log2(1/P1) + P2·log2(1/P2) + ... + PM·log2(1/PM)

For M equally likely messages the probability is

  P1 = P2 = P3 = ... = PM = 1/M

so

  H = (1/M)·log2(M) + (1/M)·log2(M) + ... + (1/M)·log2(M)
    = M·(1/M)·log2(M)

  H = log2(M)

Property 3: Upper bound on entropy

  H ≤ log2 M, with Hmax = log2 M

Consider any two probability distributions (P1, P2, ..., PM) and (q1, q2, ..., qM) on the alphabet X = {x1, x2, ..., xM} of a DMS.
Then

  ∑_{k=1}^{M} Pk·log2(qk/Pk) = (1/ln 2)·∑_{k=1}^{M} Pk·ln(qk/Pk)    (since log2 x = ln x / ln 2)

By a property of the natural logarithm, ln x ≤ x − 1 for x > 0. Hence

  ∑_{k=1}^{M} Pk·log2(qk/Pk) ≤ (1/ln 2)·∑_{k=1}^{M} Pk·(qk/Pk − 1)
                             = (1/ln 2)·∑_{k=1}^{M} (qk − Pk)
                             = (1/ln 2)·( ∑_{k=1}^{M} qk − ∑_{k=1}^{M} Pk )

We know that

  ∑_{k=1}^{M} Pk = ∑_{k=1}^{M} qk = 1

therefore

  ∑_{k=1}^{M} Pk·log2(qk/Pk) ≤ 0

Expanding the left-hand side:

  ∑_{k=1}^{M} Pk·log2 qk + ∑_{k=1}^{M} Pk·log2(1/Pk) ≤ 0

  ∑_{k=1}^{M} Pk·log2(1/Pk) ≤ ∑_{k=1}^{M} Pk·log2(1/qk)

Substituting qk = 1/M:

  ∑_{k=1}^{M} Pk·log2(1/Pk) ≤ ∑_{k=1}^{M} Pk·log2 M = log2 M·∑_{k=1}^{M} Pk

  H ≤ log2 M

with equality when all the symbols are equiprobable.

3.4.2 Entropy of a binary memoryless source (BMS)

• Assume that the source is memoryless, so that successive symbols emitted by the source are statistically independent.
• Let symbol '0' occur with probability P0 and symbol '1' with probability P1 = 1 − P0.

The entropy of the BMS is

  H = ∑_{k=1}^{2} Pk·log2(1/Pk)
    = P0·log2(1/P0) + (1 − P0)·log2(1/(1 − P0))

  H = −P0·log2 P0 − (1 − P0)·log2(1 − P0)

1. When P0 = 0, H = 0.
2. When P0 = 1, H = 0.
3. When P0 = P1 = 1/2, i.e. symbols 0 and 1 are equally probable, H = 1.

Fig. 3.2 Plot of entropy H versus symbol probability: H rises from 0 at P0 = 0 to a maximum of 1 at P0 = 0.5, and falls back to 0 at P0 = 1.

3.4.3 Extension of a discrete memoryless source

• Consider blocks of n successive symbols rather than individual symbols. Each block is produced by an extended source alphabet X^n that has k^n distinct blocks, where k is the number of distinct symbols in the source alphabet X of the original source.

Extended entropy:

  H(X^n) = n·H(X)

3.4.4 Differential entropy

Consider a continuous random variable X with probability density function fX(x). Its differential entropy is

  H = ∫_{−∞}^{+∞} fX(x)·log2(1/fX(x)) dx

3.4.5 Information rate (R)

The information rate R is defined as the average number of bits of information transmitted per second:

  R = r·H bits/sec

where r is the symbol rate in symbols per second.

The channel types are classified as:
1. Discrete Memoryless Channel (DMC)
2. Binary Communication Channel (BCC)
3. Binary Symmetric Channel (BSC)
4. Binary Erasure Channel (BEC)
5. Lossless Channel
6. Deterministic Channel

3.4.6 Discrete Memoryless Channel (DMC)

Discrete: A channel is said to be discrete when the alphabets X and Y are finite.

Memoryless: A channel is said to be memoryless when the current output symbol depends only on the current input symbol and not on any of the previous symbols.

• It is a statistical model with an input X and an output Y that is a noisy version of X.
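The amount-of-information and entropy formulas above can be checked with a short program. This is an illustrative sketch only (Python is an assumption, not part of the text); it estimates symbol probabilities from the example message ABCACBCABCAABC by relative frequency and verifies the bound H ≤ log2 M:

```python
import math
from collections import Counter

def information(p):
    # Amount of information I = log2(1/p) in bits, for an event of probability p
    return math.log2(1.0 / p)

def entropy(probs):
    # H = sum over k of Pk * log2(1/Pk), in bits/symbol
    # (terms with Pk = 0 contribute nothing, matching Property 1)
    return sum(p * information(p) for p in probs if p > 0)

# Estimate symbol probabilities from the example message in the text
message = "ABCACBCABCAABC"
counts = Counter(message)                          # occurrences of A, B, C
probs = [n / len(message) for n in counts.values()]

H = entropy(probs)
print(round(H, 3))              # average information in bits/symbol
print(math.log2(len(counts)))   # upper bound log2 M for the M = 3 symbols used
```

A rare event carries more information than a common one, e.g. `information(0.01) > information(0.9)`, which is the "man bites dog" intuition from Section 3.3.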
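The binary-memoryless-source entropy of Section 3.4.2 and the information rate R = rH of Section 3.4.5 can likewise be evaluated numerically. A minimal sketch; the symbol rate r = 1000 symbols/s is an assumed illustrative value, not from the text:

```python
import math

def binary_entropy(p0):
    # Entropy of a binary memoryless source:
    # H = -P0*log2(P0) - (1 - P0)*log2(1 - P0)
    if p0 in (0.0, 1.0):     # limiting cases: the 0*log2(1/0) term is taken as 0
        return 0.0
    return -p0 * math.log2(p0) - (1 - p0) * math.log2(1 - p0)

print(binary_entropy(0.0))   # 0.0 (P0 = 0 -> H = 0)
print(binary_entropy(1.0))   # 0.0 (P0 = 1 -> H = 0)
print(binary_entropy(0.5))   # 1.0 (equiprobable symbols -> maximum entropy)

# Information rate R = r * H for an assumed symbol rate r = 1000 symbols/s
r = 1000
print(r * binary_entropy(0.5))  # 1000.0 bits/s
```

The three printed entropy values reproduce the three special cases listed under Fig. 3.2.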
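The DMC idea, an output Y that is a noisy version of the input X, can be illustrated with the binary symmetric channel from the list above. A hypothetical simulation sketch; the crossover probability 0.1, the bit pattern, and the fixed seed are arbitrary choices for illustration:

```python
import random

def bsc(bits, p_error, seed=0):
    # Binary symmetric channel: each input bit is flipped
    # independently with crossover probability p_error.
    rng = random.Random(seed)
    return [b ^ (rng.random() < p_error) for b in bits]

x = [0, 1, 1, 0, 1, 0, 0, 1] * 1000    # input X: 8000 bits
y = bsc(x, p_error=0.1)                # output Y: a noisy version of X
errors = sum(xi != yi for xi, yi in zip(x, y))
print(errors / len(x))                 # empirical error rate, close to 0.1
```

Because the flips are independent of all earlier bits, the channel is memoryless in exactly the sense defined in Section 3.4.6.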