Bitview Manual V2.3.00 WAVECOM W-BV

Total pages: 16

File type: PDF, size: 1020 KB

BitView Manual V2.3.00 WAVECOM W-BV

by WAVECOM ELEKTRONIK AG

PUBLISHED BY
WAVECOM ELEKTRONIK AG
Hammerstrasse 8
CH-8180 Buelach
Switzerland
Phone: +41-44-872 70 60
Fax: +41-44-872 70 66
Email: [email protected]
Internet: http://www.wavecom.ch

© by WAVECOM ELEKTRONIK AG. All rights reserved. Reproduction in whole or in part in any form is prohibited without written consent of the copyright owner. The publication of information in this document does not imply freedom from patent or other protective rights of WAVECOM ELEKTRONIK AG or others. All brand names in this document are trademarks or registered trademarks of their owners. Specifications are subject to change without further notice.

Printed: Wednesday, February 11, 2009, 12:49:05

Contents

Introduction ............................................... 1
    Introduction ........................................... 1
    Training ............................................... 1
    Source Code ............................................ 1
    Company Profile ........................................ 1
    Revisions .............................................. 2
    Requirements ........................................... 2
    Limitations ............................................ 2
Installation ............................................... 3
    W-BV Software Installation ............................. 3
        CodeMeter .NET API installation .................... 3
        CodeMeter Runtime installation ..................... 4
        MatLab Runtime installation ........................ 6
        BitView installation ............................... 7
        BitView defaults installation ...................... 8
        Paths .............................................. 9
    W-BV Hardware Installation ............................. 9
        CmStick ............................................ 9
    W-BV Licensing ......................................... 10
        W-BV Software Options & Updates .................... 10
        W-BV license checking .............................. 10
Getting Started ............................................ 11
    Program Start .......................................... 11
Menu ....................................................... 13
    Bit Stream Processing .................................. 13
    Analysis Sets .......................................... 14
    Reports ................................................ 15
    Properties Window ...................................... 16
    History Explorer Window ................................ 17
    Toolbox Window ......................................... 18
    Preferences ............................................ 18
    Layout Settings ........................................ 19
Function Library ........................................... 23
    Common Functions ....................................... 23
        Auto-Calculation ................................... 23
        Enable or Disable Functions ........................ 25
    Source/Sink ............................................ 25
        Import Text Data ................................... 25
        Import Hex Data .................................... 25
        Import Binary Data ................................. 26
        Import IAS Bitstream ............................... 26
        Export Text Data ................................... 27
    Synchronization ........................................ 27
        Preamble ........................................... 27
    Binary Modulation ...................................... 27
        NRZ-I .............................................. 27
        NRZ-M .............................................. 28
        NRZ-S .............................................. 28
        Bi-Phase-L (Manchester) ............................ 28
        Bi-Phase-M ......................................... 29
        Bi-Phase-S ......................................... 29
        DBi-Phase-M ........................................ 29
        DBi-Phase-S ........................................ 30
    Bit Manipulation ....................................... 30
        De-Stuffing (HDLC) ................................. 30
        Mirroring .......................................... 30
        Rotation ........................................... 31
        Shift .............................................. 31
        Polarity ........................................... 31
        De-Interleave Bit Block ............................ 32
        De-Interleaving Stream ............................. 32
        AND / OR / XOR / NOT ............................... 33
        AND / OR / XOR / NOT Range ......................... 34
        Extraction (Mask) .................................. 34
        Extraction (Range) ................................. 34
        Cutting ............................................ 35
    Decoding/Equalizer ..................................... 35
        Viterbi-Decoding ................................... 35
        De-Puncturing ...................................... 35
        Difference-Decoding ................................ 36
        BCH-Decoding ....................................... 36
        Block Code Analysis ................................ 37
        Convolutional Code Analysis ........................ 38
        General Reed Solomon Decoding ...................... 39
    CRC & Polynomial ....................................... 40
        CRC (1..32) ........................................ 40
        CRC-8 .............................................. 40
        CRC-10 ............................................. 41
        CRC-12 ............................................. 41
        CRC-16 ............................................. 42
        CRC-CCITT .......................................... 42
        CRC-32 ............................................. 42
        Parity (Even/Odd/Mark/Space) ....................... 43
        Parity from H-Matrix ............................... 43
        Parity from polynomial ............................. 44
    Unpacking/Decompress ................................... 45
        Unzip .............................................. 45
    Descrambling ...........................................
Recommended publications
  • PSK31 Audio Beacon Kit
    PSK31 Audio Beacon Kit — Build this programmable single-chip generator of PSK31-encoded audio data streams and use it as a signal generator, a beacon input to your SSB rig, or as the start of a single-chip PSK31 controller! Version 2, July 2001. Created by the New Jersey QRP Club. OVERVIEW: Thank you for purchasing the PSK31 Audio Beacon from the New Jersey QRP Club! Here's an easy, fun and intriguingly useful project that has evolved from an ongoing design effort to reduce the complexity of a PSK31 controller. We think you'll have fun assembling and operating this inexpensive-yet-flexible audio modulator of PSK31-encoded data streams. A conventional PC typically provides the relatively intensive computing power required for PSK31 modulation and demodulation, such as when sitting at a laptop equipped with DigiPan software. With this Beacon project, however, the PSK31 modulation computations are handled in a single chip (see Beacon Features below). PSK31 encodes data in the phase relationships at bit transitions, and the schematic has been augmented with helpful notations. Construction is simple and straightforward, and you'll have immediate feedback on how your Beacon works when you plug in a 9V battery and speaker. This project was first introduced as a kit at the Atlanticon QRP Forum in March 2001, and then published in a feature article of QST magazine in August 2001. Beacon Features: single-chip implementation of PSK31 encoding and audio waveform generation. For the very latest information, software updates, and kit assembly & usage tips, please be sure to see the PSK31 Beacon website at http://www.njqrp.org/psk31beacon/psk31beacon.html . Visit soon and often, as it's guaranteed to be helpful!
  • XAPP616 "Huffman Coding" V1.0
    Product Obsolete/Under Obsolescence. Application Note: Virtex Series. Huffman Coding. Author: Latha Pillai. XAPP616 (v1.0) April 22, 2003. Summary: Huffman coding is used to code values statistically according to their probability of occurrence. Short code words are assigned to highly probable values and long code words to less probable values. Huffman coding is used in MPEG-2 to further compress the bitstream. This application note describes how Huffman coding is done in MPEG-2 and its implementation. Introduction: The output symbols from RLE (run-length encoding) are assigned binary code words depending on the statistics of the symbol. Frequently occurring symbols are assigned short code words, whereas rarely occurring symbols are assigned long code words. The resulting code string can be uniquely decoded to get the original output of the run-length encoder. The code assignment procedure developed by Huffman is used to get the optimum code word assignment for a set of input symbols. The procedure for Huffman coding involves the pairing of symbols. The input symbols are written out in order of decreasing probability: the symbol with the highest probability is written at the top, the least probable last. The two least probabilities are then paired and added. A new probability list is then formed, with one entry being the previously added pair. The two least probabilities in the new list are then paired. This process continues until the list consists of only one probability value. The values "0" and "1" are arbitrarily assigned to each element in each of the lists. Figure 1 shows the following symbols listed with a probability of occurrence where A is 30%, B is 25%, C is 20%, D is 15%, and E is 10%.
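    The pairing procedure described above can be sketched in a few lines of Python. This is an illustrative reconstruction using the example probabilities from the note's Figure 1, not code from the application note itself; heap tie-breaking may produce a different but equally optimal code.

    ```python
    import heapq

    def huffman_codes(weights):
        """Repeatedly pair the two least-probable entries, as described above."""
        heap = [(w, i, sym) for i, (sym, w) in enumerate(weights.items())]
        heapq.heapify(heap)
        tie = len(heap)                      # tie-breaker so tuples never compare subtrees
        while len(heap) > 1:
            w1, _, a = heapq.heappop(heap)   # least probable
            w2, _, b = heapq.heappop(heap)   # second least probable
            heapq.heappush(heap, (w1 + w2, tie, (a, b)))
            tie += 1
        codes = {}
        def walk(node, prefix):
            if isinstance(node, tuple):      # internal node: descend, appending 0/1
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:                            # leaf: a symbol
                codes[node] = prefix or "0"
        walk(heap[0][2], "")
        return codes

    # The distribution from Figure 1: A 30%, B 25%, C 20%, D 15%, E 10%
    codes = huffman_codes({"A": 30, "B": 25, "C": 20, "D": 15, "E": 10})
    ```

    For this distribution the frequent symbols A, B, C get 2-bit codes and the rare D, E get 3-bit codes, for an average length of 2.25 bits/symbol.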
  • Image Compression Through DCT and Huffman Coding Technique
    International Journal of Current Engineering and Technology, E-ISSN 2277-4106, P-ISSN 2347-5161. ©2015 INPRESSCO®, All Rights Reserved. Available at http://inpressco.com/category/ijcet Research Article: Image Compression through DCT and Huffman Coding Technique. Rahul Shukla and Narender Kumar Gupta, Department of Computer Science and Engineering, SHIATS, Allahabad, India. Accepted 31 May 2015, Available online 06 June 2015, Vol.5, No.3 (June 2015). Abstract: Image compression is an art used to reduce the size of a particular image. The goal of image compression is to eliminate the redundancy in a file's code in order to reduce its size. It is useful in reducing the image storage space and the time needed to transmit the image. Image compression is significant for reducing data redundancy, saving memory and transmission bandwidth. An efficient compression technique has been proposed which combines the DCT and Huffman coding techniques. The technique is proposed for its lossless property, meaning the probability of losing information is lowest. Results show that high compression rates are achieved, with visually negligible differences between compressed and original images. Keywords: Huffman coding, Huffman decoding, JPEG, TIFF, DCT, PSNR, MSE. 1. Introduction: Raw images require a large amount of disk space, which is a very big disadvantage. Generic methods exist that can compress almost any kind of data; these lossless methods retain all the information of the compressed data. However, they do not take advantage of the 2-dimensional nature of the image data.
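    As a rough illustration of the DCT half of such a pipeline, here is a naive 1-D DCT-II in Python. This is our own sketch, not the paper's implementation; the point is energy compaction: a flat block puts all its energy into the first (DC) coefficient, leaving runs of zeros for the Huffman stage to compress.

    ```python
    import math

    def dct_1d(block):
        """Naive orthonormal 1-D DCT-II of a list of samples."""
        n = len(block)
        out = []
        for k in range(n):
            s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                    for i, x in enumerate(block))
            scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            out.append(scale * s)
        return out

    # A constant block: all energy compacts into the DC coefficient,
    # the remaining coefficients are (numerically) zero.
    coeffs = dct_1d([1.0, 1.0, 1.0, 1.0])
    ```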
  • The Strengths and Weaknesses of Different Image Compression Methods — Samuel Teare and Brady Jacobson
    The Strengths and Weaknesses of Different Image Compression Methods. Samuel Teare and Brady Jacobson.
    Lossy vs Lossless: Lossy compression reduces a file size by permanently removing parts of the data that may be redundant or not as noticeable. Lossless compression guarantees the original data can be recovered or decompressed from the compressed file.
    PNG compression consists of three parts:
      1. Filtering
      2. LZ77 Compression (together with 3, this forms Deflate compression)
      3. Huffman Coding
    Filtering — five types of filters:
      1. None: no filter.
      2. Sub: difference between this byte and the byte to its left. Sub(x) = Original(x) - Original(x - bpp)
      3. Up: difference between this byte and the byte above it. Up(x) = Original(x) - Above(x)
      4. Average: difference between this byte and the average of the byte to the left and the byte above. Avg(x) = Original(x) - (Original(x - bpp) + Above(x)) / 2
      5. Paeth: uses the bytes to the left, above, and above-left; whichever of the three is nearest the estimate is the Paeth predictor. Paeth(x) = Original(x) - PaethPredictor(x)
    Paeth algorithm:
      estimate = left + above - above left
      distance to left = |estimate - left|
      distance to above = |estimate - above|
      distance to above left = |estimate - above left|
      The byte with the smallest distance is the Paeth predictor.
    LZ77 Compression: LZ77 looks for sequences in the data that are repeated. It uses a sliding window to keep track of previous bytes, which is then used to compress a group of bytes that exhibit the same sequence as previous bytes.
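    The Paeth steps above translate directly into code. A minimal Python sketch (the function names are ours; real PNG encoders apply this per channel with modulo-256 byte arithmetic):

    ```python
    def paeth_predictor(left, above, above_left):
        """Return whichever neighbour is closest to the estimate
        left + above - above_left, preferring left, then above."""
        estimate = left + above - above_left
        d_left = abs(estimate - left)
        d_above = abs(estimate - above)
        d_above_left = abs(estimate - above_left)
        if d_left <= d_above and d_left <= d_above_left:
            return left
        if d_above <= d_above_left:
            return above
        return above_left

    def paeth_filter_byte(original, left, above, above_left):
        """Paeth(x) = Original(x) - PaethPredictor(x), modulo 256 as in PNG."""
        return (original - paeth_predictor(left, above, above_left)) % 256
    ```

    For example, with left=100, above=50, above-left=80, the estimate is 70 and the above-left byte (distance 10) wins over above (20) and left (30).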
  • An Optimized Huffman's Coding by the Method of Grouping
    An Optimized Huffman's Coding by the Method of Grouping. Gautam R., Department of Electronics and Communication Engineering, Maharaja Institute of Technology, Mysore ([email protected]); Dr. S Murali, Professor, Department of Computer Science Engineering, Maharaja Institute of Technology, Mysore ([email protected]). Abstract — Data compression has become a necessity not only in the field of communication but also in various scientific experiments. The volume of data being received has grown, and so has the required processing time. A significant change in the algorithms will help to optimize the processing speed. With technologies like IoT and machine learning there is a need to compress data: for example, training an artificial neural network requires a lot of data that must be processed and trained in a small interval of time, for which compression is very helpful. There is a need to process data faster and quicker. In this paper we present a method that reduces the data size. Huffman's coding basically works on the principle of frequency of occurrence of each symbol or character in the input. For example, we can know the number of times a letter has appeared in a text document by processing that particular document, after which we assign a variable-length string to that letter to represent the character. The encoding takes place in the form of a binary tree, which is explained in detail in the following paragraph.
  • Arithmetic Coding
    Arithmetic Coding — Arithmetic coding is the most efficient method to code symbols according to the probability of their occurrence. The average code length corresponds exactly to the possible minimum given by information theory. Deviations caused by the bit-resolution of binary code trees do not exist. In contrast to a binary Huffman code tree, arithmetic coding offers a clearly better compression rate; its implementation is more complex, on the other hand. In arithmetic coding, a message is encoded as a real number in the interval from zero to one. Arithmetic coding typically has a better compression ratio than Huffman coding, as it produces a single codeword for the entire message rather than several separate codewords. Arithmetic coding differs from other forms of entropy encoding such as Huffman coding in that, rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number, a fraction n where 0.0 ≤ n < 1.0. Arithmetic coding is a lossless coding technique. There are a few disadvantages of arithmetic coding. One is that the whole codeword must be received to start decoding the symbols, and if there is a corrupt bit in the codeword, the entire message could become corrupt. Another is that there is a limit to the precision of the number which can be encoded, thus limiting the number of symbols that can be encoded within a codeword. There also exist many patents on arithmetic coding, so the use of some of the algorithms may call for royalty fees. Arithmetic coding is part of the JPEG data format.
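    The interval-narrowing idea can be sketched as a toy floating-point encoder. This is illustrative only; practical coders use integer arithmetic with renormalisation precisely to sidestep the precision limit mentioned above.

    ```python
    def arithmetic_interval(message, probs):
        """Narrow [0, 1) once per symbol; any number inside the final interval
        identifies the whole message. Toy float sketch -- real coders use
        integer arithmetic and renormalisation to avoid precision limits."""
        ranges, c = {}, 0.0
        for sym, p in probs.items():        # cumulative sub-interval per symbol
            ranges[sym] = (c, c + p)
            c += p
        low, high = 0.0, 1.0
        for sym in message:
            lo, hi = ranges[sym]
            width = high - low
            low, high = low + width * lo, low + width * hi
        return low, high

    # "a" narrows [0,1) to [0,0.5); "b" then narrows that to [0.25,0.5)
    low, high = arithmetic_interval("ab", {"a": 0.5, "b": 0.5})
    ```

    Any fraction in the final interval, e.g. 0.25 here, uniquely identifies the message "ab" given the same probability model.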
  • Packet Status Register #97, Autumn 2005
    TAPR Packet Status Register™ #97, Autumn 2005
    In this issue:
      President's Corner ........ 1
      TAPR Elects New President ........ 2
      2005 DCC Report ........ 3
      New SIG for AX.25 Layer 2 Discussions ........ 4
      20th Annual SW Ohio Digital Symposium ........ 4
      AX.25 as a Layer 2 Protocol ........ 5
      Towards a Next Generation Amateur Radio Network ........ 6
      TAPR Order Form ........ 29
    President's Corner — Mi Column es Su Column
    This marks my inaugural column in PSR as TAPR President. Some of you may remember me from my past incarnation on the TAPR Board of Directors. It seems like another lifetime ago, but it wasn't so long ago that TAPR was born in Tucson. In fact, next year will mark the twenty-fifth anniversary of our incorporation.
    [...] from you to confirm this. We are moving to more widely embrace and promote cutting edge technology, and while this is not really new for us, it does mean that we will be contemplating a shift in focus so that packet will not really be our main or only focus. Digital modes and technology will still be central, but I think you'll appreciate our attempts to marry digital with analog (you remember analog, don't you?) and microwaves, membership experience and involvement in projects. We are continuing to streamline the management of the office, and to assist this initiative, I will draw your attention to the improvements that have been made to the web site. The responses we have seen so far have been overwhelmingly positive, and we welcome further feedback. I would like to make this column YOUR column [...]
  • Modification of Adaptive Huffman Coding for Use in Encoding Large Alphabets
    ITM Web of Conferences 15, 01004 (2017), DOI: 10.1051/itmconf/20171501004, CMES'17. Modification of Adaptive Huffman Coding for use in encoding large alphabets. Mikhail Tokovarov, Lublin University of Technology, Electrical Engineering and Computer Science Faculty, Institute of Computer Science, Nadbystrzycka 36B, 20-618 Lublin, Poland. Abstract. The paper presents a modification of the Adaptive Huffman Coding method – a lossless data compression technique used in data transmission. The modification relates to the process of adding a new character to the coding tree: the author proposes to introduce two special nodes instead of the single NYT (not yet transmitted) node of the classic method. One of the nodes is responsible for indicating the place in the tree to which a new node is attached; the other is used for sending the signal indicating the appearance of a character not yet present in the tree. The modified method was compared with existing coding methods in terms of overall data compression ratio and performance. The proposed method may be used for large alphabets, i.e. for encoding whole words instead of separate characters, when new elements are added to the tree comparatively frequently. 1 Introduction. Efficiency and speed – the two issues that the current world of technology is centred on. Information technology (IT) is no exception in this matter. Such an area of IT as social media has become extremely popular and widely used, so high transmission speed has gained great importance. Huffman coding is frequently chosen for implementing open source projects [3]. The present paper contains the description of a modification that may help to improve the algorithm of adaptive Huffman coding in terms of data savings. 2 Study of related works
  • Amateur Extra License Class
    Amateur Extra License Class — Chapter 8: Radio Modes and Equipment
    Modulation Systems: FCC Emission Designations and Terms
    • Specified by the ITU.
    • Either 3 or 7 characters long.
    • If 3 characters:
      • 1st character = the type of modulation of the main carrier.
      • 2nd character = the nature of the signal(s) modulating the main carrier.
      • 3rd character = the type of information to be transmitted.
    • If 7 characters, add a 4-character bandwidth designator in front of the 3-character designator.
    Type of modulation:
      N  Unmodulated carrier
      A  Amplitude modulation
      R  Single sideband, reduced carrier
      J  Single sideband, suppressed carrier
      C  Vestigial sideband
      F  Frequency modulation
      G  Phase modulation
      P, K, L, M, Q, V, W, X  Various types of pulse modulation
    Type of modulating signal:
      0  No modulating signal
      1  A single channel containing quantized or digital information without the use of a modulating sub-carrier
      2  A single channel containing quantized or digital information with the use of a modulating sub-carrier
      3  A single channel containing analog information
      7  Two or more channels containing quantized or digital information
      8  Two or more channels containing analog information
      X  Cases not otherwise covered
    Type of transmitted information:
      N  No information transmitted
      A  Telegraphy – for aural reception
      B  Telegraphy – for automatic reception
      C  Facsimile
      D  Data transmission, telemetry, telecommand
      E  Telephony (including sound broadcasting)
      F  Television (video)
      W  Combination of the above
      X  Cases not otherwise covered
    3-character designator examples: A1A = CW.
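    A tiny lookup helper makes the 3-character scheme concrete. The tables below are partial transcriptions of the lists above (pulse-modulation letters omitted), and the helper function is our own illustration, not part of any FCC or ITU library.

    ```python
    # Partial lookup tables transcribed from the designator lists above.
    MODULATION = {
        "N": "Unmodulated carrier",
        "A": "Amplitude modulation",
        "R": "Single sideband, reduced carrier",
        "J": "Single sideband, suppressed carrier",
        "C": "Vestigial sideband",
        "F": "Frequency modulation",
        "G": "Phase modulation",
    }
    SIGNAL = {
        "0": "No modulating signal",
        "1": "Single digital channel, no sub-carrier",
        "2": "Single digital channel with sub-carrier",
        "3": "Single analog channel",
        "7": "Two or more digital channels",
        "8": "Two or more analog channels",
        "X": "Cases not otherwise covered",
    }
    INFO = {
        "N": "No information transmitted",
        "A": "Telegraphy - for aural reception",
        "B": "Telegraphy - for automatic reception",
        "C": "Facsimile",
        "D": "Data transmission, telemetry, telecommand",
        "E": "Telephony (including sound broadcasting)",
        "F": "Television (video)",
        "W": "Combination of the above",
        "X": "Cases not otherwise covered",
    }

    def decode_emission(designator):
        """Split a 3- or 7-character ITU emission designator into its parts."""
        if len(designator) == 7:            # strip the 4-character bandwidth prefix
            designator = designator[4:]
        m, s, i = designator
        return MODULATION[m], SIGNAL[s], INFO[i]

    parts = decode_emission("A1A")          # A1A = CW
    ```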
  • Information Theory Revision (Source)
    ELEC3203 Digital Coding and Transmission – Overview & Information Theory, S Chen
    Information Theory Revision (Source)
    [Block diagram: digital source {S(k)} (symbols/s) → source coding → {b_i} (bits/s)]
    • A digital source is defined by:
      1. Symbol set: S = {m_i, 1 ≤ i ≤ q}
      2. Probability of occurrence of m_i: p_i, 1 ≤ i ≤ q
      3. Symbol rate: R_s [symbols/s]
      4. Interdependency of {S(k)}
    • Information content of symbol m_i: I(m_i) = −log2(p_i) [bits]
    • Entropy quantifies the average information conveyed per symbol:
      – Memoryless sources: H = −Σ_{i=1..q} p_i · log2(p_i) [bits/symbol]
      – 1st-order memory (1st-order Markov) sources with transition probabilities p_ij:
        H = Σ_{i=1..q} p_i · H_i = −Σ_{i=1..q} p_i · Σ_{j=1..q} p_ij · log2(p_ij) [bits/symbol]
    • Information rate tells you how many bits/s of information the source really needs to send out: R = R_s · H [bits/s]
    • Efficient source coding: get the rate R_b as close as possible to the information rate R
      – Memoryless source: apply entropy coding, such as Shannon-Fano or Huffman, and RLC if the source is binary with mostly zeros
      – Generic sources with memory: remove redundancy first, then apply entropy coding to the "residuals"
    Practical Source Coding
    • Practical source coding is guided by information theory, with practical constraints such as the performance vs. processing complexity/delay trade-off
    • When you come to the practical source coding part, you can smile – as you should know everything
    • As we will learn, data rate is directly linked to required bandwidth; source coding is to encode the source with a data rate as small as possible, i.e.
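    The entropy and information-rate formulas can be checked numerically with a short sketch — a direct transcription of H = −Σ p_i · log2(p_i) and R = R_s · H for a memoryless source (the four-symbol distribution is our own example, not from the course notes):

    ```python
    import math

    def entropy(probs):
        """Memoryless source entropy: H = -sum(p_i * log2(p_i)) [bits/symbol]."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def information_rate(symbol_rate, probs):
        """R = Rs * H [bits/s]."""
        return symbol_rate * entropy(probs)

    p = [0.5, 0.25, 0.125, 0.125]     # example memoryless source
    H = entropy(p)                    # 1.75 bits/symbol
    R = information_rate(1000, p)     # 1750 bits/s at Rs = 1000 symbols/s
    ```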
  • Probability Interval Partitioning Entropy Codes Detlev Marpe, Senior Member, IEEE, Heiko Schwarz, and Thomas Wiegand, Senior Member, IEEE
    SUBMITTED TO IEEE TRANSACTIONS ON INFORMATION THEORY
    Abstract—A novel approach to entropy coding is described that provides the coding efficiency and simple probability modeling capability of arithmetic coding at the complexity level of Huffman coding. The key element of the proposed approach is given by a partitioning of the unit interval into a small set of disjoint probability intervals for pipelining the coding process along the probability estimates of binary random variables. According to this partitioning, an input sequence of discrete source symbols with arbitrary alphabet sizes is mapped to a sequence of binary symbols, and each of the binary symbols is assigned to one particular probability interval. With each of the intervals being represented by a fixed probability, the probability interval partitioning entropy (PIPE) coding process is based on the design and application of simple variable-to-variable length codes.
    [...] entropy coding, while the assignment of codewords to symbols is the actual entropy coding. For decades, two methods have dominated practical entropy coding: Huffman coding, invented in 1952 [8], and arithmetic coding, which goes back to initial ideas attributed to Shannon [7] and Elias [9] and for which the first practical schemes were published around 1976 [10][11]. Both entropy coding methods are capable of approximating the entropy limit (in a certain sense) [12]. For a fixed probability mass function, Huffman codes are relatively easy to construct. The most attractive property of Huffman codes is that their implementation can be efficiently realized by the use of variable-length code (VLC) tables.
  • Fast Algorithm for PQ Data Compression Using Integer DTCWT and Entropy Encoding
    International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 12, Number 22 (2017) pp. 12219-12227. © Research India Publications. http://www.ripublication.com Fast Algorithm for PQ Data Compression using Integer DTCWT and Entropy Encoding. Prathibha Ekanthaiah, Associate Professor, Department of Electrical and Electronics Engineering, Sri Krishna Institute of Technology, No 29, Chimney Hills, Chikkabanavara Post, Bangalore-560090, Karnataka, India, Orcid: 0000-0003-3031-7263; Dr. A. Manjunath, Principal, Sri Krishna Institute of Technology, No 29, Chimney Hills, Chikkabanavara Post, Bangalore-560090, Karnataka, India, Orcid: 0000-0003-0794-8542; Dr. Cyril Prasanna Raj, Dean & Research Head, Department of Electronics and Communication Engineering, MS Engineering College, Navarathna Agrahara, Sadahalli P.O., Off Bengaluru International Airport, Bengaluru-562110, Karnataka, India, Orcid: 0000-0002-9143-7755. Abstract: Smart meters are an integral part of the smart grid, which in addition to energy management also performs data management. Power Quality (PQ) data from smart meters need to be compressed for both storage and transmission, either through a wired or wireless medium. In this paper, PQ data compression is carried out by encoding significant features captured from the integer Dual Tree Complex Wavelet Transform (DTCWT). [...] metering infrastructures (smart metering), integration of distributed power generation, renewable energy resources and storage units, as well as high power quality and reliability [1]. Smart metering infrastructure sustains bidirectional data transfer and also decreases environmental effects; with this, the resilience and reliability of the power utility network can be improved effectively. The work highlights the need for development and technological advancement in smart grid communications [2].