Data-Integrity

Fundamentals of Data Representation: Error checking and correction

When you send data across the internet, or even from a USB drive to a computer, you are sending millions upon millions of ones and zeros. What would happen if one of them got corrupted? Think of this situation: you are buying a new game from an online retailer and put £40 into the payment box. You click on send and the number 40 is sent to your bank stored in a byte: 00101000. Now imagine the second most significant bit got corrupted on its way to the bank, so the bank received: 01101000. You'd be paying £104 for that game! Error checking and correction stops things like this from happening. There are many ways to detect and correct corrupted data; we are going to learn two.

Parity bits

Sometimes when you see ASCII code it appears to have only 7 bits. Surely a whole byte should be used to represent a character; after all, that would allow more characters than the measly 128 they can currently store. (Note: there is extended ASCII, which uses the 8th bit as well, but we don't need to cover that here.) The eighth bit is used as a parity bit to detect whether the data you have been sent is correct. It cannot tell you which bit is wrong, so it isn't corrective. There are two types of parity: odd and even. If you think back to primary school you will have learnt about odd and even numbers; hold that thought, we are going to need it.

Odd numbers: 1, 3, 5, 7, 9
Even numbers: 0, 2, 4, 6, 8 (note that 0 is even too)

Example: How to detect errors using parity bits

When we send binary data we count the number of 1s present in it. For example, take the ASCII character 'B', 1000010, which contains two 1s. We can then attach one more bit to the front of it and send it across the internet:

If we are using even parity: 01000010
If we are using odd parity: 11000010

Now when the data gets to the other end, and we were using even parity:

01001010 - odd number of 1s: there has been an error in transmission, so ask for the data to be sent again
01000010 - even number of 1s: the data has been sent correctly

Exercise: Test your knowledge of parity bits

Try to apply the correct parity bit to the following:

_1011010 (even parity). Answer: 01011010
_1011010 (odd parity). Answer: 11011010
_1111110 (even parity). Answer: 01111110
_0000000 (odd parity). Answer: 10000000

However, if we receive 10010110, knowing that the number was sent with odd parity, where is the error? We cannot tell. The best we can do is ask for the data to be resent and hope it's correct next time. Parity bits only provide error detection; we need something that detects and corrects.
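To make the mechanics concrete, here is a minimal Python sketch of parity generation and checking; the function names add_parity and check_parity are our own choices for the example, and the parity bit is prepended to a 7-bit string exactly as in the examples above.

```python
def add_parity(seven_bits: str, odd: bool = False) -> str:
    """Prepend a parity bit to a 7-bit string.

    Even parity: the parity bit makes the total count of 1s even.
    Odd parity: the parity bit makes the total count of 1s odd.
    """
    ones = seven_bits.count("1")
    if odd:
        parity = "1" if ones % 2 == 0 else "0"
    else:
        parity = "1" if ones % 2 == 1 else "0"
    return parity + seven_bits


def check_parity(eight_bits: str, odd: bool = False) -> bool:
    """Return True if the received byte matches the agreed parity scheme."""
    ones = eight_bits.count("1")
    return ones % 2 == (1 if odd else 0)


# ASCII 'B' is 1000010; even parity gives 01000010, odd parity 11000010.
assert add_parity("1000010") == "01000010"
assert add_parity("1000010", odd=True) == "11000010"

assert check_parity("01000010")          # arrived intact (even parity)
assert not check_parity("01001010")      # a flipped bit is detected
```

Note that a flipped bit is detected but not located, which is exactly why parity alone cannot correct errors.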
Checksum

A checksum or hash sum is a small-size datum derived from a block of digital data for the purpose of detecting errors which may have been introduced during its transmission or storage. It is usually applied to an installation file after it is received from the download server. By themselves, checksums are often used to verify data integrity, but they should not be relied upon to also verify data authenticity.

The actual procedure which yields the checksum, given a data input, is called a checksum function or checksum algorithm. Depending on its design goals, a good checksum algorithm will usually output a significantly different value even for small changes made to the input. This is especially true of cryptographic hash functions, which may be used to detect many data corruption errors and verify overall data integrity; if the computed checksum for the current data input matches the stored value of a previously computed checksum, there is a very high probability the data has not been accidentally altered or corrupted.

Checksum functions are related to hash functions, fingerprints, randomization functions, and cryptographic hash functions. However, each of those concepts has different applications and therefore different design goals. Checksums are also used as cryptographic primitives in larger authentication algorithms; for cryptographic systems with these design goals, see HMAC. Check digits and parity bits are special cases of checksums, appropriate for small blocks of data (such as Social Security numbers, bank account numbers, computer words, single bytes, etc.). Some error-correcting codes are based on special checksums which not only detect common errors but also allow the original data to be recovered in certain cases.

Checksum algorithms

Parity byte or parity word

The simplest checksum algorithm is the so-called longitudinal parity check, which breaks the data into "words" with a fixed number n of bits, and then computes the exclusive or (XOR) of all those words. The result is appended to the message as an extra word. To check the integrity of a message, the receiver computes the XOR of all its words, including the checksum; if the result is not a word of n zeros, the receiver knows a transmission error occurred. With this checksum, any transmission error which flips a single bit of the message, or an odd number of bits, will be detected as an incorrect checksum. However, an error which affects two bits will not be detected if those bits lie at the same position in two distinct words. Swapping two or more words will also not be detected. If the affected bits are independently chosen at random, the probability of a two-bit error being undetected is 1/n.

Modular sum

A variant of the previous algorithm is to add all the "words" as unsigned binary numbers, discarding any overflow bits, and append the two's complement of the total as the checksum. To validate a message, the receiver adds all the words in the same manner, including the checksum; if the result is not a word full of zeros, an error must have occurred. This variant, too, detects any single-bit error. The modular sum is used in SAE J1708.[1]
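To illustrate the two algorithms just described, here is a small Python sketch of an XOR (longitudinal parity) checksum and a modular sum checksum. The 8-bit word size, the example message, and the function names are our own choices, not part of any standard.

```python
def xor_checksum(words: list[int]) -> int:
    """Longitudinal parity check: XOR all the words together.

    The receiver XORs every word, including the checksum,
    and expects the result to be zero.
    """
    check = 0
    for w in words:
        check ^= w
    return check


def modular_sum_checksum(words: list[int], bits: int = 8) -> int:
    """Add all words, discard overflow, return the two's complement.

    The receiver sums every word, including the checksum, modulo 2^bits
    and expects the result to be zero.
    """
    mask = (1 << bits) - 1
    total = sum(words) & mask       # discard any overflow bits
    return (-total) & mask          # two's complement of the truncated total


message = [0x01, 0x02, 0xF0]

# Sender appends the checksum; receiver verifies the total comes to zero.
xc = xor_checksum(message)
assert xor_checksum(message + [xc]) == 0

mc = modular_sum_checksum(message)
assert (sum(message + [mc]) & 0xFF) == 0
```

As the text notes, both schemes catch any single-bit error, but neither notices two words being swapped, since XOR and addition are order-independent.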
Position-dependent checksums

The simple checksums described above fail to detect some common errors which affect many bits at once, such as changing the order of data words, or inserting or deleting words with all bits set to zero. The checksum algorithms most used in practice, such as Fletcher's checksum, Adler-32, and cyclic redundancy checks (CRCs), address these weaknesses by considering not only the value of each word but also its position in the sequence. This feature generally increases the cost of computing the checksum.

General considerations

A message that is m bits long can be viewed as a corner of an m-dimensional hypercube. A single-bit transmission error then corresponds to a displacement from a valid corner (the correct message and checksum) to one of the m adjacent corners. An error which affects k bits moves the message to a corner which is k steps removed from its correct corner. The goal of a good checksum algorithm is to spread the valid corners as far from each other as possible, so as to increase the likelihood that "typical" transmission errors will end up in an invalid corner.

Data validation and verification

Validation and verification are two ways to check that the data entered into a computer is correct. Data entered incorrectly is of little use.

Validation

Validation is an automatic computer check to ensure that the data entered is sensible and reasonable. It does not check the accuracy of data. For example, a secondary school student is likely to be aged between 11 and 16, so the computer can be programmed to accept only numbers between 11 and 16. This is a range check. However, this does not guarantee that the number typed in is correct. For example, a student's age might be 14, but if 11 is entered it will be valid but incorrect.

Types of validation

There are a number of validation types that can be used to check the data that is being entered (two of them are sketched in code at the end of this section):

Check digit: the last one or two digits in a code are used to check the other digits are correct. Example: bar code readers in supermarkets use check digits.
Format check: checks the data is in the right format. Example: a National Insurance number is in the form LL 99 99 99 L, where L is any letter and 9 is any number.
Length check: checks the data isn't too short or too long. Example: a password which needs to be six letters long.
Lookup table: looks up acceptable values in a table. Example: there are only seven possible days of the week.
Presence check: checks that data has been entered into a field. Example: in most databases a key field cannot be left blank.
Range check: checks that a value falls within the specified range. Example: the number of hours worked must be less than 50 and more than 0.
Spell check: looks up words in a dictionary. Example: when word processing.

Verification

Verification is performed to ensure that the data entered exactly matches the original source. There are two main methods of verification:

1. Double entry - entering the data twice and comparing the two copies. This effectively doubles the workload, and as most people are paid by the hour, it costs more too.
2. Proofreading data - this method involves someone checking the data entered against the original document.
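Two of the validation checks from the table above are easy to sketch in Python. The age limits follow the range check example in the text, and the regular expression is one possible rendering of the LL 99 99 99 L National Insurance format; the function names are our own.

```python
import re


def range_check(age: int, low: int = 11, high: int = 16) -> bool:
    """Range check: accept only ages a secondary school student could have."""
    return low <= age <= high


# Format check: two letters, three pairs of digits, one letter (LL 99 99 99 L).
NI_PATTERN = re.compile(r"^[A-Z]{2} \d{2} \d{2} \d{2} [A-Z]$")


def format_check(ni_number: str) -> bool:
    """Format check: the data must match the National Insurance layout."""
    return NI_PATTERN.match(ni_number) is not None


assert range_check(14)
assert range_check(11)            # valid, but could still be incorrect data
assert not range_check(25)

assert format_check("QQ 12 34 56 C")
assert not format_check("Q1 23 45 67")
```

Notice that range_check(11) passes even when the true age is 14: validation accepts sensible data, but only verification against the source can confirm it is correct.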