Error detection and correction

In mathematics, computer science, telecommunication, and information theory, error detection and correction has great practical importance in maintaining data (information) integrity across noisy channels and less-than-reliable storage media.

General definitions of terms

Error detection and error correction

  • Error detection is the ability to detect errors caused by noise or other impairments during transmission from the transmitter to the receiver.
  • Error correction adds the ability to identify and correct the detected errors.

There are two ways to design the channel code and protocol for an error correcting system.

  • Automatic repeat request (ARQ): The transmitter sends the data and also an error detection code, which the receiver uses to check for errors. If it does not find any errors, it sends a message (an ACK, or acknowledgment) back to the transmitter. The transmitter re-transmits any data that was not ACKed.
  • Forward error correction (FEC): The transmitter encodes the data with an error-correcting code and sends the coded message. The receiver never sends any messages back to the transmitter. The receiver decodes what it receives into the "most likely" data. The codes are designed so that it would take an "unreasonable" amount of noise to trick the receiver into misinterpreting the data.
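The ARQ round trip can be sketched in a few lines of Python. Everything here is an illustrative assumption, not a standardized protocol: a toy bit-flip channel, a single parity bit as the error detection code, and an arbitrary retry limit.

```python
import random

def corrupt(frame: str, p: float) -> str:
    """Toy noisy channel: flips each bit independently with probability p."""
    return "".join(b if random.random() >= p else str(1 - int(b)) for b in frame)

def even_parity(bits: str) -> str:
    """Error detection code: one check bit making the count of 1s even."""
    return str(bits.count("1") % 2)

def send_with_arq(data: str, p: float = 0.0, max_tries: int = 10):
    """Stop-and-wait ARQ: retransmit until the receiver ACKs the frame."""
    frame = data + even_parity(data)          # data followed by check bit
    for attempt in range(1, max_tries + 1):
        received = corrupt(frame, p)
        if received.count("1") % 2 == 0:      # check passes: receiver ACKs
            return received[:-1], attempt
    return None, max_tries                    # error persisted; give up
```

Over a noiseless channel, `send_with_arq("1011")` returns `("1011", 1)`: the first frame passes the check and is ACKed immediately.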


Error detection schemes

Several schemes exist to achieve error detection, and they are generally quite simple. All error detection codes (which include all error-detection-and-correction codes) transmit more bits than were in the original data. Most codes are "systematic": the transmitter sends the original data bits, followed by check bits, i.e. extra bits (usually referred to as redundancy in the literature) which accompany the data bits for the purpose of error detection.


(In a system that uses a "non-systematic" code, such as some raptor codes, data bits are transformed into at least as many code bits, and the transmitter sends only the code bits.)


Repetition schemes

Given a stream of data to be transmitted, the data is broken up into blocks of bits, and each block is sent some predetermined number of times. For example, to send "1011", we may repeat the block three times. Variations on this theme exist.


Suppose we send "1011 1011 1011", and this is received as "1010 1011 1011". As one group differs from the other two, we can determine that an error has occurred. This scheme is not very efficient, and it is susceptible to errors that occur in exactly the same place in each group (e.g. "1010 1010 1010" in the example above would be accepted as correct).
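The repetition scheme above can be sketched directly in Python; the space-separated block format simply mirrors the example in the text.

```python
def encode_repetition(block: str, copies: int = 3) -> str:
    """Send the block a predetermined number of times."""
    return " ".join([block] * copies)

def repetition_error_detected(received: str) -> bool:
    """An error is detected when the received copies disagree."""
    groups = received.split()
    return any(g != groups[0] for g in groups)
```

Note that `repetition_error_detected("1010 1010 1010")` returns `False`: an error hitting the same position in every copy goes unnoticed, exactly the weakness described above.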


The scheme, however, is extremely simple, and is in fact used in some transmissions of numbers stations.


Parity schemes

Main article: Parity bit

The stream of data is broken up into blocks of bits, and the number of 1 bits is counted. Then, a "parity bit" is set (or cleared) if the number of one bits is odd (or even). (This scheme is called even parity; odd parity can also be used.) If the tested blocks overlap, then the parity bits can be used to isolate the error, and even correct it if the error affects a single bit: this is the principle behind the Hamming code.


There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of bit errors (one, three, five, and so on). If an even number of bits (two, four, six and so on) are flipped, the parity bit appears to be correct, even though the data are corrupt.
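A minimal sketch of even parity in Python, including the blind spot just described:

```python
def add_even_parity(bits: str) -> str:
    """Append a parity bit so the total number of 1s is even."""
    return bits + str(bits.count("1") % 2)

def parity_ok(bits_with_parity: str) -> bool:
    """The check passes when the 1-count (parity bit included) is even."""
    return bits_with_parity.count("1") % 2 == 0
```

For "1011" the encoded word is "10111". Flipping one bit (e.g. "00111") fails the check, but flipping two bits (e.g. "01111") passes it even though the data are corrupt.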


Polarity schemes

One less commonly used form of error correction and detection is transmitting a polarity reversed bitstream simultaneously with the bitstream it is meant to correct. This scheme is very weak at detecting bit errors, and marginally useful for byte or word error detection and correction. However, at the physical layer in the OSI model, this scheme can aid in error correction and detection.


Polarity symbol reversal is (probably) the simplest form of Turbo code, but technically not a Turbo code at all.

  • Turbo codes do not work at the bit level.
  • Turbo codes typically work at the character or symbol level depending on their placement in the OSI model.
  • Character here refers to Baudot, ASCII-7, the 8-bit byte or the 16-bit word.

Original transmitted symbol: 1011

  • transmit 1011 on carrier wave 1 (CW1)
  • transmit 0100 on carrier wave 2 (CW2)

Receiver end

  • are the bit polarities of CW1 and CW2 opposite at every position?
  • if any bit of CW1 equals the corresponding bit of CW2, signal a bit error (triggering a more complex ECC)

This polarity reversal scheme works fairly well at low data rates (below 300 baud) with very redundant data like telemetry data.


Cyclic redundancy checks

Main article: Cyclic redundancy check

More complex error detection (and correction) methods make use of the properties of finite fields and polynomials over such fields.


The cyclic redundancy check treats a block of data as the coefficients of a polynomial and divides it by a fixed, predetermined generator polynomial. The coefficients of the remainder of this division are taken as the redundant data bits, the CRC.


On reception, one can recompute the CRC from the payload bits and compare this with the CRC that was received. A mismatch indicates that an error occurred.
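The division is ordinary polynomial long division over GF(2), where subtraction is XOR. A minimal sketch, using an illustrative 3-bit generator (x^3 + x + 1, written "1011") rather than any standardized CRC polynomial:

```python
def crc_remainder(data: str, poly: str) -> str:
    """Divide the data polynomial (with n zero bits appended) by the
    generator polynomial over GF(2); the remainder is the CRC."""
    n = len(poly) - 1                          # degree of the generator
    bits = [int(b) for b in data] + [0] * n    # append n zero bits
    gen = [int(b) for b in poly]
    for i in range(len(data)):
        if bits[i]:                            # leading bit set: XOR in generator
            for j in range(len(gen)):
                bits[i + j] ^= gen[j]
    return "".join(str(b) for b in bits[-n:])  # last n bits hold the remainder
```

For the message "11010011101100" the CRC is "100"; appending it and recomputing over the whole codeword yields "000", i.e. no mismatch, which is exactly the receiver-side check described above.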


Checksum

Main article: Checksum

A checksum of a message is an arithmetic sum of message code words of a certain word length, for example byte values, and their carry value. The sum is negated by means of ones-complement, and stored or transferred as an extra code word extending the message.


On the receiver side, a new checksum may be calculated over the extended message. If the new checksum is not 0, an error is detected.
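A sketch of a ones-complement checksum over 8-bit words (the word width is an illustrative choice; the Internet checksum, for example, uses 16-bit words). Note that in ones-complement arithmetic the all-ones word is negative zero, so summing a correct extended message yields all ones, i.e. "0".

```python
def ones_complement_checksum(words, bits=8):
    """Sum the message words with end-around carry, then negate."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)  # fold the carry back in
    return (~total) & mask

def checksum_ok(words_with_checksum, bits=8):
    """Sum message plus checksum: all ones (ones-complement 0) means no error detected."""
    mask = (1 << bits) - 1
    total = 0
    for w in words_with_checksum:
        total += w
        total = (total & mask) + (total >> bits)
    return total == mask
```

For the bytes `[0x12, 0x34, 0xF0]` the checksum is `0xC8`, and verifying `[0x12, 0x34, 0xF0, 0xC8]` passes.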


Hamming distance based checks

If we want to detect d bit errors in an n-bit word, we can map every n-bit word into a larger (n + d + 1)-bit codeword so that the minimum Hamming distance between any two valid codewords is d + 1. If the receiver sees a word that does not match any valid codeword (i.e. one at Hamming distance x <= d from a valid codeword), it detects it as erroneous. Moreover, d or fewer bit errors can never transform one valid codeword into another, because the Hamming distance between valid codewords is at least d + 1; such errors only produce invalid words, which are correctly detected. Given a stream of m*n bits, we can thus detect up to d bit errors in every n-bit word; in fact, a maximum of m*d errors can be detected if each n-bit word is transmitted with at most d errors.
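The relationship between minimum distance and guaranteed detection can be sketched over an explicit codebook:

```python
from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def guaranteed_detectable(codebook) -> int:
    """A code with minimum Hamming distance d + 1 detects any d or fewer bit errors."""
    d_min = min(hamming(a, b) for a, b in combinations(codebook, 2))
    return d_min - 1
```

The triple-repetition codebook {"000", "111"} has minimum distance 3 and therefore detects up to 2 bit errors per word, while the full set of 2-bit words {"00", "01", "10", "11"} has minimum distance 1 and detects none.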


Error correction

The above methods are sufficient to determine whether some data has been received in error, but often this is not enough. Consider an application such as simplex teletype over radio (SITOR), where a message must be received both quickly and without error. Merely knowing where the errors occurred fails the second condition, since the message will be incomplete; and waiting for the message to be repeated (the link being simplex) fails the first, since the receiver may have to wait a long time for the repetition to fill the gaps left by the errors.


It would be advantageous if the receiver could somehow determine what the error was and thus correct it. Is this even possible? Yes: consider the NATO phonetic alphabet. If a sender transmits the word "WIKI" as "WHISKEY INDIA KILO INDIA", and this is received (with * marking letters received in error) as "W***KEY I**I* **LO **DI*", it is possible to correct all the errors, since only one word in the NATO phonetic alphabet starts with "W" and ends in "KEY", and similarly for the other words. This idea is also present in some error correcting codes (ECC).
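The phonetic-alphabet trick is essentially erasure correction over a known codebook: a garbled word is recoverable whenever exactly one codeword fits the surviving letters. A toy sketch, using an abbreviated word list for brevity (the real alphabet has 26 words):

```python
# Abbreviated, illustrative subset of the NATO phonetic alphabet.
NATO = ["WHISKEY", "INDIA", "KILO", "DELTA", "ECHO", "HOTEL"]

def correct_word(garbled: str):
    """Recover a word whose erased letters are marked '*'.
    Correction succeeds only when exactly one codeword fits."""
    fits = [w for w in NATO
            if len(w) == len(garbled)
            and all(g in ("*", c) for g, c in zip(garbled, w))]
    return fits[0] if len(fits) == 1 else None
```

With this list, "W***KEY" decodes uniquely to "WHISKEY" and "I**I*" to "INDIA"; when more than one codeword fits, the function reports failure instead of guessing.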


Automatic repeat request

Automatic Repeat-reQuest (ARQ) is an error control method for data transmission which makes use of error detection codes, acknowledgment and/or negative acknowledgement messages and timeouts to achieve reliable data transmission. An acknowledgment is a message sent by the receiver to the transmitter to indicate that it has correctly received a data frame.


Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e. within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions.


A few types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ.


Hybrid ARQ is a combination of ARQ and forward error correction.


Error-correcting code

An error-correcting code (ECC) is a code in which each data signal conforms to specific rules of construction so that departures from this construction in the received signal can generally be automatically detected and corrected. It is used in computer data storage, for example in dynamic RAM, and in data transmission.


Some codes can correct a certain number of bit errors and merely detect a further number of bit errors. Codes that can correct one error are termed single-error-correcting (SEC), and those that can detect two are termed double-error-detecting (DED). The simplest error correcting codes can correct single-bit errors and detect double-bit errors; other codes can correct and detect more errors than these.


An error-correcting code which corrects all errors of up to n bits correctly is also an error-detecting code which can detect at least all errors of up to 2n bits.


Two main categories are convolutional codes and block codes. Examples of the latter are the Hamming code, BCH code, Reed-Solomon code, Reed-Muller code, binary Golay code, and turbo code.


Shannon's theorem is an important theorem in error correction which describes the maximum attainable efficiency of an error-correcting scheme versus the levels of noise interference expected. In general, these methods put redundant information into the data stream following certain algebraic or geometric relations, so that the decoded stream, if damaged in transmission, can be corrected. The effectiveness of the coding scheme is measured in terms of the code rate, which is the ratio of useful information bits to total transmitted bits, and the coding gain, which is the difference between the SNR levels of the uncoded and coded systems required to reach the same BER levels.


Error-correcting memory

Because soft errors are extremely common in the DRAM of computers used in satellites and space probes, such memory is structured as ECC memory (also called "EDAC protected memory"). Typically, every bit of memory is refreshed at least 15 times per second, and during this memory refresh the memory controller reads each word of memory and writes the (corrected) word back. Such memory controllers traditionally use a Hamming code, although some use triple modular redundancy. Even though a single cosmic ray can upset many physically neighboring bits in a DRAM, such memory systems are designed so that neighboring bits belong to different words; single event upsets (SEUs) therefore cause only a single error in any particular word, which can be corrected by a single-bit error correcting code. As long as no more than a single bit in any particular word is hit by an error between refreshes, such a memory system presents the illusion of an error-free memory. [1] [2]


ECC memory provides greater data accuracy and system uptime by protecting against soft errors in computer memory.


Applications

Applications that require low latency (such as telephone conversations) cannot use Automatic Repeat reQuest (ARQ); they must use Forward Error Correction (FEC). By the time an ARQ system discovers an error and re-transmits the data, the re-sent data will arrive too late to be of any use.


Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC, because when an error occurs the original data is no longer available. (This is also why FEC is used in data storage systems such as RAID and distributed data stores.)


Applications that require extremely low error rates (such as digital money transfers) must use ARQ.


The Internet

In a typical TCP/IP stack, error detection is performed at multiple levels:

  • Each Ethernet frame carries a CRC-32 checksum. The receiver discards frames if their checksums don't match.
  • The IPv4 header contains a header checksum of the contents of the header (excluding the checksum field). Packets with checksums that don't match are discarded.
  • The checksum was omitted from the IPv6 header, because most current link layer protocols have error detection.
  • UDP has an optional checksum. Packets with wrong checksums are discarded.
  • TCP has a checksum of the payload, the TCP header (excluding the checksum field), and the source and destination addresses of the IP header. Packets found to have incorrect checksums are discarded and are eventually retransmitted when the sender receives a triple duplicate ACK or a timeout occurs.


Deep-space telecommunications

Figure: NASA's deep space mission ECC codes (code imperfectness)

NASA has used many different error correcting codes. For missions between 1969 and 1977, the Mariner spacecraft used a Reed-Muller code. The noise these spacecraft were subject to was well approximated by a bell curve (normal distribution), so the Reed-Muller codes were well suited to the situation.


The Voyager 1 and Voyager 2 spacecraft transmitted color pictures of Jupiter and Saturn in 1979 and 1980.

  • Color image transmission required three times the amount of data, so the Golay (24,12,8) code was used.
  • This Golay code is only 3-error-correcting, but it could be transmitted at a much higher data rate.
  • Voyager 2 went on to Uranus and Neptune, and the code was switched to a concatenated Reed-Solomon / convolutional code for its substantially more powerful error-correcting capabilities.
  • Current DSN error correction is done with dedicated hardware.
  • For some NASA deep space craft, such as those in the Voyager program, Cassini-Huygens (Saturn), New Horizons (Pluto) and Deep Space 1, the use of hardware ECC may not be feasible for the full duration of the mission.

The different kinds of deep space and orbital missions that are conducted suggest that trying to find a "one size fits all" error correction system will be an ongoing problem for some time to come.

  • For missions close to the Earth, the nature of the "noise" differs from that encountered by a spacecraft headed toward the outer planets.
  • In particular, if a transmitter on a distant spacecraft operates at low power, the problem of correcting for noise grows with distance from the Earth.

Satellite broadcasting (DVB)

Figure: block 2D and 3D bit allocation models used by ECC coding systems in terrestrial telecommunications

The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and High Definition TV) and IP data. Transponder availability and bandwidth constraints have limited this growth, because transponder capacity is determined by the selected modulation scheme and Forward error correction (FEC) rate.


Overview

  • QPSK coupled with traditional Reed-Solomon and Viterbi codes has been used for nearly 20 years for the delivery of digital satellite TV.
  • Higher-order modulation schemes such as 8PSK, 16QAM and 32QAM have enabled the satellite industry to increase transponder efficiency substantially.
  • This increase in the information rate in a transponder comes at the expense of increased carrier power to meet the threshold requirement of existing antennas.
  • Tests conducted using the latest chipsets demonstrate that the performance achieved by using Turbo Codes may be even lower than the 0.8 dB figure assumed in early designs.
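The efficiency/power trade-off in these bullets can be sketched numerically: the useful bit rate of a transponder is roughly symbol rate × bits per symbol × FEC code rate. The 27.5 Msym/s symbol rate below is a hypothetical example, not a figure from any standard:

```python
import math

def info_rate_mbps(symbol_rate_msym: float, modulation_order: int,
                   fec_rate: float) -> float:
    """Useful bit rate = symbol rate * log2(M) bits/symbol * FEC code rate."""
    return symbol_rate_msym * math.log2(modulation_order) * fec_rate

sym_rate = 27.5  # Msym/s, hypothetical transponder
qpsk = info_rate_mbps(sym_rate, 4, 3/4)   # 2 bits/symbol -> 41.25 Mbit/s
psk8 = info_rate_mbps(sym_rate, 8, 3/4)   # 3 bits/symbol -> 61.875 Mbit/s
```

Moving from QPSK to 8PSK adds one bit per symbol (a 50% throughput gain at the same symbol rate and code rate), which is why each step up in modulation order buys an incremental gain but demands more carrier power to keep the same error rate.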


Data storage

Error detection and correction codes are often used to improve the reliability of data storage media.


A "parity track" was present on the first magnetic tape data storage in 1951. The "Optimal Rectangular Code" used in group code recording tapes not only detects but also corrects single-bit errors.
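The idea behind a rectangular code can be sketched with row and column parity over a small bit grid; this illustrates the principle only, not the actual tape format:

```python
def rect_parity_correct(grid, row_par, col_par):
    """Correct a single flipped bit in a grid using row and column parity.

    Each parity bit is the parity (XOR) of its row or column; a single-bit
    error shows up as exactly one bad row parity and one bad column parity,
    which together pinpoint the flipped bit.
    """
    bad_rows = [i for i, row in enumerate(grid)
                if sum(row) % 2 != row_par[i]]
    bad_cols = [j for j in range(len(grid[0]))
                if sum(row[j] for row in grid) % 2 != col_par[j]]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        i, j = bad_rows[0], bad_cols[0]
        grid[i][j] ^= 1  # flip the located bit back
    return grid

data = [[1, 0, 1],
        [0, 1, 1]]
row_par = [sum(r) % 2 for r in data]
col_par = [sum(r[j] for r in data) % 2 for j in range(3)]
data[1][2] ^= 1  # simulate a single-bit error
assert rect_parity_correct(data, row_par, col_par) == [[1, 0, 1], [0, 1, 1]]
```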


Some file formats, such as ZIP, include a checksum (most often CRC-32) to detect corruption and truncation.
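As a sketch, Python's standard zlib module computes the same CRC-32 polynomial that the ZIP format records for each archive member:

```python
import zlib

payload = b"archive member contents"
stored_crc = zlib.crc32(payload)  # checksum as recorded at archive time

# On extraction, the checksum is recomputed and compared:
assert zlib.crc32(payload) == stored_crc          # intact data passes
corrupted = b"archive member cOntents"
assert zlib.crc32(corrupted) != stored_crc        # a flipped byte is detected
assert zlib.crc32(payload[:-1]) != stored_crc     # truncation is detected too
```

Note that a CRC only detects damage; it carries no redundancy with which to repair it.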


Reed-Solomon codes are used in compact discs to correct errors caused by scratches.


Modern hard drives use CRC codes to detect, and Reed-Solomon codes to correct, minor errors in sector reads, and to recover data from sectors that have "gone bad", relocating it to spare sectors[3].


RAID systems use a variety of error correction techniques to correct errors caused when a hard drive completely fails.
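The simplest of these techniques, single-parity RAID (as in RAID 5), stores the XOR of the data blocks so that any one lost block can be rebuilt; a minimal sketch:

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR byte strings of equal length together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# RAID-5 style: the parity block is the XOR of the data blocks.
d0, d1, d2 = b"disk", b"data", b"here"
parity = xor_blocks(d0, d1, d2)

# If one drive fails, its block is the XOR of the survivors and the parity:
recovered_d1 = xor_blocks(d0, d2, parity)
assert recovered_d1 == d1
```

This rebuilds one failed drive; tolerating two simultaneous failures requires a second, independent parity, as in RAID 6.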


Information theory and error detection and correction

Information theory tells us that whatever the probability of error in transmission or storage, it is possible to construct error-correcting codes in which the likelihood of failure is arbitrarily low, although this requires adding increasing amounts of redundant data to the original, which might not be practical when the error probability is very high. Shannon's theorem sets an upper bound to the error correction rate that can be achieved (and thus the level of noise that can be tolerated) using a fixed amount of redundancy, but does not tell us how to construct such an optimal encoder.
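The simplest illustration is the n-fold repetition code with majority voting: assuming independent bit flips with probability p, the failure probability falls toward zero as n grows, while the useful rate falls as 1/n:

```python
from math import comb

def repetition_failure_prob(n: int, p: float) -> float:
    """Probability that majority decoding of an n-fold repetition code fails,
    i.e. more than half the copies are flipped (n odd, flip probability p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.1  # 10% chance each transmitted copy is flipped
for n in (1, 3, 9, 21):
    print(n, repetition_failure_prob(n, p))
# The failure probability shrinks toward zero as redundancy grows,
# but the useful data rate (1/n of the raw rate) shrinks with it.
```

Shannon's result says far better trade-offs exist: near-capacity codes achieve low failure probability without driving the rate to zero, but the theorem does not say how to build them.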


Error-correcting codes can be divided into block codes and convolutional codes. Block error-correcting codes, such as Reed-Solomon codes, transform a chunk of bits into a (longer) chunk of bits in such a way that errors up to some threshold in each block can be detected and corrected.
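The Hamming(7,4) code is the classic small example of a block code: each 4 data bits become a 7-bit codeword, and any single flipped bit in the block can be located and corrected. A minimal sketch:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword.

    Bit positions 1..7; positions 1, 2 and 4 hold parity, the rest data."""
    c = [0] * 8                      # index 0 unused, for 1-based positions
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def hamming74_decode(code):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = [0] + list(code)
    # The syndrome is the (1-based) position of the error, or 0 if none.
    syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                + 2 * (c[2] ^ c[3] ^ c[6] ^ c[7])
                + 4 * (c[4] ^ c[5] ^ c[6] ^ c[7]))
    if syndrome:
        c[syndrome] ^= 1
    return [c[3], c[5], c[6], c[7]]

word = [1, 0, 1, 1]
sent = hamming74_encode(word)
sent[2] ^= 1                         # flip one bit in transit
assert hamming74_decode(sent) == word
```

Reed-Solomon codes apply the same block idea to multi-bit symbols rather than single bits, which is what makes them strong against error bursts.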


However, in practice errors often occur in bursts rather than at random. This is often compensated for by shuffling (interleaving) the bits in the message after coding. Then any burst of bit-errors is broken up into a set of scattered single-bit errors when the bits of the message are unshuffled (de-interleaved) before being decoded.
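A block interleaver makes this concrete: symbols are written into a grid row by row and read out column by column, so consecutive channel errors land far apart after de-interleaving. Integers stand in for coded symbols in this sketch:

```python
def interleave(bits, rows):
    """Write row-wise into a rows x cols grid, read out column-wise."""
    cols = len(bits) // rows
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows):
    """Invert interleave(): write column-wise, read out row-wise."""
    cols = len(bits) // rows
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

msg = list(range(12))                # stand-ins for coded symbols
tx = interleave(msg, rows=3)
assert deinterleave(tx, rows=3) == msg

# A burst hitting 3 consecutive transmitted symbols...
burst = set(tx[4:7])
# ...corresponds to well-separated positions in the original order:
print(sorted(msg.index(s) for s in burst))   # -> [2, 5, 9]
```

After de-interleaving, each code block sees at most one symbol from the burst, which a single-error-correcting block code can then repair.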


List of error-correction, error-detection methods

This list mixes error-correcting codes (Reed-Solomon, for example) with practical error-detection techniques (such as the check digit).


  • Berger code
  • Chipkill
  • Constant-weight code
  • Convolutional code
  • Differential space–time code
  • Erasure code
  • Forward error correction
  • Fountain code
  • Golay code (binary and ternary)
  • Goppa code
  • Group code
  • Hadamard code
  • Hagelbarger code
  • Hamming code
  • Lexicographic code
  • Longitudinal redundancy check
  • Low-density parity-check (LDPC) code
  • LT code
  • m of n code
  • Online code
  • Parity bit
  • Raptor code
  • Reed-Muller code
  • Reed-Solomon code
  • Space–time block code
  • Space–time trellis code
  • Sparse graph code
  • Tornado code
  • Triple modular redundancy
  • Turbo code
  • Viterbi decoder
  • Walsh code

Practical uses of Error Correction methods

  • Concatenated codes, as used on the Compact Disc and the Voyager Program spacecraft
  • Check digits, such as the last digit of a UPC barcode
  • The Luhn algorithm (mod 10) and the Luhn mod N algorithm, used to validate identification numbers such as credit card numbers
  • The Verhoeff algorithm, a decimal check-digit scheme
  • Checksums, used to protect data in transmission and storage
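As a concrete example of a check-digit scheme from this list, the Luhn (mod 10) algorithm:

```python
def luhn_check_digit(number: str) -> int:
    """Compute the Luhn check digit for a string of decimal digits."""
    total = 0
    # Walk right to left; double every second digit (starting with the
    # rightmost, i.e. the one adjacent to the appended check digit) and
    # sum the digits of each result.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def luhn_valid(number: str) -> bool:
    return luhn_check_digit(number[:-1]) == int(number[-1])

assert luhn_valid("79927398713")        # classic Luhn test number
assert not luhn_valid("79927398710")    # wrong check digit
assert not luhn_valid("79927398173")    # this adjacent-digit swap is caught
```

The check digit catches all single-digit errors and most (not all) adjacent transpositions; the Verhoeff algorithm was designed to catch all of them.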

See also

Error Correction Standardization

  • Federal Standard 1037C, Telecommunications: Glossary of Telecommunication Terms
  • MIL-STD-188, a series of U.S. military standards relating to telecommunications

Research Conferences on Error Correction

  • 4th International Symposium on Turbo Codes
  1. Website http://www-turbo.enst-bretagne.fr/
  2. Website http://www.turbo-coding-2006.org/

External links

Related websites on error correction


  • Low-density parity-check (LDPC) codes
  • Turbo codes
  • Fountain codes
  • Minimum distance and covering radius of codes
  • Linear codes

Error correction
Decade of method introduction
1850s-1900s: check digit
1940s-1960s: checksum
1960s: Reed-Solomon
1960s: LDPC codes
1990s: Turbo codes
1990s: Space-time code
Related topics
Information theory · Shannon limit


The Wikipedia article included on this page is licensed under the GFDL.