Error Correction
Error-correcting codes
(not on the final)
1. Triple-bit voting: send 111 for 1, 000 for 0. Very inefficient!
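A minimal sketch of this repetition scheme (function names are illustrative):

```python
def encode3(bits):
    """Repeat each data bit three times: 1 -> 111, 0 -> 000."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode3(received):
    """Majority vote over each group of three received bits."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]
```

Any single flipped bit in a group of three is outvoted, so one error per group is corrected, at the cost of a 1/3 code rate.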
2. Example 6.9 in Stallings 8th edition (not, alas, in 7th):
   data   codeword
    00    00000
    01    00111
    10    11001
    11    11110
The Hamming distance between two codewords is the number of bit positions
in which they differ. Any two of the codewords above differ in at least
three bit positions; that is, the minimum Hamming distance is three. If a
single bit is flipped, the received word has a unique "closest" codeword
(in the sense of Hamming distance), so we can figure out what the original
codeword must have been.
Note that in this example we are transmitting 5 bits to communicate 2 bits;
this is only slightly more efficient than the triple-bit-voting mechanism
where we would transmit 6 bits to communicate 2 bits.
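The nearest-codeword decoding described above can be sketched as follows (the table is the one from the example; names are illustrative):

```python
# Data -> codeword table from the example (minimum Hamming distance 3).
CODE = {'00': '00000', '01': '00111', '10': '11001', '11': '11110'}

def hamming_distance(a, b):
    """Number of bit positions where a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def decode_nearest(received):
    """Return the data whose codeword is closest to the received word."""
    return min(CODE, key=lambda d: hamming_distance(CODE[d], received))
```

For example, receiving 00101 (codeword 00111 with one bit flipped) decodes to 01, since 00101 is at distance 1 from 00111 and at least distance 2 from every other codeword.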
3. 2-D parity
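A sketch of 2-D parity, assuming even parity over a small bit grid (the layout and names are illustrative): append one parity bit per row and one per column; a single flipped data bit then fails exactly one row check and one column check, which locates it.

```python
def row_col_parity(grid):
    """Even parity of each row and each column of a bit grid."""
    rows = [sum(r) % 2 for r in grid]
    cols = [sum(c) % 2 for c in zip(*grid)]
    return rows, cols

def correct_single_error(grid, rows, cols):
    """Recompute parities; a single flipped data bit fails exactly one
    row check and one column check, which locates it. Flip it back."""
    fresh_rows, fresh_cols = row_col_parity(grid)
    bad_r = [i for i, (a, b) in enumerate(zip(rows, fresh_rows)) if a != b]
    bad_c = [j for j, (a, b) in enumerate(zip(cols, fresh_cols)) if a != b]
    if bad_r and bad_c:
        grid[bad_r[0]][bad_c[0]] ^= 1
    return grid
```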
4. Hamming code (used for error-correcting RAM, etc.)
8 data bits, 4 check bits
16 data bits, 5 check bits
2^n data bits, n+1 check bits
Worked example: data bits 10101101 (positions 0-7, left to right).
1 1 1 0  |1   parity of the bits whose position has bit 0 (the low bit) = 0
10  11   |1   parity of the bits whose position has bit 1 = 0
1010     |0   parity of the bits whose position has bit 2 = 0 (the first half)
10101101 |1   parity of the entire data word
Verify we can recover from any 1-bit error in the data
What happens if the second parity bit arrives as a 0 instead of a 1?
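The check-bit computation above can be sketched as follows, using the convention from the worked example (parity over positions whose binary expansion has a given bit equal to 0, plus one overall parity bit); this is not the standard interleaved-position Hamming layout, and the names are illustrative:

```python
def check_bits(data):
    """data: list of bits at positions 0..len-1 (here 8 bits -> 3+1 checks)."""
    n = len(data).bit_length() - 1          # 8 data bits -> 3 position bits
    checks = []
    for i in range(n):
        group = [b for pos, b in enumerate(data) if not (pos >> i) & 1]
        checks.append(sum(group) % 2)       # parity of bits with bit i of pos = 0
    checks.append(sum(data) % 2)            # overall parity
    return checks

def locate_error(data, checks):
    """Assuming at most one *data* bit flipped: check i fails exactly when
    bit i of the flipped position is 0, so the checks that still pass
    spell out the position in binary."""
    fresh = check_bits(data)
    if fresh == checks:
        return None                         # no error detected
    pos = 0
    for i in range(len(checks) - 1):
        if fresh[i] == checks[i]:
            pos |= 1 << i
    return pos
```

This reproduces the worked example: data 10101101 gives checks [1, 1, 0, 1].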
Reed-Solomon codes (used on CDs, DVDs, etc) (NOT DISCUSSED)
Given k data values (e.g., k = 223), send n > k transmitted values as follows.
Let the k data values be a[0] ... a[k-1].
Form the polynomial P(X) = a[0] + a[1]X + a[2]X^2 + ... + a[k-1]X^(k-1).
Evaluate P at n points x0 ... x(n-1), and send those n values.
Use a "finite field" so there's no roundoff.
Each field value (an xi or ai) is a multi-bit symbol.
Max correction: (n-k)/2 symbol errors, which is pretty good; n-k = # of extra symbols.
RS works well for burst errors (hence its use on CDs/DVDs); it is not as
efficient as other codes for single-bit errors.
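To make the construction concrete, here is a sketch over the prime field GF(929) (an assumption chosen for readability; real CD/DVD codes use GF(2^8), and correcting values that arrive *wrong*, rather than missing, needs an algorithm such as Berlekamp-Welch, not shown). It demonstrates the key property: any k of the n transmitted values determine the data.

```python
P = 929  # prime field modulus -- an assumption for this sketch

def rs_encode(data, n):
    """data: k field symbols a[0..k-1]; transmit P(x) for x = 0..n-1."""
    def poly(x):
        return sum(a * pow(x, i, P) for i, a in enumerate(data)) % P
    return [poly(x) for x in range(n)]

def lagrange_coeffs(points):
    """Recover the k polynomial coefficients (the data) from any k of the
    transmitted (x, y) pairs, by Lagrange interpolation mod P."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis = [1]                 # coefficients of L_i(x), low degree first
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)     # multiply basis by (x - xj)
            for d, c in enumerate(basis):
                new[d] = (new[d] - xj * c) % P
                new[d + 1] = (new[d + 1] + c) % P
            basis = new
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P  # divide via Fermat inverse
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % P
    return coeffs
```

With k = 3 and n = 7, losing any 4 of the 7 transmitted values still leaves enough points to interpolate the degree-2 polynomial and read off the data.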