Week 3

4b/5b: each 5-bit code has at most 1 leading 0 and at most 2 trailing 0's.
Send with NRZI, so runs of 1's are OK.

    0000  11110
    0001  01001
    0010  10100
    0011  10101
      ..
    1100  11010
    1101  11011
    1110  11100
    1111  11101

    IDLE  11111
    DEAD  00000
    HALT  00100

Shannon's Law

2.3: basics of framing (on a dedicated serial link):
    Ethernet (interpacket gaps)
    char stuffing: SYN SOH header STX data ETX
        escape char is DLE; stuffed as a prefix to ETX, DLE in the data
    4b/5b and special bit patterns
    hdlc (typical bit-oriented)
    bisync (typical byte-oriented framing)
        End of packet is flagged with ETX
        Any ETX in the body is "escaped" with an extra DLE char
        Any DLE in the body is also escaped with an extra DLE
        Like C/Java quoted strings: " and \ in the body are each escaped with \
    sonet (clock-based)
        starts at STS-1: 9x90 bytes; the 1st 3 bytes of each *row* are frame header
        scrambling: sent as frame XOR std_random_pattern
        STS-3 = 3 STS-1 frames, 9x270 bytes
        frames can float; frame re-synchronization
        need for accuracy in sending voice bytes at 8000/sec
            STS-1    STM-0              51.84 Mbps = 810 bytes/frame * 8 bits/byte * 8000 frames/sec
            STS-3    STM-1    OC-3     155.52 Mbps
            STS-12   STM-4             622.08 Mbps
            STS-48   STM-16           2488.32 Mbps

2.4: error detection
    parity
        1 parity bit catches all 1-bit errors
        no generalization to N-bit errors!
    checksums
        ones-complement sum of A and B:
            form the twos-complement sum A+B
            if there is an overflow bit, add it back in as the low-order bit
        Fact: the ones-complement sum is never 0000 unless all the bits are 0.
    CRC (skimmed)
    2-D parity (corrects 1-bit errors)
    fundamental role of error-correcting codes (= "forward error correction")

======================================================================================

Sliding Windows

2.5, P&D 2.5: stop-and-wait versus sliding windows
    four stop-and-wait scenarios: p 103
    retransmit-on-timeout v. retransmit-on-duplicate
    sorcerer's apprentice bug (both sides retransmit on duplicate)
    1st/last packet

just gave diagram of SWS=4 sliding windows

basic ideas: SWS, RWS, LFS, LFA, NFE
    slow sender / slow router
    bandwidth*delay
    sending window: LAR+1 to LAR+SWS; receive window: NFE to LFA = NFE+RWS-1
    Four regions of sender line: x <= LAR (acked); LAR < x <= LFS (sent, not
        yet acked); LFS < x <= LAR+SWS (may be sent); x > LAR+SWS (outside
        the window, cannot be sent yet)

Example:  A---R1---R2---R3---R4---B
    In the B->A direction, all connections are infinitely fast.  In the A->B
    direction, the A->R1 link is infinitely fast, but the other four each
    have a bandwidth of 1 packet/second.  This makes the R1->R2 link the
    "bottleneck link" (it has the minimum bandwidth, and although there's a
    tie for the minimum, this link is the first such one encountered).

Alternative example:  C----S1----S2----D
    assume:
        the C->S1 link is infinitely fast (zero delay)
        S1->S2 and S2->D each take 1.0 sec of bandwidth delay
            (so two packets take 2.0 sec per link, etc.)
        ACKs have the same delay in the reverse direction

In both scenarios:
    no-load RTT = 4.0 sec
    Bandwidth = 1.0 packet/sec (= min link bandwidth)
    We assume a SINGLE CONNECTION is made; there is no competition.
    Bandwidth * Delay here is 4 packets (1 packet/sec * 4 sec RTT)

Case 1: SWS = 2
    window < bandwidth*delay (delay = RTT): less than 100% utilization
    delay is fixed as cwnd changes; it is the "base" RTT
    bandwidth is proportional to cwnd
    example: SWS = 2.  Throughput = 2/4 packets/sec
        each second, *two* of the routers R1-R4 are idle
        actual_RTT = 4

Case 2: SWS = 4
    When SWS=4, throughput = 1 packet/sec, actual_RTT = 4, and each second
    all four bottleneck links are busy.
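A small simulation can confirm the Case 1 and Case 2 numbers.  The sketch
below is mine, not part of the notes (the function name simulate() and all
variable names are made up); it models the A---R1---R2---R3---R4---B example
as four 1 packet/sec links, each with a FIFO queue in front of it, and treats
the A->R1 hop and the ACK path as instantaneous.

    # Toy discrete-time model: one tick = one second; each link carries at
    # most one packet per tick; ACKs return to A instantly.
    def simulate(SWS, nlinks=4, seconds=200):
        queues = [[] for _ in range(nlinks)]    # queues[i] feeds link i
        carrying = [None] * nlinks              # packet on link i this tick
        LAR = 0                                 # last ACK received; packets are 1,2,3,...
        next_pkt = 1
        sent_at = {}                            # packet -> time it left A
        deliveries = []                         # (arrival time at B, RTT)

        def fill_window(t):
            nonlocal next_pkt
            while next_pkt <= LAR + SWS:        # A may send up through LAR+SWS
                queues[0].append(next_pkt)      # A->R1 is instantaneous
                sent_at[next_pkt] = t
                next_pkt += 1

        for t in range(seconds):
            fill_window(t)
            # each link finishes the packet it carried during the last second
            for i in reversed(range(nlinks)):
                pkt, carrying[i] = carrying[i], None
                if pkt is None:
                    continue
                if i + 1 < nlinks:
                    queues[i + 1].append(pkt)   # hand off to the next router
                else:                           # arrived at B; instant ACK
                    LAR = pkt
                    deliveries.append((t, t - sent_at[pkt]))
            fill_window(t)                      # the ACK may have opened the window
            # each idle link starts on the next queued packet (1 pkt/sec)
            for i in range(nlinks):
                if carrying[i] is None and queues[i]:
                    carrying[i] = queues[i].pop(0)

        # steady state: throughput over the second half of the run, last RTT
        late = [d for d in deliveries if d[0] >= seconds // 2]
        return len(late) / (seconds / 2), deliveries[-1][1]

    for sws in (2, 4, 6):
        thru, rtt = simulate(sws)
        print(f"SWS={sws}: throughput ~{thru:.2f} pkt/sec, steady-state RTT = {rtt} sec")

Running it gives throughput 0.5 and RTT 4 for SWS=2, throughput 1.0 and RTT 4
for SWS=4, and throughput 1.0 and RTT 6 for SWS=6 (Case 3, below).  The first
few packets see slightly larger RTTs only because they queue behind packet 1
at R1 during startup.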
=================

Case 3: SWS = 6   (for Week 4)
    window > bandwidth*delay: packets pile up at a router somewhere
    delay rises (artificially)
    bandwidth is that of the bottleneck link
    example: SWS = 6.  Then the actual RTT rises to 6.0 sec.
        Each second, there are two packets in the queue at R1.
        avg_queue + bandwidth*no-load-RTT = 2 + 4 = 6 = SWS
        Now, however, *actual_RTT* is 6, and to the sender it *appears* that
        SWS = bandwidth * actual_RTT.
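The same numbers fall out of the closed-form relations (again a sketch of
mine; the function name window_behavior() and its parameters are made up).
With bottleneck bandwidth bw, no-load RTT rtt0, and transit capacity bw*rtt0:

    def window_behavior(SWS, bw=1.0, rtt0=4.0):
        transit = bw * rtt0                 # bandwidth*delay product, in packets
        if SWS <= transit:                  # Cases 1 and 2: window-limited
            return SWS / rtt0, rtt0, 0.0    # throughput, actual RTT, queue
        else:                               # Case 3: bottleneck-limited
            throughput = bw
            actual_rtt = SWS / bw           # delay rises "artificially"
            queue = SWS - transit           # packets piled up at the bottleneck
            return throughput, actual_rtt, queue

    for sws in (2, 4, 6):
        print(sws, window_behavior(sws))
    # prints: 2 (0.5, 4.0, 0.0) / 4 (1.0, 4.0, 0.0) / 6 (1.0, 6.0, 2.0)

In both regimes throughput * actual_RTT = SWS, which is exactly the sense in
which the sender always *appears* to have SWS = bandwidth * actual_RTT.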