Comp 346/488: Intro to Telecommunications

Tuesdays 7:00-9:30, Lewis Towers 412

Class 11: Nov 16

Reading:

    Chapter 11, ATM
    Chapter 13, congestion control (despite the title, essentially all the detailed information is ATM-specific)



            

ATM: Asynchronous Transfer Mode

versus STM: what does the "A" really mean?

VPI: 8 bits (UNI) or 12 bits (NNI)     VCI: 16 bits
VPIs and VCIs are local!
advantages and drawbacks to VPI/VCI model
CLP bit

Header CRC; correction attempts; burst-error discards
why protect the header?
Error-correcting options (fig 9e:11.8, 8/7e:11.6)

GFC field: TRANSMIT, HALT/NO_HALT (spread over time)

Transmission & synchronization

HUNT state; HEC field & threshold for leaving, re-entering HUNT state

More on synchronization process, α and δ values (Fig 9e:11.11 (α=5,7,9), Fig 9e:11.12 (δ=4,6,8))
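
The HUNT/PRESYNC/SYNC process can be sketched as a small state machine. Below is a minimal Python sketch, assuming byte-aligned input (real receivers hunt bit-by-bit); check_hec() follows the standard CRC-8 header check (polynomial x^8+x^2+x+1, XORed with the 0x55 coset), but the surrounding logic is simplified for illustration.

    # Cell-delineation sketch: HUNT -> PRESYNC -> SYNC, as in the figures above.
    ALPHA = 7   # consecutive bad HECs in SYNC before falling back to HUNT (5,7,9)
    DELTA = 6   # consecutive good HECs in PRESYNC before declaring SYNC (4,6,8)

    def check_hec(header5):
        """True if byte 5 is the HEC (CRC-8, x^8+x^2+x+1, coset 0x55)
        of the first four header bytes."""
        crc = 0
        for byte in header5[:4]:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) & 0xFF) ^ (0x07 if crc & 0x80 else 0)
        return (crc ^ 0x55) == header5[4]

    def delineate(stream):
        """Yield byte offsets of cell starts once in SYNC."""
        state, count, i = "HUNT", 0, 0
        while i + 53 <= len(stream):
            ok = check_hec(stream[i:i+5])
            if state == "HUNT":
                if ok:
                    state, count = "PRESYNC", 1   # candidate alignment found
                    i += 53
                else:
                    i += 1                        # slide forward and keep hunting
            elif state == "PRESYNC":
                if ok:
                    count += 1
                    if count >= DELTA:
                        state, count = "SYNC", 0  # confirmed: cells line up
                    i += 53
                else:
                    state = "HUNT"                # false alarm
                    i += 1
            else:                                 # SYNC
                if ok:
                    count = 0
                    yield i
                    i += 53
                else:
                    count += 1                    # isolated errors tolerated
                    if count >= ALPHA:
                        state = "HUNT"            # too many: sync assumed lost
                        i += 1
                    else:
                        i += 53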
Lower-level synchronization

jitter and "depacketization delay"

STM-1 diagram, Fig 9e:11.13, 7/8e:11.11: note use of H4 to contain offset to next cell header
perspective of ATM as a form of TDM; note 87*3 = 261 columns, minus 1 for path overhead

_______________________________________________

ATM Delays, 10 switches and 1000 km at 155 Mbps

cause                   time (microsec)   reason
packetization delay     6,000             48 bytes at 8 bytes/ms
propagation delay       5,000             1000 km at 200 km/ms
transmission delay      30                10 switches, ~3 microsec each at 155 Mbps
queuing delay           70                80% load => 2.4-cell avg queue, by queuing theory; MeanCDT
processing delay        280               28 microsec/switch
depacketization delay   70++              worst-case queuing delay; MaxCDT
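
A quick sanity check of these numbers (a Python sketch; the constants are just the assumptions listed in the table):

    # Recompute the ATM delay budget above; all results in microseconds.
    CELL_BITS = 53 * 8                       # 48-byte payload + 5-byte header

    packetization = 48 / 8.0 * 1000          # 48 bytes at 8 bytes/ms -> 6000
    propagation   = 1000 / 200.0 * 1000      # 1000 km at 200 km/ms   -> 5000
    per_hop_tx    = CELL_BITS / 155e6 * 1e6  # ~2.7 us per switch at 155 Mbps
    transmission  = 10 * per_hop_tx          # ~27, rounded to 30 above
    queuing       = 10 * 2.4 * per_hop_tx    # 2.4-cell avg queue -> ~66 (~70)
    processing    = 10 * 28                  # 28 us per switch -> 280

    total = packetization + propagation + transmission + queuing + processing
    print(round(total))                      # ~11,400 us, dominated by
                                             # packetization + propagation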


AAL (ATM Adaptation Layers)

(not really in Stallings)
This is about the interface layer between the data (eg bitstreams or packets) and the cells

PDU: Protocol Data Unit

CS-PDU: Convergence Sublayer PDU: how we "package" the user datagram to prepare it for ATMification
A layout of the encapsulated user data (eg IP datagram), with ATM-specific prefix & suffix fields added. See below under AAL3/4

SAR-PDU: Segmentation-and-Reassembly PDU; the "chopped-up" CS-PDU with some cell-specific fields added
sixth edition p 369

AAL1 (sometimes known as AAL0)

constant-bit-rate source (voice/video)
may include timestamps; otherwise buffering is used at receiver end.
Does include a sequence number: a 3-bit sequence count within a 4-bit SN field; payload is 47 bytes.

AAL2

Intended for analog streams (voice/video subject to compression, thus having a variable bitrate)

AAL3/4

(Once upon a time there were separate AAL3 and AAL4 versions, but the differences were inconsequential even to the ATM community.)

CS-PDU header & trailer:

CPI(8) | Btag(8) | BAsize(16) | User data (big) | Pad | 0(8) | Etag(8) | Len(16)

The Btag and Etag are equal; they help detect the loss of the final cell, which would otherwise cause two CS-PDUs to run together.
BAsize = Buffer Allocation size, an upper bound on the actual size (which might not yet be known).

The CS-PDU is frappéd into 44-byte chunks of user data (the SAR-PDUs)
preceded by type(2 bits) (eg first/last cell), SEQ(4), MID(10),
followed by length(6 bits), crc10(10 bits)

Type(2) | SEQ(4) | MID(10) | 44-byte payload | Len(6) | CRC10(10)

TYPE: 10 for first cell, 00 for middle cell, 01 for end cell, 11 for single-cell message
SEQ: useful for detecting lost cells; we've got nothing else!
MID: Multiplex ID; allows multiple streams
LEN: usually 44

AAL3/4: MID in SAR-PDU only (not in CS-PDU)

Note that the only error correction we have is a per-cell CRC10. Lost cells can be detected by the SEQ field (unless we lose 16 cells in a row).

Note also that if the bit error rate is constant, the packet error rate is about the same whether you send the packet as one big unit or as a bunch of small cells.
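
A quick sketch of that claim, assuming independent bit errors at a fixed rate p (the numbers here are hypothetical):

    # Probability that 1 KB of data suffers at least one bit error, sent
    # either as one large packet or as 22 53-byte ATM cells.
    p = 1e-7                                  # hypothetical bit error rate
    data_bits = 1024 * 8

    one_packet = 1 - (1 - p) ** data_bits     # single large packet

    n_cells = -(-1024 // 48)                  # ceil(1024/48) = 22 cells
    as_cells = 1 - (1 - p) ** (n_cells * 53 * 8)

    print(one_packet, as_cells)               # nearly equal; the cell version
                                              # is slightly worse only because
                                              # of the ~10% header overhead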

AAL5

Somebody noticed AAL3/4 was not very efficient. The following changes were made.

CS-PDU:
Data (may be large) | pad | reserved(16) | len(16) | CRC32(32)

We then cut this into 48-byte pieces and put each piece into a cell. The last cell is marked by a bit in the cell header (the "header mark", carried in the PTI field).
This last-cell header bit takes care of the AAL3/4 Type field. The per-cell CRC10 is replaced with a per-CS-PDU CRC32.
What about the MID field? We don't even need it!

Why don't we need the SEQ field? Or the LEN field?

Why should ATM even support a standard way of encoding IP (or other) data? Because this is really fragmentation/reassembly; having a uniform way to do this is very helpful as it allows arbitrarily complex routing at the ATM level, with reassembly at any destination.



Compare these AALs with header-encapsulation; eg adding an Ethernet header to an IP packet

What about loss detection?
Assuming a local error that corrupts only 1 cell,
AAL3/4: The odds of checksum failure are 1 in 2^10 (this is the probability that an error occurs but fails to be detected).
AAL5:   The odds of checksum failure are 1 in 2^32.

Errors in two cells:
AAL3/4: failure odds: 1 in 2^20.
AAL5:   failure odds: 1 in 2^32.

Errors in three cells: AAL5 still ahead!

Suppose 20 cells per CS-PDU. AAL3/4 devotes 200 bits to CRC; AAL5 devotes 32, and (if the error rate is low enough that we generally expect at most three corrupted cells) still comes out ahead.

Shacham & McKenney [1990] XOR technique: send 20-cell IP packet as 21 cells.
Last one is XOR of the others. This allows recovery from any one lost cell. In practice, this is not necessary. It adds 5% overhead but saves a negligible amount.
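
A sketch of the XOR scheme in Python (the 48-byte random payloads are stand-ins for real cell contents):

    import os
    from functools import reduce

    def xor_cells(cells):
        """Byte-wise XOR of a list of equal-length cell payloads."""
        return bytes(reduce(lambda a, b: [x ^ y for x, y in zip(a, b)], cells))

    data = [os.urandom(48) for _ in range(20)]   # 20-cell IP packet
    sent = data + [xor_cells(data)]              # 21st cell = XOR parity

    # Suppose cell 5 is lost in transit; the XOR of the 20 surviving cells
    # reconstructs it, since the XOR of all 21 cells is zero.
    survivors = [c for i, c in enumerate(sent) if i != 5]
    assert xor_cells(survivors) == sent[5]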

High error rates encourage small packets (so that only a small packet is lost/retransmitted when an error occurs). Modern technology moves towards low error rates. For a while, it was implicitly assumed that this also would mean large packets. But note that, for a (small) fixed bit error rate, your overall error rate per KB will be about the same whether you send it as a single packet or as ~20 ATM packets. The probability of per-packet error will be roughly proportional to the size of the packet.

Another reason for the ATM header error check: a misaddressed cell means the right connection loses the cell, but also that the wrong connection receives it. That probably messes up reassembly on that second connection, too.



Ethernet Relay Service v Frame Relay v DS lines

How do you connect to your ISP?
How do you connect two widely separated LANs (eg WTC & LSC, or offices in different states)?
How do you connect five widely separated LANs?

Method 1: lease DS1 (or fractional DS3) lines.




Method 2: lease Frame Relay connections.
Frame relay is considerably cheaper (50-70% of the cost of leased lines).

DS1 has voice-grade capability built in. Delivery latency is tiny. You don't need that for data.

What you do need is the DTE/DCE interface at each end.

DTE(you)---DCE - - - leased link - - - - DCE---DTE(you again)

Frame Relay: usually you buy a "guaranteed" rate, with the right to send over that on an as-available basis.

"Guaranteed" rate is called CIR (committed information rate); frames may still be dropped, but all non-committed frames will be dropped first.

Ideally, sum of CIRs on a given link is ≤ link capacity.

Fig 8e:10.19: frame-relay [LAPF] format; based on virtual-circuit switching. (LAPB is a non-circuit-switched version very similar to HDLC, which we looked at before when we considered sliding windows.)

Note DE bit: Basic rule is that everything you send over and above your CIR gets the DE bit set.

Basic frame layout (LAPF):
flag | address (VCI, etc) | Data | CRC | flag

Extra header bits:

EA:   address field extension bit (controls extended addressing)
C/R:  command/response bit (for system packets)
FECN: forward explicit congestion notification
BECN: backward explicit congestion notification
D/C:  address-bits control flag
DE:   discard eligibility

(No sequence numbers!)

You get even greater savings if you allow DE bit to be set on all packets (in effect CIR=0).
(This is usually specified in your contract with your carrier, and then the DE bit is set by their equipment).
This means they can throw your traffic away as needed. Sometimes minimum service guarantees can still be worked out.

FECN and BECN allow packets to be marked by the router if congestion is experienced. This allows endpoint-based rate reduction. Of course, the router can only mark the FECN packet; the receiving endpoint must take care to mark the BECN bit in returning packets.

Bc and Be: committed and excess burst sizes; T = measurement interval. CIR is in bytes per interval: CIR = Bc/T, ie Bc = CIR×T. However, the value of T makes a difference! Big T: more burstiness allowed.

Amount of data r sent during one interval T:
    r ≤ Bc: guaranteed sending
    Bc < r ≤ Bc+Be: as-available sending (DE set)
    Bc+Be < r: do not send

One problem: do we use discrete averaging intervals of length T?
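
A sketch of the per-interval accounting, assuming the simplest scheme: a byte counter that resets at each interval boundary (which is exactly what creates the boundary-straddling problem just raised):

    def classify(sent_this_T, frame_len, Bc, Be):
        """Frame-relay-style policing within one interval T.
        sent_this_T: bytes already accepted this interval (resets each T).
        Returns how the frame is handled."""
        total = sent_this_T + frame_len
        if total <= Bc:
            return "send"            # within committed burst: guaranteed
        elif total <= Bc + Be:
            return "send, DE=1"      # excess burst: discard-eligible
        else:
            return "discard"         # beyond Bc+Be: not sent at all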




Another alternative: Ethernet Relay Service. Loyola uses this (still?) to connect LSC and WTC.
Internally, it is probably carried on Frame Relay (or perhaps ATM) circuits.
Externally (that is, at the interfaces), it looks like an Ethernet connection. (Cable modems, DSL, non-wifi wireless also usually have Ethernet-like interfaces.)



This is the setting where ATM might still find a market.

X.25 v frame relay:

the real cost difference is that with X.25, each router must keep each packet buffered until it is acknowledged


Token Bucket

Outline token-bucket algorithm, and leaky-bucket equivalent.

Token bucket as both a shaping and policing algorithm
See Fig 9e:13.9 (7/8e:13.11)
a packet needs to take 1 token to proceed, so the long-term average rate is r
Bursts up to size b are allowed.

shaping: packets wait until their token has accumulated.
policing: packets arriving with no token in bucket are dropped.

Both r and b can be fractional.
Fractional token arrival is possible: transmission fluid enters drop-by-drop or continuously, and a packet needs 1.0 cups to go.
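
A minimal token-bucket sketch in Python, in the policing form (the names and numbers are illustrative):

    class TokenBucket:
        """Token bucket (r, b): tokens accumulate continuously at rate r,
        capped at depth b; a packet must take 1.0 token to proceed."""
        def __init__(self, r, b, now=0.0):
            self.r, self.b = r, b
            self.tokens, self.last = b, now   # bucket starts full

        def conformant(self, now):
            # fractional accumulation: fluid enters continuously
            self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True        # policing: send; a shaper would instead
            return False           # make the packet wait for its token

    # usage: r = 100 packets/sec, bursts of up to 5
    tb = TokenBucket(r=100.0, b=5.0)
    print(tb.conformant(0.001))    # True: the full bucket absorbs the burst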

Leaky-bucket analog: Bucket size is still b, r = rate of leaking out.
A packet adds 1.0 unit of fluid to the bucket; packet is conformant if the bucket does not overflow.

Leaky-bucket formulation is less-often used for packet management, but is the more common formulation for self-repair algorithms:
faults are added to the bucket, and leak out over time. If the bucket is nonempty for >= T, unit may be permanently marked "down"


HughesNet satellite-internet token-bucket (~2006):
fill time: 50,000 sec ~ 14 hours

My current ISP has rh & rl (1.5 Mbit & 0.4 Mbit??).
I get:
    rh / small_bucket
    rl / 150 MB bucket

In practice, this means I can download at rh for 150 MB, then my rate drops to rl.

While the actual limits are likely much higher in urban areas, more and more ISPs are implementing something like this.

In general, if token-bucket (r,b) traffic arrives at a switch s, it can be transmitted out on a link of bandwidth r without loss provided s has a queue size of at least b.
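
A quick simulation of that claim (a sketch: arrivals are forced to be (r,b)-conformant by a token bucket, and the switch's outbound link drains at exactly rate r):

    import random

    r, b, dt = 1.0, 5.0, 0.01          # rate, bucket depth, time step
    tokens, queue, max_queue = b, 0.0, 0.0

    for _ in range(1_000_000):
        tokens = min(b, tokens + r * dt)
        if random.random() < 0.05 and tokens >= 1.0:   # bursty offered load
            tokens -= 1.0                # conformant packet is admitted...
            queue += 1.0                 # ...and joins the switch queue
        queue = max(0.0, queue - r * dt)               # link drains at rate r
        max_queue = max(max_queue, queue)

    print(max_queue)                     # never exceeds b (= 5 here)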



ATM Service categories

realtime:
CBR: voice & video; intended as ATM version of TDM
rt-VBR: compressed voice/video; bursty

nonrealtime:
nrt-VBR: specify peak/avg/burst in reservation
ABR: subject to being asked to slow down
UBR: kind of like IP; only non-reservation-based service
GFR: guaranteed frame rate: better than UBR for IP; tries to optimize for packet boundaries

How do ABR, UBR, and GFR differ?



traffic parameters (these describe what we want to send):
    statistics
    leaky buckets & token-bucket depth


QoS parameters (these describe losses and delays):

Main ones are: PCR, CDV, SCR, BT (summarized in table 9e:13.2)


Characteristics:


                     |<------ realtime ------>|  |<-- non-realtime; no delay requirements -->|

Attribute            CBR           rt-VBR        nrt-VBR                ABR        UBR
CLR                  specified     specified     specified              specified  no
CDV/CTD (MaxCTD)     CDV+maxCTD    CDV+maxCTD    MeanCTD [?],           no         no
                                                 larger than for rt
PCR, CDVT            specified     specified     specified              specified  specified
SCR, BT              N/A           specified     specified, larger      N/A        N/A
                                                 BT than rt-VBR
MCR                  N/A           N/A           N/A                    specified  N/A
Congestion control   no            no            no                     yes        no


[table from Walrand & Varaiya]

PCR: the actual definition involves averaging over some interval; otherwise, two cells sent back-to-back would imply a PCR equal to the raw link bandwidth

Cell Delay Variation, and buffering: Fig 9e:13.6, 8e:13.8. Draw a vertical line in the diagram to get the list of packets currently en route (if the line intersects D(i)) or in the receive buffer (if it intersects V(i)).


The first packet (packet 0) is sent at time 0. Its propagation delay is D(0). Upon arrival, the receiver holds it for an additional time V(0), to accommodate possible fluctuations in future D(i). The ith packet is sent at time k*i, where k is the packet sending interval (eg 6 ms for 48-byte ATM cells carrying 64-kbps voice); it therefore arrives at k*i + D(i). Let T be the time when we play back the first packet, T = D(0)+V(0); this is the total time between sending and playing. The ith packet needs to be played back at T + k*i, so for each i, D(i) + V(i) = T, ie V(i) = T - D(i). A packet is late (and thus unusable) if D(i) > T.

V(0) is thus the value chosen for the initial receiver-side "playback buffering time". If the receiver chooses too small a value, a larger percentage of subsequent packets/cells may arrive too late. If V(0) is too large, then too much delay has been introduced. Once V(0) is chosen, all the subsequent V(i) are fixed.

V(0) (or perhaps, more precisely, the mean value of all the V(i)) is also known as the depacketization delay.

Suppose we know (from past experience with the network) the mean delay meanD, and also the standard deviation in D, which we'll call sd (more or less the same as CDV, though that might be mean deviation rather than standard deviation). If we believe the delay times are normally distributed, we can wait three standard deviations and be sure that the packet will have arrived about 99.9% of the time; if this is good enough, we can choose V(0) = meanD + 3*sd - D(0), which we might just simplify to 3*sd. If we want to use CDV (as a mean deviation), similar confidence suggests something like 4*CDV. In general, we're likely to use something like V(0) = k1 + k2*CDV.
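
A sketch of this buffer-sizing rule in Python (the normal-delay assumption and the specific numbers are hypothetical):

    import random

    meanD, sd = 5.0, 0.5              # hypothetical mean delay & std dev (ms)
    D = [random.gauss(meanD, sd) for _ in range(100_000)]

    # Receiver picks V(0) = meanD + 3*sd - D(0), so the playback point is
    # T = D(0) + V(0) = meanD + 3*sd for every packet.
    T = meanD + 3 * sd

    late = sum(d > T for d in D) / len(D)
    print(f"T = {T} ms; late fraction = {late:.4%}")   # ~0.13% arrive late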

Sometimes the D(i) are not normally distributed. This can occur if switch queues are intermittently full. However, that is more common in IP networks than voice-oriented networks. On a well-behaved network, queuing theory puts a reasonable value of well under 1 ms for this delay (Walrand calculates 0.07 ms).

Note that the receiver has no value for D(0), though it might have a pretty good idea about meanD.

Note that the whole process depends on a good initial value V(0); this determines the length of the playback buffer.

V(n) is negative if cell n simply did not arrive in time.

For IP connections, the variation in arrival time can be hundreds of milliseconds.


Computed arrival time; packets arriving late are discarded.

Note that
    increased jitter
    => increased estimate for V(0)
    => increased overall delay!!!