Comp 346/488: Intro to Telecommunications

Tuesdays 7:00-9:30, Lewis Towers 412

Class 12: Nov 23

Reading:

    Chapter 11, ATM
    Chapter 13, congestion control (despite the title, essentially all the detailed information is ATM-specific)



Filtering demo

Basic idea: working with sounds as data

Demo with HAL (Douglas Rain) and Sideshow Bob Terwilliger (Kelsey Grammer), and SoX / java

creation of the sound files
separation of the joined file by hipass / lowpass filtering


Adaptive Rates

16-bit PCM at 8000 Hz sampling rate is 128 kbps of data. At 15000 Hz it is 240 kbps.

But if you can get feedback from your network that you don't have that much bandwidth, cutting back is straightforward. Most algorithms that do that also attempt some compression, but my bitrate.java simply does the following, where N<16 and M are parameters:
This reduces the raw bitrate by a factor of N×M. I have some files, of the form cantdo.N.M.wav. cantdo.10.3.wav is quite understandable. Starting from the 240 kbps version, that's a bitrate of 240/30 = 8 kbps, without companding (mu-law)!
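The list of steps above is not reproduced here. As a generic sketch of the same idea (decimation plus requantization; this is an illustration, not the actual bitrate.java), one can keep every Mth sample and discard low-order bits of each 16-bit sample:

```python
def reduce_rate(samples, keep_bits, m):
    """Keep every m-th 16-bit sample, requantized to keep_bits bits by
    discarding low-order bits. (A sketch of decimation + requantization;
    the exact steps of bitrate.java are not reproduced here.)"""
    shift = 16 - keep_bits
    return [s >> shift for s in samples[::m]]

def bitrate(sample_rate_hz, bits_per_sample):
    """Raw PCM bitrate in bits per second."""
    return sample_rate_hz * bits_per_sample

# 16-bit PCM at the two sampling rates mentioned in the text:
orig_8k  = bitrate(8000, 16)    # 128000 bps = 128 kbps
orig_15k = bitrate(15000, 16)   # 240000 bps = 240 kbps
```

Decimating by 3 and keeping a third of the bits divides the bitrate by roughly N×M, at the cost of both quantization noise and a lower Nyquist limit.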

One practical issue is figuring out how the application will receive feedback from the network; usually this is done in transport-specific libraries (such as the RTP programming interface).

Digital AM radio

Example: analog FDM trunking, or "L-carrier"

Suppose we have several signals si(t), taking values from -1 to +1, and we want to transmit them on carrier frequencies fi. We'll assume that each signal fits "comfortably" into the band from 0 to B, that each adjacent pair of carrier frequencies is at least 2B apart, and that the smallest fi is at least 2B.

What a transmitter does:
Digitally, we would have to establish a uniform sampling rate greater than twice the highest frequency present (that is, more than 2×(fi + B) for the largest fi), and use that for PCM encoding. All this is straightforward, although the sampling rate might be rather high. It is most convenient if the sampling rate for r(t) is some multiple K of 2B.

What a receiver does to receive signal i: bandpass-filter the incoming signal around fi, take the absolute value, and then average (low-pass filter) the result to recover the envelope Si(t) ≈ 1 + si(t).
We should expect that Si(t) − 1 is pretty much the same as si(t).

Real radios do the bandpass filter with a simple coil-capacitor tuning circuit, use a diode (which may simply erase the negative half of the sine wave, rather than take its absolute value), and then use an appropriately sized capacitor to do the final averaging electronically. The coil-capacitor tuning circuit is a bandpass filter with relatively poor definition; its response is more bell-shaped than flat-topped.
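The whole path, modulation followed by envelope detection, can be checked digitally. A Python sketch (frequencies chosen for illustration, not from the demo) modulates one tone, rectifies, and averages over a carrier period:

```python
import math

fs = 100000   # sampling rate, well above twice the carrier
fc = 10000    # carrier frequency fi
fm = 200      # baseband tone, inside the band 0..B
n  = fs // 10 # 0.1 second of signal

# si(t): a tone with |si| <= 1, and the AM signal (1 + si(t)) * sin(2*pi*fi*t)
s  = [0.5 * math.sin(2 * math.pi * fm * t / fs) for t in range(n)]
tx = [(1 + s[t]) * math.sin(2 * math.pi * fc * t / fs) for t in range(n)]

# Envelope detection: absolute value, then average over one carrier period
w = fs // fc
env = []
for t in range(n):
    lo = max(0, t - w + 1)
    env.append(sum(abs(x) for x in tx[lo:t + 1]) / (t + 1 - lo))

# The averaged |sine| carries a constant scale factor (about 2/pi);
# normalize it away, then subtract 1 to recover si(t)
scale = sum(env) / len(env)
rec = [e / scale - 1 for e in env]
```

Past the filter's startup transient, `rec` tracks `s` closely, which is the "Si(t) − 1 is pretty much si(t)" claim above.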



            

ATM: Asynchronous Transfer Mode




ATM Service categories

realtime:
CBR: voice & video; intended as ATM version of TDM
rt-VBR: compressed voice/video; bursty

nonrealtime:
nrt-VBR: specify peak/avg/burst rates in the reservation
ABR: subject to being asked to slow down
UBR: kind of like IP; only non-reservation-based service
GFR: guaranteed frame rate: better than UBR for IP; tries to optimize for packet boundaries

How do ABR, UBR, and GFR differ?



traffic parameters (these describe what we want to send); the most important are PCR, SCR, and BT:
statistics
leaky buckets & token-bucket depth


QoS parameters (these describe losses and delays):

Main ones are: PCR, CDV, SCR, BT (summarized in table 9e:13.2)


Characteristics:


                    |------ realtime ------|   |-- non-realtime; no delay requirements --|
Attribute           CBR          rt-VBR        nrt-VBR                ABR         UBR
CLR                 specified    specified     specified              specified   no
CDV / maxCTD        CDV+maxCTD   CDV+maxCTD    MeanCTD [?],           no          no
                                               larger than for rt
PCR, CDVT           specified    specified     specified              specified   specified
SCR, BT             N/A          specified     specified, BT larger   N/A         N/A
                                               than rt-VBR
MCR                 N/A          N/A           N/A                    specified   N/A
Congestion control  no           no            no                     yes         no

Congestion issues generally

Congestion occurs at entry queues to links. Congestion is usually defined to be (taildrop) losses, but the term may also refer to queue buildup. Congestion may simply result in increased delay at constant throughput (best case). But it also may result in plummeting throughput; as offered_load -> 100%, perhaps delay -> infinity!

retransmission saturation: suppose A sends to D, B to C, C to B, and D to A, all sending clockwise (ABD, BDC, DCA, CAB). Suppose each sender offers rate r<1, where 1 is the maximum rate.

A-----B
|     |
|     |
C-----D

Now suppose that at each router the loss-ratio is α<=1.

Traffic from A to B (thus, the load offered to router B) consists of:

    A's own traffic to D, offered at rate r
    C's traffic to B, forwarded through router A; after losses at A it arrives at rate (1-α)r

Further, let us assume that if the offered load exceeds 1, then the router reduces the load proportionally.

The total load is r + (1-α)r. For r≥0.5 the router saturates: setting r + (1-α)r = 1 and solving for α gives α = (2r-1)/r. When r=1/2, α=0 (no losses). As r⟶1, however, we see α⟶1, that is, 100% loss rate. [ref: Kurose & Ross, Computer Networking 5e, §3.6, Scenario 3]
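The algebra can be checked numerically; the two-hop traffic (e.g. C's traffic to B, crossing routers A and B) survives two loss stages, which is where throughput plummets:

```python
def loss_ratio(r):
    """Per-router loss ratio alpha when each sender offers rate r, 0.5 <= r <= 1.
    At saturation, the load offered to a router equals its capacity 1:
        r + (1 - alpha) * r = 1   =>   alpha = (2r - 1) / r
    """
    return (2 * r - 1) / r

def two_hop_goodput(r):
    """Traffic crossing two routers suffers the loss ratio twice."""
    a = loss_ratio(r)
    return (1 - a) ** 2 * r
```

As r⟶1, alpha⟶1 and the two-hop goodput (1-α)²r collapses toward 0: more offered load, less delivered.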


Brief discussion of TCP congestion management
    congestion window
    additive-increase, multiplicative decrease
    windows with no loss: cwnd += 1
    windows experiencing loss: cwnd = cwnd/2

This tends to oscillate between a full queue at the bottleneck router, and half that (including packets in transit). Sometimes this leads to considerable unnecessary delay.
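The oscillation shows up in a toy simulation; one step per window (round trip), with loss modeled crudely as the window exceeding an assumed bottleneck capacity:

```python
def aimd(cwnd, capacity, rounds):
    # One step per window (round trip). Loss is modeled simply: it occurs
    # whenever cwnd exceeds the assumed bottleneck capacity (queue + transit);
    # real TCP infers loss from dropped segments.
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > capacity:
            cwnd //= 2           # multiplicative decrease on loss
        else:
            cwnd += 1            # additive increase: +1 segment per window
    return history

h = aimd(cwnd=4, capacity=32, rounds=200)
```

After the initial climb, `h` saws between roughly `capacity` and half of it, the oscillation described above.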

ATM

GCRA [generalized cell rate algorithm] CAC [connection admission control]
(GCRA defined later)

CBR = rt-VBR with very low value for BT (token bucket depth)
rt-VBR: still specify maxCTD; typically BT is lower than for nrt-VBR
nrt-VBR: simulates a lightly loaded network; there may be bursts from other users
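GCRA is defined later in the notes; as a preview, the standard virtual-scheduling form (increment I = expected inter-cell spacing, i.e. 1/rate; limit L = the tolerance, playing the role of token-bucket depth) can be sketched as:

```python
def gcra(arrivals, I, L):
    """GCRA(I, L), virtual-scheduling form: a cell arriving at time t is
    conforming if t >= TAT - L, where TAT is the theoretical arrival time.
    A conforming cell advances TAT; a nonconforming one leaves it alone."""
    tat = arrivals[0] if arrivals else 0
    verdicts = []
    for t in arrivals:
        if t < tat - L:
            verdicts.append("nonconforming")   # arrived too early
        else:
            verdicts.append("conforming")
            tat = max(tat, t) + I
    return verdicts
```

With I=2 and a small L, cells arriving every time unit alternate conforming/nonconforming: the sender is running at twice the contracted rate, and the GCRA marks exactly the excess half.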

Circuit setup involves negotiation of parameters

Things switches can control:
  1. Admission Control (yes/no to connection)
  2. Negotiating lower values (eg lower SCR or BT) for a connection
  3. Path selection, internally
  4. Allocation of bandwidth and buffers to individual circuits/paths
  5. Reservations of bandwidth on particular links
  6. Selective dropping of (marked) low-priority cells
  7. ABR only: asks senders to slow down
  8. Policing


Congestion control generally: ch 13.2, fig 9e:13.5

credit-based: aka window-based; eg TCP
rate-based: adjust rate rather than window size

Note that making a bandwidth reservation may still entail considerable per-packet delay!

Some issues:

Any congestion-management scheme MUST encourage "good" behavior: we need to avoid encouraging the user response of sending everything twice, or sending everything faster.


ATM congestion issues:

Latency v Speed:
transmission time of a cell at 150 Mbps is about 3 microsec. RTT propagation delay cross-country is about 50,000 microsec! We can have ~16,000 cells out the door when we get word we're going too fast.

Note that there is no ATM sliding-windows mechanism!
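The arithmetic behind those figures, for checking:

```python
CELL_BITS = 53 * 8        # ATM cell: 53 bytes, header included
link_bps  = 150e6         # 150 Mbps link

cell_time_us = CELL_BITS / link_bps * 1e6   # transmission time per cell, ~2.8 us
rtt_us = 50_000                             # cross-country round trip, ~50 ms
cells_in_flight = rtt_us / cell_time_us     # on the order of 16,000-18,000 cells
```

By the time any rate-based feedback makes the round trip, tens of thousands of cells are already committed, which is why ATM congestion control leans on contracts and policing rather than windows.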

CLP bit: negotiated traffic contract can:
1. cover both CLP=0 and CLP=1 cells; ie CLP doesn't matter
2. Allow sender to set CLP; contract covers CLP=0 cells only
3. Allow network to set CLP; CLP set to 1 only for nonconforming cells

Section 13.5 (towards the end)
Suppose a user has one contract for CLP=0 traffic and another for CLP-0-or-1 traffic. Here is a plausible strategy:

cell       compliance                        action
CLP = 0    compliant for CLP = 0             transmit
CLP = 0    compliant for CLP 0-or-1 only     set CLP=1, transmit
CLP = 0    noncompliant for CLP 0-or-1       drop
CLP = 1    compliant for CLP 0-or-1          transmit
CLP = 1    noncompliant for CLP 0-or-1       drop
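The strategy above amounts to a small decision function; the argument names here are mine, not from the text:

```python
def police(clp, ok0, ok01):
    """Policing decision for one cell, given separate contracts for CLP=0
    traffic and for the aggregate CLP 0-or-1 traffic.
    clp:  the cell's CLP bit (0 or 1)
    ok0:  cell is compliant with the CLP=0 contract
    ok01: cell is compliant with the aggregate CLP 0-or-1 contract"""
    if clp == 0:
        if ok0:
            return "transmit"
        if ok01:
            return "set CLP=1, transmit"   # demote to low priority, not drop
        return "drop"
    return "transmit" if ok01 else "drop"
```

Demoting rather than dropping the middle case is the point: a cell that only fits the looser contract still travels, but it is first in line to be discarded downstream.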

A big part of GFR is to make sure that all the cells of one larger IP frame have the same CLP mark, on entry. A subscriber is assigned a guaranteed rate, and cells that meet this rate are assigned a CLP of 0. Cells of additional frames are assigned CLP = 1.