Comp 346/488: Intro to Telecommunications
Tuesdays 7:00-9:30, Lewis Towers 412
Class 6: Oct 5
Chapter readings (7th-9th editions):
8.1, 8.2
10.1-10.5
Midterm: Oct 26
No class Oct 12
orderwire: a circuit used for
management & maintenance. In the SONET orderwire examples below,
the circuits are DS0, making them suitable for voice. But there is no
requirement that voice actually be used.
"DS", as in DS-1, DS-3, etc, apparently means Data Stream.
Note from wikipedia:
allegedly in 1958 there was internal AT&T debate as to whether T-1
lines should have 1 extra bit for framing, or 1 extra byte. Supposedly
the 1-bit group won because "if 8 bits were chosen for OA&M
function, someone would then try to sell this as a voice channel and
you wind up with nothing." Later, AT&T realized 1 byte would have
made more sense, and introduced various bit-stealing techniques; eg stealing the
low-order bit of each channel byte in every sixth frame.
bit-stuffing (pulse stuffing) was seen by the telecommunications
industry as a major weakness in the T-carrier system, introducing more
and more wasteful overhead as the multiplexing grew. The core issue is
that when you combine several "tributaries" into one larger data
stream, your big stream needs extra capacity to be able to handle speedups and
slowdowns in the inputs.
SONET was an attempt to avoid this problem.
SONET
Good reference: sonet_primer.pdf
First look at SONET hierarchy: Table 8.4
STS-1/OC-1                    51.84 Mbps
STS-3/OC-3     = STM-1       155.52 Mbps
STS-12/OC-12   = STM-4       622.08 Mbps
STS-48/OC-48   = STM-16     2488.32 Mbps
STS-192/OC-192 = STM-64     9953.28 Mbps
STS-768/OC-768 = STM-256   39813.12 Mbps
...
STS = Synchronous Transport Signal
OC = Optical Carrier
Note that each rate is exactly 4 times the previous (3 times from STS-1 to STS-3). There is no stuffing.
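A quick check of the exact-multiple structure in Table 8.4; this is a sketch, and the sts_rate helper is my own name, not from the text:

```python
# Sketch: SONET rates are exact multiples of the 51.84 Mbps STS-1 rate,
# with no stuffing. STS-3 is 3x STS-1; each later level is 4x the previous.
STS1_MBPS = 51.84

def sts_rate(n):
    """Rate of STS-n/OC-n in Mbps: exactly n times the STS-1 rate."""
    return round(n * STS1_MBPS, 2)

for n in [1, 3, 12, 48, 192, 768]:
    print(f"STS-{n}: {sts_rate(n)} Mbps")
```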
Path: end-to-end data
Line: single stretches of fixed multiplexing. Streams enter or leave a line only at its endpoints
Section: segment of a line between signal regenerators
basic SONET frame
- Transport overhead = Section overhead (3 rows) + Line overhead (6 rows)
- framing bytes: A1, A2 = 0xF628 (section)
- multiplexing number: STS-ID (section)
- E1, F1: special header-only circuits ("orderwires"). E1 is the section orderwire; F1 is the section user channel; the line orderwire is E2.
- H1-H2: frame alignment (line); points to the start (the J1 byte) of the floating synchronous payload envelope (SPE) within the frame
- H3: room for an extra SPE byte in case of negative pointer adjustment (byte stuffing).
The payload envelope, or SPE, is the 87 columns reserved for path
data. The SPE can "float": the first byte of the SPE can be any byte in
the non-overhead part of the frame; the rest of the SPE then continues at the
corresponding positions of the next frame. This floating allows the
data to speed up or slow down relative to the frame rate, without the
need for T-carrier-type "stuffing".
SPEs are generally spread over two consecutive frames. It is often
easier to visualize this if we draw the frames aligned vertically.
The first column of the SPE is the path overhead; columns 30 and 59 are also reserved ("fixed stuff"). Total data columns: 87 − 3 = 84
Frame-alignment algorithm: search for frames where A1A2 matches the predetermined 0xF628 pattern
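The frame-alignment search can be sketched as a scan for the A1A2 pattern; a minimal sketch (real receivers confirm alignment over several consecutive frames before declaring sync, and the function name is mine):

```python
# Sketch: candidate frame-alignment offsets, found by scanning a byte
# stream for the two framing bytes A1 = 0xF6, A2 = 0x28 in order.
def find_frame_starts(stream):
    """Return offsets where A1 (0xF6) is immediately followed by A2 (0x28)."""
    return [i for i in range(len(stream) - 1)
            if stream[i] == 0xF6 and stream[i + 1] == 0x28]
```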
STS-1: 9 rows × 90 columns × 8 bits/byte × 8 frames/ms = 51840 bits/ms = 51.84 Mbps frame rate
Data rate: 9 rows × 84 columns × 8 bits/byte × 8 frames/ms = 48.384 Mbps
DS-3 rate: 44.736 Mbps, so one DS-3 fits in an STS-1 payload
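The STS-1 arithmetic above, spelled out (variable names are mine):

```python
# Sketch: STS-1 frame rate and usable data rate, at 8000 frames/s.
ROWS, COLS, DATA_COLS = 9, 90, 84       # 90 total columns, 84 carry path data
FRAMES_PER_SEC = 8000

frame_rate_bps = ROWS * COLS * 8 * FRAMES_PER_SEC       # 51.84 Mbps
data_rate_bps  = ROWS * DATA_COLS * 8 * FRAMES_PER_SEC  # 48.384 Mbps
```

This leaves comfortable room for one DS-3 at 44.736 Mbps.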
STS-3: exactly 3 STS-1 frames
An end-to-end SONET connection is essentially a "virtual circuit",
where the sender sends STS-1 frames (or larger) and the receiver
receives them. These frames may be multiplexed with others inside the
network. These frames are themselves often the result of multiplexing
DS lines and packet streams.
Stuffing
The SPE arrival rate can run slower than the 8000 Hz frame rate, or
faster. Stuffing is undertaken only at the Line head, ie the point
where multiplexing of many SPEs onto a single line takes place.
Positive stuffing is used when the SPEs are arriving too slowly, and so an extra
frame byte will be inserted between occasional SPEs. The SPE-start J1
byte will slip towards the end of the frame. Some bits in the H1-H2
word are flipped to indicate stuffing; as a result, in order to ensure
a sufficient series of unflipped H1-H2 words, stuffing is allowed to
occur only every third frame.
Negative stuffing is used when
the SPEs are arriving slightly too fast. In this case, the pointer is
adjusted (again with some bits in the H1-H2 word flipped to signal
this), and the extra byte is placed into the H3 byte. The J1 SPE-start
byte moves towards the start of the frame ("forward").
Note that negative stuffing, like positive stuffing, cannot occur in consecutive frames, but it never needs to.
Stuffing is only done at line entry. If a line is ended, and
demultiplexed, and one path is then remultiplexed into a new line, then
stuffing begins all over again with a "clean slate".
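The pointer mechanics can be sketched as follows. This is a simplification, not the real SONET rules (which, as noted above, also restrict how often adjustments may occur); the drift parameter and function name are hypothetical:

```python
# Sketch: SPE pointer adjustments as the SPE clock drifts relative to the
# 8000 Hz frame clock. drift_bytes_per_frame > 0 means the SPE is fast.
FRAME_PAYLOAD = 9 * 87          # SPE bytes per frame

def pointer_adjustments(drift_bytes_per_frame, frames):
    """Track the J1 pointer; return (final_pointer, list of stuffing events)."""
    pointer, backlog, events = 0, 0.0, []
    for f in range(frames):
        backlog += drift_bytes_per_frame
        if backlog >= 1:        # SPE fast: negative stuff, extra byte into H3
            backlog -= 1
            pointer = (pointer - 1) % FRAME_PAYLOAD
            events.append((f, 'negative'))
        elif backlog <= -1:     # SPE slow: positive stuff, skip one frame byte
            backlog += 1
            pointer = (pointer + 1) % FRAME_PAYLOAD
            events.append((f, 'positive'))
    return pointer, events
```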
Virtual Tributaries
An STS-1 path can be composed of standard "substreams" called virtual
tributaries. VTs come in 3-column, 4-column, 6-column, and 12-column
versions.
If we multiplex 4 DS-1's into a DS-2, and then 7 DS-2's into a DS-3,
then the only way we can recover one of the DS-1's from the DS-3 is to
do full demultiplexing. Virtual tributaries ensure that we can package
a DS-1 into a SONET STS-1 channel and then pull out (or even replace)
that DS-1 without doing full demultiplexing.
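The VT sizes are easy to check against the tributary rates they carry; a sketch (the helper name is mine):

```python
# Sketch: a virtual tributary occupying N columns of an STS-1 has rate
# N columns x 9 rows x 8 bits x 8000 frames/s.
def vt_rate_mbps(columns):
    """Rate in Mbps of a VT occupying the given number of STS-1 columns."""
    return columns * 9 * 8 * 8000 / 1e6

# VT1.5 (3 cols, 1.728 Mbps) carries a DS-1 (1.544 Mbps);
# VT6 (12 cols, 6.912 Mbps) carries a DS-2 (6.312 Mbps).
for name, cols in [("VT1.5", 3), ("VT2", 4), ("VT3", 6), ("VT6", 12)]:
    print(name, vt_rate_mbps(cols), "Mbps")
```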
SONET and clocking
SONET is highly synchronous.
All clocks trace back to the same atomic source. For this reason, clocking is NOT a major problem. Data is sent NRZ-I, no stuffing.
Strictly speaking, all the SONET equipment in a given "domain" gets its
clock signal from a master source for that domain. All equipment should
then be on the same clock to within ± 1 bit.
The stuffing mechanism described above is for interfaces between "clock domains", where there may be slight differences in timing. Such an arrangement is said to be plesiochronous.
NOTE: the rectangular layout
guarantees at most 87 bytes between overhead bytes before a nonzero value; XOR with a fixed
pseudorandom pattern (scrambling) is also used to avoid long runs of 0's. (Actually, the max run of 0
bytes is 30, because of the "fixed" SPE columns.)
SPE path overhead bytes:
J1: over 16 consecutive frames, these contain a 16-byte "stream identifier"
B3: SPE parity
C2: flag for SPE content signaling (technical)
G1: round-trip path monitoring ("path status")
F2: path orderwire
H4: multiframe indicator (used with virtual tributaries)
Z3, Z4, Z5: path data
Note that the J1 "stream identifier" SPE byte is the closest we have to
a "stream address". SONET streams are more like "virtual circuits" than
packets (a lot more like!): senders "sign up" for, say, an STS-1
channel to some other endpoint, and the SONET system is responsible for
making sure that frames sent into it at the one end come out at the
right place at the other end. Typically the input to an STS-1 channel
is a set of DS-1/DS-3 lines (at most one DS-3!), or perhaps those mixed
with some ATM traffic.
IP over SONET
This can be done IP -> ATM -> SONET, or else using POS (Packet Over SONET).
The IP packet is encapsulated in an HDLC frame (this involves
bit-stuffing). We then lay the result out in the data part of a SONET
frame. During idle periods, we send HDLC flag bytes (0111 1110), which
might not in fact end up aligned on byte boundaries due to bit-stuffing.
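The bit-stuffing step can be sketched as follows (a minimal sketch of the classic HDLC rule; real POS framing has additional details, and the function name is mine):

```python
# Sketch: HDLC bit-stuffing. After five consecutive 1-bits in the payload,
# a 0-bit is inserted, so the flag pattern 0111 1110 never appears in data.
def bit_stuff(bits):
    """bits: a string of '0'/'1'. Insert a '0' after every run of five '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')     # stuffed bit breaks the run
            run = 0
    return ''.join(out)
```

The receiver reverses this by deleting the 0 that follows any five consecutive 1's; this is why the flag bytes sent during idle periods need not stay byte-aligned.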
Back to Sliding Windows
communications cases:
- interactive; back-and-forth synchronized
- asynchronous back-and-forth
- bulk one-way transfer
- request/reply
The sliding-windows algorithm addresses:
- flow control
- reliable transmission/retransmission
- in-order delivery
- increased throughput
- managing congestion
Flow Control
For the moment, we will assume no losses.
Without losses, we don't need sequence numbers. (Without losses we need ACKs only for flow control.)
Stop-and-wait: The receiver sends an ACK when it is ready to accept the next packet.
Sliding windows: when the receiver sends ACK[N] (acknowledging packet
DATA[N]), then the sender can send up to DATA[N+W], where W = window
size. Stop-and-wait = Sliding windows with W=1.
Note that typically when ACK[N] is received, DATA[N+1]...DATA[N+W-1]
have been sent, and so ACK[N] simply elicits the transmission of
DATA[N+W].
Work some examples: the time needed to send a packet, and the time needed to send a packet and get its ACK back.
bandwidth: 100kbps (100 b/ms)
propagation: 20 ms
packet size: 1000 bits (time = 10ms)
Stop-and-wait throughput: 1 packet/50ms
Sliding windows with window size = 3: 3 packets/ 50 ms
Max effective window size: 5 packets (50 ms round trip ÷ 10 ms transmit time per packet)
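The example above, worked in code (variable names are mine):

```python
# Sketch: stop-and-wait vs sliding-windows throughput for the example above.
bandwidth = 100        # bits per ms (100 kbps)
prop = 20              # one-way propagation delay, ms
packet = 1000          # packet size, bits

transmit = packet / bandwidth        # 10 ms to put one packet on the wire
rtt_total = transmit + 2 * prop      # 50 ms from start of send until ACK

stop_and_wait = 1 / rtt_total        # packets per ms with window = 1
window_3 = 3 / rtt_total             # packets per ms with window = 3
max_window = rtt_total / transmit    # window of 5 keeps the link 100% busy
```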
Timeout/retransmission
ARQ = Automatic Repeat reQuest: basically, timeout/retransmit
stop-and-wait ARQ
This adds loss/retransmission to stop-and-wait
Stop-and-wait with losses: Fig 7.5
Note that only 1 bit of sequence "number" is needed.
Note that a duplicate of the previous packet may arrive, but nothing earlier than that
Note need for timeout.
Sorcerer's Apprentice bug: if both sides resend on receipt of duplicates, every packet and ACK ends up transmitted twice, forever.
Go-Back-N ARQ
From perspective of receiver, the lossless case looks EXACTLY like
stop-and-wait, except packets arrive faster (and, if there are losses,
packets may arrive that are farther ahead in the stream than we
expected).
This case is the same as ReceiveWindow = 1 (where we make a distinction
between the SendingWindow and the ReceiveWindow; the latter represents
the number of buffers the receiver allocates).
Lost data vs lost ACKs
- RR: "Ack, ready for more" (Stallings uses RR(i+1) as the acknowledgement that DATA[i] was received)
- RNR: "Ack, not ready for more" (TCP supports this, but in a slightly obscure way: TCP RNR is literally an ACK with a "window advertisement" of 0)
- RR with P-bit set: poll for ACK status (RR-P)
- REJ: reject: bad frame received
Sometimes there is no REJ, just timeout
TCP: just resend data on
timeout (or fast retransmit); don't poll for ACKs. Most modern TCPs
support Selective ACK, or SACK, but otherwise TCP does not support REJ.
Stallings' version of the algorithm (loosely based on HDLC)
If the receiver gets an out-of-order frame (eg it receives
DATA[i+1] but DATA[i] was lost), it sends REJ. (REJ for an out-of-order
frame only makes sense on nonreordering links.)
If the sender's timeout fires for packet i, or if it receives REJ[i], then it retransmits DATA[i], and any later packets that have been transmitted.
If the sender gets an ACK for DATA[i] (either RR or RNR), then it
"slides the window forward" to [i+1 ... i+W]. If RR is received,
additional packets may be transmitted; if RNR was received, the sender
waits for an RR (most likely polling the receiver to make sure that the RR was not lost).
HDLCism: if sender gets no response, it transmits RR+P and polls
with these until it gets response (TCP: sender would retransmit the
frame)
TCP: if receiver gets an out-of-order frame, send current ACK. (TCP
receive window = send window.) Notice that REJ doesn't exist for
standard TCP.
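The receiver side of go-back-N (with TCP-style cumulative re-ACKs rather than REJ) can be sketched as follows; the function name is hypothetical:

```python
# Sketch: go-back-N receiver with cumulative ACKs. Out-of-order frames are
# discarded and the current ACK is simply repeated (the TCP behavior above).
def gbn_receiver(frames, expected=0):
    """frames: sequence numbers as received. Return the ACK sent for each."""
    acks = []
    for seq in frames:
        if seq == expected:     # in-order frame: accept it
            expected += 1
        # out-of-order frames are dropped; either way, re-ACK current state
        acks.append(expected)   # cumulative ACK = next frame wanted
    return acks
```

For example, if frame 2 is lost and the sender goes back and retransmits 2, 3, 4, the receiver keeps re-ACKing 2 until the retransmission arrives.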
Example: Fig 7-6
Selective-Reject ARQ
Stallings' version:
Receiver sends SREJ (NACK) upon receiving an out-of-order frame, just like in the book's go-back-N version, but any following frames are buffered.
When the sender has DATA[i] time out, or if it receives a SREJ for
DATA[i], then it retransmits that one only. Typically the receiver will
then respond with something like ACK[i+W], taking into account all the
later packets received.
Note that in the event of a timeout, the sender does not know whether subsequent packets were delivered ok or not.
Note that in Figure 7.6(b) on page 221 (ninth ed), when RR1 is lost
(about 2/3 the way down the diagram) the sender times out and sends an
RR(P=1). HOWEVER, if frame1 and frame2 also triggered RRs (RR2 and RR3
respectively), one of these would likely make it through, and by
cumulativeness of ACKs would imply the RR1.
TCP fast retransmit versus SREJ
Window sizes for the two versions, if we have N bits for sequence numbering
(2^N − 1 for go-back-N versus 2^(N−1) for selective reject)
What happens when we allow packet reordering?
Discuss self-clocking
X.25 uses sliding windows on a per-link basis. Does this make sense? The alternative is end-to-end sliding windows.
HDLC: section 7.3
Scenarios (a) through (e) of Fig 7.9
(a): setup with SABM lost, DISC
(b): exchange of data
(c): use of RNR
(d): use of REJ (why not SREJ?)
(e): timeout recovery
flags & bit stuffing; level issues
bit stuffing as an encoding or framing technique
Five scenarios on fig 7.9, page 227
compare with TCP
- 3-way handshake
- lost packet response
- reordering possibility
HDLC is a link protocol! Like
Ethernet frames. Sliding windows is arguably inappropriate at this
level; a better choice is end-to-end sliding windows. Go-back-N is more
than sufficient; we do not need selective-reject.
sliding windows and congestion management: basic ideas
TCP has the sender control the window size, rapidly downwards
(decreasing by 1/2) if there are packet losses, and slowly upwards if
there are no losses. Losses are interpreted as evidence of
congestion; the window size itself represents how many packets are
"stored" by the network.
Back to chapter 10
Virtual-circuit packet-switched routing
We don't usually use
the word "virtual" for TDM, though TDM circuits are in a sense virtual.
But "virtual circuit switching" means using packets to simulate circuits.
Note that there is a big difference between circuit-switched
T-carrier/SONET and any form of packet-based switching: packets are
subject to fill time. That is,
a 1000-byte packet takes 125 ms to fill at 64 Kbps, and the voice
"turnaround time" is twice that. 250 ms is annoyingly large. When we
looked at the SIP phone, we saw RTP packets with 160 B of data,
corresponding to a fill time of 20 ms. ATM (Asynchronous Transfer Mode)
uses 48-byte packets, with a fill time of 6 ms.
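The fill times quoted above all follow from the 64 Kbps voice rate (8 bytes per ms); a sketch, with a hypothetical helper name:

```python
# Sketch: fill time for a voice packet -- how long it takes to accumulate
# payload_bytes of 64 Kbps voice data (64 Kbps = 8 bytes/ms).
def fill_time_ms(payload_bytes, rate_kbps=64):
    return payload_bytes * 8 / rate_kbps

# 1000-byte packet: 125 ms; 160-byte RTP payload: 20 ms; 48-byte ATM cell: 6 ms
```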
Datagram Forwarding
Routers using datagram forwarding each have ⟨destination, next_hop⟩
tables. The router looks up the destination of each packet in the
table, finds the corresponding next_hop, and sends it on to the
appropriate directly connected neighbor.
These tables are often "sparse"; some
fast lookup algorithm (eg hashing) is necessary. The IP "longest-match"
rule complicates this. For IP routing, destinations represent the network portion of the address.
How these tables are initialized and maintained can be complicated, but
the simplest strategies involve discovering routes from neighbors.
Often a third column, for route cost, is added.
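A ⟨destination, next_hop⟩ lookup with IP-style longest match can be sketched as a linear scan (real routers use tries or hashing for speed; addresses here are bit strings, and all names are hypothetical):

```python
# Sketch: datagram forwarding with longest-prefix match over a
# <destination-prefix, next_hop> table. Addresses are bit strings.
def lookup(table, dest):
    """Return the next_hop for the longest prefix matching dest, or None."""
    best = None
    for prefix, next_hop in table:
        if dest.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, next_hop)
    return best[1] if best else None

table = [("", "C"),      # default route
         ("10", "A"),
         ("101", "B")]   # more specific: wins over "10" when both match
```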
drawback: table size, and hence lookup cost, grows with the number of destinations
first look at virtual circuit goals:
- small cells
- small header
- low per-packet cost
- still packet-switched!