Comp 346/488: Intro to Telecommunications
Tuesdays 4:15-6:45, Lewis Towers 412
Class 6, Feb 21
Reading (7th -- 9th editions)
Chapter 7: p 224: HDLC bit stuffing
Chapter 8: §8.1, §8.2, §8.3
Chapter 10: §10.1, §10.2, §10.3
Read:
Chapter 5:
§5.1: digital data / digital signal: b8zs, 4b/5b
§5.2: digital data/analog signal: ASK, FSK, MFSK
§5.3: analog data / digital signal: digitized voice,
PCM, µ-law
§5.4: analog data / analog signal: AM/FM modulation
5.4: analog data/analog signal
Why modulate at all?
- FDM (Frequency-Division Multiplexing)
- higher transmission frequency
The simplest scheme is AM.
The bandwidth is worth noting: new frequencies at carrier +/- signal are
generated because of nonlinear interaction (the modulation process
itself).
Single Side Band (SSB): slightly
more complex to generate and receive, but:
- half the bandwidth
- no energy at the carrier frequency (in ordinary AM this energy is "wasted")
Sound files: beats.wav v modulate.wav
The latter has nonlinearities:
(1+sin(sx)) sin(fx) = sin(fx) + sin(sx)sin(fx)
                    = sin(fx) + 0.5 cos((f-s)x) - 0.5 cos((f+s)x)
Reconsider "intermodulation noise": nonlinear interaction between
signals is exactly what modulation here is all about.
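A quick numeric check of the product-to-sum identity above (a sketch; the test frequencies f and s are arbitrary):

```python
import math

def am_signal(t, f, s):
    """Ordinary AM: carrier sin(ft) scaled by (1 + sin(st))."""
    return (1 + math.sin(s * t)) * math.sin(f * t)

def three_tones(t, f, s):
    """The same signal rewritten as carrier plus two sidebands."""
    return (math.sin(f * t)
            + 0.5 * math.cos((f - s) * t)
            - 0.5 * math.cos((f + s) * t))

# The two forms agree at every sample point, confirming the identity.
for i in range(200):
    t = 0.013 * i
    assert abs(am_signal(t, 20.0, 3.0) - three_tones(t, 20.0, 3.0)) < 1e-9
```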
Angle Modulation (FM and PM)
FM is Frequency Modulation; PM is Phase Modulation. These can be hard
to tell apart, visually.
Let m(t) = modulation signal (eg voice)
The (transmitted) signal is then
A cos (2πft + 𝜑(t))
FM: 𝜑'(t) = k·m(t) (that is, 𝜑(t) = k∫m(t)dt). m(t) = c const =>
𝜑(t) = kct; that is, we have the transmitted signal
as
A cos (2πft + kct) = A cos (2π(f + kc/2π)t),
a signal with the fixed (higher) frequency f + kc/2π.
PM: 𝜑(t) = k·m(t). m(t) = const => 𝜑(t) = const (a pure phase shift)
Figure 5.24
Somewhat surprisingly, FM and PM often sound very similar. One reason
for this is that the derivative (and so the antiderivative) of a sine
wave is also a sine wave. There's distortion in terms of frequency, but
most voice frequencies are in a narrow range.
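The two definitions can be sketched numerically (hypothetical constants; the FM integral is a crude Riemann sum). For a single-tone m(t), the FM phase works out to a scaled, shifted cosine, which is one way to see why FM and PM of a tone look alike:

```python
import math

def pm_phase(m, t, k=1.0):
    """PM: phase deviation proportional to the modulating signal."""
    return k * m(t)

def fm_phase(m, t, k=1.0, dt=1e-5):
    """FM: phase deviation is k times the integral of m (Riemann sum)."""
    n = int(round(t / dt))
    return k * dt * sum(m(i * dt) for i in range(n))

def transmitted(fc, phi, t, amp=1.0):
    """The transmitted signal A cos(2*pi*fc*t + phi)."""
    return amp * math.cos(2 * math.pi * fc * t + phi)

# Single-tone modulation: m(t) = sin(2*pi*s*t)
s = 5.0
m = lambda t: math.sin(2 * math.pi * s * t)

# Closed form of the integral: (1 - cos(2*pi*s*t)) / (2*pi*s),
# itself a (shifted) sinusoid in t.
t = 0.123
expected = (1 - math.cos(2 * math.pi * s * t)) / (2 * math.pi * s)
assert abs(fm_phase(m, t) - expected) < 1e-3
```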
Picture: consider a signal m(t) = 0 0 1 1 1 1 1 1 0 0 0 0
FM,PM both need more bandwidth than AM
AM: bandwidth = 2B, B = bandwidth of orig signal
FM,PM: bandwidth = 2(β+1)B, where again B = bandwidth of the original
signal. This is Carson's Rule.
For PM, β = n_p·A_max, where A_max = max value of m(t) and n_p is the
phase-modulation index.
For FM, β = ΔF/B, where ΔF = peak frequency deviation. A
value of β=2, for example, would mean that in encoding an audio signal
with bandwidth 4 kHz, the modulated signal deviated in frequency by up
to ΔF = 8 kHz from the carrier. Having β low reduces the bandwidth
requirement, but also increases noise. Also note that in our β=2
example, the total bandwidth needed for the modulated signal would be
24 kHz: 2(β+1)B = 2B + 2ΔF (and ΔF can't be too low).
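Carson's Rule as a one-line calculation, checked against the β=2 example above:

```python
def carson_bandwidth(beta, b_hz):
    """Carson's Rule: FM/PM transmission bandwidth = 2*(beta + 1)*B."""
    return 2 * (beta + 1) * b_hz

# The example above: beta = 2, B = 4 kHz audio signal -> 24 kHz.
assert carson_bandwidth(2, 4_000) == 24_000
# Ordinary (double-sideband) AM would need only 2*B = 8 kHz.
```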
5.2: digital data/analog signal
modems, long lines & fiber
(even long copper lines tend to work better with analog signals)
ASK: "naive", though used for fiber
FSK: shift color in optical fiber (not common)
PSK: easier to implement (electrically) than FSK
Superficially, ASK appears to have zero analog bandwidth, but this is
not really the case!
ASK: 1 bit /hertz => 4000 bps max over voice line
1 bit/ 2Hz, 2400 Hz carrier => 1200 bps.
FSK analog band width = high_freq - low_freq
BFSK v MFSK: fig 5.9 for MFSK
BFSK: fig 5.8: old modems, full-duplex
MFSK: the trouble is, it takes time
to recognize a frequency (several cycles at least!)
FSK is supposedly more "noise-resistant" than ASK, but fig 5.4
shows the same graph of Eb/N0 v BER for the two. (PSK is shown 3 dB
lower (better) in that graph)
BPSK: decoding starts to get very nonintuitive!
DPSK: differential, like differential NRZ
QPSK: 4 phase choices, encoding 00, 01, 10, 11
9600bps modem:
really 2400 baud; 4 bits per signal element (12 phase angles, four of
which have two amplitude values, total 16 distinct values per signal,
or 4 bits)
Nyquist limit applies to modulation rate: noise reduces it.
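The baud-versus-bps arithmetic above can be sketched directly (the function names are illustrative):

```python
import math

def bits_per_element(n_values):
    """Bits carried by one signal element chosen from n distinct values."""
    return int(math.log2(n_values))

def bit_rate(baud, n_values):
    """Bit rate = modulation rate (baud) x bits per signal element."""
    return baud * bits_per_element(n_values)

# The modem above: 2400 baud, 16 distinct phase/amplitude combinations.
assert bits_per_element(16) == 4
assert bit_rate(2400, 16) == 9600
```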
56Kbps modems: use PCM directly.
Station gets data 7 bits at a time, every 1/8 ms, and sets the output
level to one of 128 values.
If there is too much noise for the receiver to distinguish all those
values, then use just every other value: 64 values, conveying 6 bits,
for 48 kbps. Or 32 values (using every fourth level), conveying 5 bits,
for 5×8000 = 40 kbps.
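The 56K level arithmetic, as a sketch (the function name is illustrative):

```python
import math

def downlink_rate(levels, samples_per_sec=8000):
    """Rate achievable when the receiver can distinguish `levels`
    distinct PCM output values, one sample every 1/8000 sec."""
    return int(math.log2(levels)) * samples_per_sec

assert downlink_rate(128) == 56_000   # 7 bits per sample
assert downlink_rate(64)  == 48_000   # every other level
assert downlink_rate(32)  == 40_000   # every fourth level
```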
Quadrature Amplitude Modulation, QAM
This involves two separate signals, sent 90° out of phase and each
amplitude-modulated (ASK) separately. Because the two carriers are 90°
out of phase (eg sin(ft) and cos(ft)), the combined signal can be
accurately decoded.
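A numeric sketch of why the quadrature trick works: correlating the combined signal against each carrier over one full period recovers that carrier's amplitude, because the cos/sin cross terms average to zero (the helper names are illustrative):

```python
import math

def qam(a, b, fc, t):
    """Two amplitudes sent on carriers 90 degrees out of phase."""
    w = 2 * math.pi * fc
    return a * math.cos(w * t) + b * math.sin(w * t)

def qam_decode(sig, fc, n=1000):
    """Recover (a, b) by correlating the received signal against each
    carrier over one full period."""
    w, period = 2 * math.pi * fc, 1.0 / fc
    dt = period / n
    a = sum(sig(i * dt) * math.cos(w * i * dt) for i in range(n)) * 2 * dt / period
    b = sum(sig(i * dt) * math.sin(w * i * dt) for i in range(n)) * 2 * dt / period
    return a, b

a, b = qam_decode(lambda t: qam(3.0, -2.0, 1000.0, t), 1000.0)
assert abs(a - 3.0) < 1e-6 and abs(b + 2.0) < 1e-6
```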
Brief comparison of Fig 5-8 and Fig 8-5. Both show side-by-side
bands, interfering minimally. The first shows two bands in the voice
range (centered near 1 kHz and 2 kHz respectively), representing a
full-duplex modem pair sending in opposite directions. The second shows
multiple 4 kHz voice bands AM-modulated (using SSB) onto carriers of
60 kHz, 64 kHz, 68 kHz, ....
HDLC Bit Stuffing
See Stallings v9, section 7.3, Frame Structure (p 224).
The HDLC protocol sends frames back-to-back on a serial line; frames
are separated by the special bit-pattern 01111110 = 0x7E. This is,
however, an ordinary byte; we need to make sure that it does not appear
as data. To do that, the bit stuffing
technique is used: as the sender sends bits, it inserts an extra 0-bit
after every 5 data bits. Thus the pattern 01111110 in data would be
sent as 011111010. Here is a
longer example:
data:    0111101111101111110
sent as: 011110111110011111010
The receiver then monitors for a run of 5 1-bits; if the next bit is 0
then it is removed (it is a stuffed bit); if it is a 1 then it must be
part of the start/stop symbol 01111110.
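The sender and receiver sides of bit stuffing can be sketched as follows, using the examples above (bits as strings for readability):

```python
FLAG = "01111110"   # 0x7E frame delimiter

def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1-bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(bits):
    """Drop the 0 that follows every run of five 1-bits."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        i += 1
        if run == 5:
            run = 0
            if i < len(bits) and bits[i] == "0":
                i += 1          # stuffed bit: discard
            # a sixth 1 here would mean we are seeing the flag 01111110
    return "".join(out)

# The examples from the notes:
assert bit_stuff("01111110") == "011111010"
assert bit_stuff("0111101111101111110") == "011110111110011111010"
assert bit_unstuff(bit_stuff("0111101111101111110")) == "0111101111101111110"
```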
Some consequences:
- We have guaranteed a maximum run of 6 1-bits; if we interchange
0's and 1's and use NRZ-I, bit-stuffing has solved the clocking problem
for us.
- The transmitted size of an HDLC data unit depends on the
particular data, because the presence of stuffed bits depends on the
particular data. This will ruin any exact synchronization we had
counted on; for example, we cannot use HDLC bit-stuffing to encode
voice bytes in a DS0 line because the extra stuffed bits will throw off
the 64000-bps rate.
- The data sent, and the 01111110 start/stop symbol, may no longer
align on any byte boundaries in the underlying transmission bitstream.
MULTIPLEXING
Brief note on synchronous v asynchronous transmission (§6.1)
Sender and receiver clocks MUST resynchronize at times; otherwise, the
clock drift will eventually result in missed or added bits.
Asynchronous: resynchronize
before/after data, eg with a "start bit" before and a "stop bit" after
each byte. This is the common approach with serial lines, eg to modems.
Synchronous: send data in
blocks too big to allow waiting until the end to resynchronize;
instead, embed synchronization in the data itself (with NRZ-I, for
example, we usually resynchronize on each 1-bit).
Manchester (a form of
synchronous): interleave clock transitions with data transitions.
More efficient techniques make sure there are enough 1's scattered in
the data itself to allow synchronization without added transitions. Example:
4b/5b: every 5 bits has at least 2 transitions (2 1-bits)
Brief note on PACKETs as a form of multiplexing
The IP model, with relatively large (20 bytes for IP) headers that
contain full delivery information, is an approach allowing a large and
heterogeneous network. But simpler models exist.
The fundamental idea of packets, though, is that each packet has some
kind of destination address attached to it. Note that this may not
happen on some point-to-point links where the receiver is unambiguous,
though what "flow" the packet is part of may still need to be specified.
HDLC packet format: omit
Voice channels
The basic unit of telephony infrastructure is the voice channel, either
a 4 KHz analog channel or a 64 kbps DS0 line. To complete a call, we do
two things:
- reserve an end-to-end path of voice channels for the call
- at each switch along the way, arrange for the output of a channel
to be forwarded (switched) to the next channel in the path.
Channels are either end-user lines or are trunk channels; the latter are
channels from one switching center to the next. Within the system,
channels are identified by their Circuit
Identification Code. It is the job of Signaling
System 7 (in particular, the ISDN User Part, or ISUP, of SS7) to
handle the two steps above. The spelling "signalling" is common in
this context. SS7 also involves conveying information such as caller-ID
and billing information.
Note that VoIP does not
involve anything like channels; we just send packets until a link is
saturated. The channel-based system amounts to a hard bandwidth
reservation (with hard delay bounds!) for every call.
The channel is the logical descendant of the physical circuit. At one
point, the phone system needed one wire per call. Channels allow the
concept of multiplexing:
running multiple channels over a single cable. We'll now look at three
ways of doing this:
- L-carrier
- DS (T-carrier) lines
- SONET
More on the signaling and switching processes below
8.1: FDM (Frequency Division Multiplexing)
AM radio is sort of the archetypal example.
Frequency v time: fig 8.2
ATT "L-carrier" FDM
voice example (fig 8.5):
4kHz slots; 3.1kHz actual bandwidth (300 Hz - 3400 Hz). AM SSB (upper
sideband) modulation
onto a carrier frequency f transforms this band into the band [f,
f+4kHz], of the same width. Note that without SSB, we'd need double the
width; FM would also use much more bandwidth than the original 4kHz.
ATT group/supergroup hierarchy: Table 8.1

name                  | composition       | # channels
Group                 | 12 voice channels | 12
Supergroup            | 5 groups          | 5 × 12 = 60
Mastergroup           | 10 supergroups    | 10 × 60 = 600
Jumbogroup            | 6 mastergroups    | 6 × 600 = 3600
Mastergroup Multiplex | N mastergroups    | N × 600
L-carrier: used up through early 1970s
Why bundle calls into a hierarchy of groups? So you can multiplex
whole trunks onto one another, without demuxing individual calls.
Peeling out a single call is relatively expensive, particularly if we
want to replace that slot with a new call. For one thing, additional
noise is introduced.
Even the repeated modulation into larger and larger groups introduces
noise.
Chapter 8.2: STDM (Synchronous Time-Division Multiplexing)
Fixed-width interleaving of N low-datarate channels onto one
high-datarate line. In the course of one frame, each sender gets one
timeslot (usually equal-sized); 1 frame = N timeslots.
Timeslots are SMALL (typically 1 byte for the lines we will look at),
and have no addressing or headers.
Input channels are assumed continuous:
senders send pad bytes if nothing else. (Note that in realtime voice
transmission, pad bytes represent silence,
but still need to be transmitted to maintain timing.) Encoding and
decoding are simple; no addressing is needed!!
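The round-robin interleaving can be sketched as a toy model (the function name and pad byte are illustrative):

```python
def stdm_frame(inputs):
    """Build one STDM frame: one byte from each input, in fixed order.
    An idle input must still supply a (pad) byte -- no addressing."""
    PAD = 0x00
    return bytes(ch.pop(0) if ch else PAD for ch in inputs)

inputs = [[0x41, 0x42], [0x61], [0x31, 0x32]]
assert stdm_frame(inputs) == b"Aa1"
assert stdm_frame(inputs) == b"B\x002"   # channel 2 idle: pad byte
```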
In the telecommunications system, the first (and still common) STDM
lines are the T-carrier hierarchy (at least in the US; the E1, etc.,
hierarchy is used in Europe). The designation T1 describes the
hardware level; the designation DS1 (for Digital Signal) represents the
logical signaling level. At the bit level, B8ZS signaling is used.
These lines were used starting
mid-1970s for trunking.
Note that B8ZS does not involve any insertion of extra bits, allowing
for strict preservation of the 8000-Hz "heartbeat".
The main advantage of digital over FDM is the absence of cumulative
distortion.
A T1 line carries 24 DS0 lines. This works out to 24×64kbps = 1.536
mbps. The actual bit rate of a T1/DS1 line is 1.544mbps, a difference
of 8kbps. The basic T1 frame is 193 bits, = 24 timeslots of 8 bits
each, + 1 "framing" bit. The frame rate is 8000 frames/sec (matching
the voice sampling rate!), meaning that every 1/8000 of a second the
line carries 1 byte from each of the 24 inputs, plus 1 bit. That works
out to 8000 frames/sec × 193 bits/frame = 1,544,000 bits/sec = 1.544
mbps.
Note that the frame size 193 is a prime number. This is relatively
common in the telecom world, as opposed to the general-computing world
where things tend to be a power of 2, or a small multiple of a power of
2.
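The T1 arithmetic above, checked directly:

```python
CHANNELS, SLOT_BITS, FRAMING_BITS, FRAME_RATE = 24, 8, 1, 8000

frame_bits = CHANNELS * SLOT_BITS + FRAMING_BITS   # 24 slots + framing bit
assert frame_bits == 193

line_rate = frame_bits * FRAME_RATE                # 8000 frames/sec
assert line_rate == 1_544_000                      # 1.544 mbps

payload_rate = CHANNELS * SLOT_BITS * FRAME_RATE   # the 24 DS0s
assert payload_rate == 1_536_000
assert line_rate - payload_rate == 8_000           # the framing bit's 8 kbps
```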
All we need is a 1-byte buffer for each input channel; these are
sampled round-robin. Some input channels can get 2 or more timeslots;
buffering is only slightly
complicated.
That extra framing bit may not sound like much, and it is not. A group
of 12 T1 frames is called a superframe; the framing bit is used to
encode a special bit-pattern, eg 0101 1101 0001, that can be used to
identify lost synchronization between the endpoints. (The pattern 0101
0101 0101 could be used to synchronize frames, but not superframes.)
24×64kbps = 1.536 mbps, DS1 = 1.544; difference (due to the framing
bit) is 8kbps (1 bit/frame, × 8000 frames/sec)
framing-search mode: used for initial synchronization and when
synchronization is lost. We know a frame is 193 bits; we examine every
bit of each frame until we find one bit that consistently shows the
framing pattern.
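A toy sketch of framing-search mode (the function is hypothetical; real receivers track candidate positions serially rather than buffering the whole stream):

```python
def find_framing_position(bitstream, frame_len=193, pattern="010101"):
    """Try each bit position within the frame; return the one that
    consistently shows the framing pattern across consecutive frames."""
    n = min(len(bitstream) // frame_len, len(pattern))
    for pos in range(frame_len):
        candidate = "".join(bitstream[pos + k * frame_len] for k in range(n))
        if candidate == pattern[:n]:
            return pos
    return None

# Build 6 frames whose framing bit (position 7) alternates 0,1,0,1,...
frames = []
for k in range(6):
    f = ["0"] * 193
    f[7] = "010101"[k]
    frames.append("".join(f))
assert find_framing_position("".join(frames)) == 7
```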
When T1 lines are used to carry voice data, five frames out of six
carry 8-bit PCM (µ-law in the US). Every sixth frame has the low-order
bit taken as a signaling bit; it is set to 0 on delivery. This is why
modems just get 56kbps, not
64 kbps.
digital mode: the 8th bit in every
byte is an indicator of user data v control; lots of room for stuffing,
but with 8/7 overhead.
Full-line digital mode: use 23 bytes per frame for data; the 24th byte
is a "sync byte", a framing indicator that allows faster recovery than
the 1-bit-per-frame method.
Note from wikipedia:
allegedly in 1958 there was internal AT&T debate as to whether T-1
lines should have 1 extra bit
for framing, or 1 extra byte.
Supposedly
the 1-bit group won because "if 8 bits were chosen for OA&M
function, someone would then try to sell this as a voice channel and
you wind up with nothing." Later, AT&T realized 1 byte would have
made more sense, and introduced various bit-stealing techniques; eg the
low-order bit of each sixth byte.
The main service of a T1 line is not
simply to provide a 1.5mbit data rate; there are much cheaper ways to
do that. The point of a T1 line is that the system provides extremely
low delay for each voice line: possibly less than a millisecond over
the actual path propagation delay. Buffering is essentially zero!
What if one of the inputs runs slow?
Naive outcome: we will duplicate a byte every now and then, from the
slow source. Ultimately, there is no easy fix for slow real-time
streams. Note, however, that it is
easy to send packets
over a single TDM channel without slow-source worries! All we have to
do is pre-buffer the entire packet, so its next byte is always
available. Alas, while this approach
can be used to eliminate the possibility of one link's running slow
during the time it takes to send one packet (thus corrupting that
packet), it does mean that we
have to adopt a store-and-forward strategy at each switch: the packet
must be fully received and buffered for the next link.
DS lines are said to be plesiochronous:
close to synchronous, but with some reasonable tolerance for error.
This is usually pronounced Ples-ee-AH-krun-ous, to make it akin to
SYN-krun-ous, but some do pronounce it Ples-i-oh-KRON-us.
In plesiochronous lines, pulse
stuffing is used to accommodate minor timing incompatibilities.
If the inbound links run slightly slow, extra bits/bytes will be
inserted to take up the slack. The outbound link will have some extra
bandwidth capacity; that is, it will run slightly fast, so there will
be room for pulse stuffing even if the inbound links run slightly
faster than expected. We need either
applications that will tolerate occasional bad data (voice) or else we
need some way of encoding where the extra stuffed bits/bytes have been
put. Actually, pulse stuffing in the real world pretty much requires
that we can always identify the stuffed bits.
Table 8.3: North American DS-N hierarchy
DS0 | 64 kbps voice line
DS1 | 1544 kbps; = 8×24 + 1 = 193 = 1544/8 bits/frame
DS2 | 6312 kbps; 789 b/f = 96 bytes + 21 bits = 4×DS1 + 17 bits
      (actually 1176 bits per DS2 M-frame)
DS3 | 44736 kbps; 5592 b/f = 24×28 bytes + 27 bytes = 7×DS2 + 69 bits
      (actual frame size 4704 bits, frame time 106.402 microseconds)
bit-stuffing: flag bits indicate whether certain bytes have data or
padding
Allowable clock drift: 2 × 10⁻⁵ (1 part in 50,000), or, for a DS1,
about 30 bits/sec
DS1→DS2 multiplexing
Reference: DS3fundamentals.pdf.
This is just plain weird. If nothing else, it should convince you that
telco engineers think in bits, not
bytes.
Note from Stallings (p 253 in 9th edition)
Pulse
Stuffing
... With pulse stuffing, the outgoing data rate of the multiplexer,
excluding framing bits, is higher than the sum of the maximum
instantaneous incoming rates. The extra capacity is used by stuffing
extra dummy bits or pulses into each incoming signal until its rate is
raised to that of a locally generated clock signal. The stuffed pulses
are inserted at fixed locations in the multiplexer frame format so that
they may be identified and removed at the demultiplexer.
But how do you tell when a bit was stuffed, and when it was not?
Variability (sometimes stuffing, sometimes not) is essential if this technique is
going to allow us to "take up slack".
Here are the details for how pulse-stuffing is used to multiplex four
DS1 signals onto a DS2 signal.
First, the multiplexing is completely asynchronous; we do not align on DS1 frame boundaries
(this is the 193-bit frame).
A DS2 stuff block is 48 bits
of data, 12 from each DS1, interleaved round-robin at the bit
level, plus an overhead bit at the front for 49 bits in all. (We'll
revisit these OH bits
below; each bit is either an M-bit, a C-bit, or an F-bit, based on
position.)
An M-subframe is six
stuff-blocks, holding 72 bits of each DS1, total 288+6=294 bits (294 =
6×49)
An M-frame is four subframes (M1,
M2, M3, M4), holding 288 bits of each
DS1, total 294×4=1176
Each M-frame can accommodate up to 1 "stuff bit" per DS1 input. A stuff
bit is a bit that upon demultiplexing does not
belong to that DS1 stream, representing an opportunity for that input
to run slow. The DS2 output stream runs slightly fast (ie
DS1 inputs are "slow"), so stuff bits always represent "missed" bits.
There is no way to handle inputs running fast.
If the input buffer is running low on bits, we insert a stuffed bit to
give it a chance to catch up.
Naming the overhead bits:
In each M-subframe, there are six OH bits: ⟨M, C, F, C, C, F⟩.
M0 | C | F0 | C | C | F1
M1 | C | F0 | C | C | F1
M1 | C | F0 | C | C | F1
Mx | C | F0 | C | C | F1

In this diagram, each cell represents a stuff block, with leading bit
M, C or F; the subscripts give that bit's fixed value. Each row
represents an M-subframe; the rows represent subframes M1, M2, M3, M4.
The entire grid is an M-frame.
There are 4 M-bits in an M-frame, spelling out the bit pattern 011x,
where x varies.
The F-bits are for frame alignment; the first is always 0 and the
second is always 1.
Stuffing for input stream i is done in M-subframe Mi, i = 1..4.
If the three C-bits of that subframe (ith row) are all 1's, then the
first bit of the ith input stream in the last stuff block is stuffed;
ie is not real. If the C-bits are all 0's, then there was no stuffing.
Actual use: 2 out of 3 1's, versus 2 out of 3 0's. WHY WOULD WE DO
THAT??? Isn't any bit error
equally fatal??
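One plausible answer (my reading, not stated in the text): the three C-bits are read by 2-of-3 majority vote, so a single flipped C-bit cannot change the stuff/no-stuff verdict. That matters because a mis-read stuff decision inserts or deletes a bit, misaligning every subsequent bit of that DS1, whereas an ordinary data-bit error corrupts only one sample:

```python
def slot_is_stuffed(c_bits):
    """Read the three C-bits by 2-of-3 majority vote; True means the
    designated position in the last stuff block is a stuffed bit."""
    return sum(c_bits) >= 2

assert slot_is_stuffed([1, 1, 1]) is True
assert slot_is_stuffed([1, 0, 1]) is True    # one C-bit error survived
assert slot_is_stuffed([0, 1, 0]) is False   # likewise
```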
Note the size of the blocks never changes.
With four DS1's, the data rate for a DS2 needed is 4×1.544×49/48 =
6.30466666.. Mbps
But the actual DS2 rate is 6.312 Mbps = 8 kbps × 789
We stuff bits as necessary to take up the slack.
total DS2 bits per second:        6312000
DS1×4 data bits per second:      -6176000
DS2 overhead bits per second:     -128816
                                  _______
total stuff bits:                    7184
divided by 4:                        1796 bps per DS1
At 8000 frames/sec, that's roughly 1 stuff bit every 4.5 frames, or 1
bit every 860 bits.
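The column of figures above can be reproduced directly:

```python
DS2_RATE = 6_312_000            # bps
DS1_RATE = 1_544_000            # bps

# 1 overhead bit per 49-bit stuff block:
overhead = round(DS2_RATE / 49)
stuff_capacity = DS2_RATE - overhead - 4 * DS1_RATE

assert overhead == 128_816              # OH bits per second
assert stuff_capacity == 7_184          # total stuff bits per second
assert stuff_capacity // 4 == 1_796     # per DS1

# Roughly one stuff bit every 4.5 DS1 frames (8000 frames/sec):
assert abs(8000 / 1796 - 4.45) < 0.01
```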
DS2→DS3: Same strategy is possible, except nowadays this is generally
done as an integrated process multiplexing 28 DS1's into a DS3. So the
DS3 stuff bits are never
needed (all the slack is taken up at the DS1→DS2 level), so they've
been adopted for line-signaling purposes.
As we move higher up the hierarchy, more and more stuff bits are
needed. A different approach is used for very-high-speed links.
SONET
Good reference: sonet_primer.pdf
Sonet is said to be truly synchronous:
timing is supposed to be exact, to within ±1 byte every several frames.
Bit-stuffing (pulse stuffing) was seen by the telecommunications
industry as a major weakness in the T-carrier system, introducing more
and more wasteful overhead as the multiplexing grew. The core issue is
that when you combine several "tributaries" into one larger data
stream, your big stream needs extra capacity to be able to handle
speedups and
slowdowns in the inputs.
SONET was an attempt to avoid this problem.
First look at SONET hierarchy: Stallings Table 8.4 (largely reproduced
below)
STS-1/OC-1    |         | 51.84 Mbps
STS-3/OC-3    | STM-1   | 155.52 Mbps
STS-12/OC-12  | STM-4   | 622.08 Mbps
STS-48/OC-48  | STM-16  | 2488.32 Mbps
STS-192       | STM-64  | 9953.28 Mbps
STS-768       | STM-256 | 39.81312 Gbps
STS-3072      |         | 159.25248 Gbps
STS = Synchronous Transport Signal
OC = Optical Carrier
STM = Synchronous Transport Module
Note that each higher bandwidth is exactly
4 times the previous (or 3 for the first row). There is no bit
stuffing, though there is a
mechanism to get ahead or fall behind one byte at a time.
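The clean multiples can be checked directly (a sketch):

```python
STS1_MBPS = 51.84

def sts_rate(n):
    """STS-n runs at exactly n times the STS-1 rate; each step up the
    hierarchy is a clean x3 or x4 with no stuffing overhead added."""
    return n * STS1_MBPS

for n, expected in [(1, 51.84), (3, 155.52), (12, 622.08),
                    (48, 2488.32), (192, 9953.28), (768, 39813.12)]:
    assert abs(sts_rate(n) - expected) < 1e-6
```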
basic SONET frame (Stallings v9 Fig 8.11)
- Transport overhead, path overhead
- framing bytes: A1, A2 = 0xF628
- multiplexing number: STS-ID
- E1, E2, F1: special header-only voice lines
- H1-H3: frame alignment; one of these allows byte stuffing, and/or
  frame drift.
row 1:  A1 | A2 | J0 || J1 | data cols 4-29 | J1 | data cols 31-58 | J1 | data cols 60-90
row 2:     | E1 |    ||
row 3:     |    |    ||
row 4:  H1 | H2 | H3 ||
row 5:     |    |    || F2
row 6:     |    |    ||
row 7:     |    |    ||
row 8:     |    |    ||
row 9:     | E2 |    ||

(The first three columns are the transport overhead; the column headed
J1, containing F2, is the path overhead, the first column of the SPE.)
The payload envelope, or SPE, is the 87 columns reserved for path
data. The SPE can "float"; the first byte of the SPE can be any byte in
the non-overhead part of the frame; the next SPE then begins at the
corresponding position of the next frame. (The diagram above does not show a floating SPE.) This
floating allows for the
data to speed up or slow down relative to the frame rate, without the
need for T-carrier-type "stuffing".
SPEs are generally spread over two consecutive frames. It is often
easier to visualize this if we draw the frames aligned vertically.
The first column of the SPE is the path
overhead; columns 30 and 59 are also reserved. In the diagram
above, these are the columns beginning with J1. Total data columns: 84.
Note that the path-overhead columns mean that the longest run of bytes
before a 1-bit is guaranteed is about 30; the SONET clocking is usually
accurate enough to send 240 0-bits (30 bytes) and not lose count.
However, sometimes SONET does lose count, and has to re-enter the
"synchronization loop". This can involve a delay of a few hundred
frames (~40-50 ms). Packets with the "wrong" kind of data (resulting in
long runs of 0-bits after scrambling) are often the culprit; carriers
don't like this.
Frame-alignment algorithm: search for frames where A1A2 matches the
predetermined 0xF628 pattern
SONET frames are always sent at 8000 frames/sec (make that 8000.000
frames/sec). Thus, any single byte position in a frame can serve, over
the sequence of frames, as a DS0 line, and SONET can be viewed as one
form of STDM.