Comp 346/488: Intro to Telecommunications
Tuesdays 7:00-9:30, Lewis Towers 412
Class 10: Nov 9
Reading:
Chapter 9, Spread Spectrum and CDMA
Chapter 14, Cellular networks
Chapter 11, ATM
Exam problem 2, and working with units
Program 2
Cellular telephony
9.4: CDMA
Here, each bit becomes k "chips" (k ~ 128). The chips are sent using FHSS or DSSS. Each user has an individual "chipping code".
As with DSSS, one consequence is that data is spread out over a range of frequencies, each with low power
basic idea:
- chipping codes
- dot product
- orthogonality
- recovery from additive signal
- compare to <1,0,0,0,0,0>, <0,1,0,0,0,0>, etc
K users: each user transmits k chips on k frequencies at the same time.
Ideally we use K=k, but in practice K can be somewhat larger. All the
transmissions "add" together in the airwaves. The transmissions are done in such a way
that each receiver can extract its own signal, even though all the
signals overlap.
Trivial way to do this: ith sender sends on frequency i only, and does
nothing on other frequencies. This is just FDM. We don't want to use
this, though.
Example from Table 9.1:
CA∙CB = 0, CA∙CC = 0, CB∙CC = 2 (using ∙ for "dot product")
A sends (+1)*CA or (-1)*CA
suppose D = data = a*CA + b*CB, where a, b = ±1
Then D∙CA = a*CA∙CA + b*CB∙CA = a*6 + b*0 = 6a; we have recovered A's bit!
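The recovery step can be checked directly. A quick sketch, using chipping codes consistent with the dot products above (CA∙CB = 0, CA∙CC = 0, CB∙CC = 2); these are illustrative values, not necessarily the exact Table 9.1 entries:

```python
# Chipping codes (length 6) with the dot products given above
CA = [1, -1, -1, 1, -1, 1]
CB = [1, 1, -1, -1, 1, 1]
CC = [1, 1, -1, 1, 1, -1]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# A sends bit a (+1 or -1) as a*CA; B simultaneously sends b*CB.
# The airwaves simply add the two transmissions.
a, b = +1, -1
D = [a * x + b * y for x, y in zip(CA, CB)]

# The receiver correlates with CA:
# D·CA = a*(CA·CA) + b*(CB·CA) = a*6 + b*0 = 6a
recovered_a = dot(D, CA) // dot(CA, CA)   # equals a: A's bit is recovered
```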
Cell phones
analog
uses FDMA (Frequency Division Multiple Access): allocates a 30kHz channel to each call (actually two channels, one in each direction!), held for the duration of the call.
TDMA
digitizes voice, compresses it 3x, and then uses TDM to send 3 calls over one 30kHz channel.
GSM
form of TDMA with smaller cells and more dynamic allocation of channels. Note
that if everyone talks at once, there might not be enough channels for
all! Data form: GPRS (General Packet Radio Service)
CDMA
Sprint PCS, US Cellular, others
code division multiple access: kind of weird. This is a form of "spread
spectrum". Signals are encoded, and spread throughout the band. Any one
bit is sent as ~128 "chips", but chips may be used by multiple senders.
Demultiplexing is achieved because codes are "orthogonal", as above.
This strategy gives good gradual degradation as # of calls
increases, and good eavesdropping prevention (though not as good as
strong encryption). It also gives excellent bandwidth utilization (3x
TDMA).
There is a deep relationship in CDMA between power and bandwidth, but in a good
way: phones negotiate to use the smallest power for their needs
second reason for CDMA: allows gradual performance falloff with oversubscription
Chapter 14
Fig 14.1 on cell geometries: a cell diameter is typically a few miles, though in cities it can be much less
Fig 14.2 on frequency reuse (for TDMA, FDMA)
- cell sectoring
- cell splitting (works down to ~ 100 m; cells smaller than ~1km are known as microcells)
- adding frequencies ($$$)
- frequency borrowing
- microcells
Fig 14.6 on finding phones: note that paging is pretty rare; phones instead "check in" regularly.
Fast (half-wavelength, or 6") fading v slow fading
fastfade.pdf (from the IEEE 802.11 "wi-fi" specification)
AMPS: FDMA, analog
channel spacing 30kHz!!! So big because of multipath problem
Multipath distortion:
- reflection
- diffraction
- scattering
Figure 14.7
phase distortion: multipath can lead to different copies arriving
sometimes in phase and sometimes out of phase (this is the basic cause
of "fast fading").
Intersymbol interference (ISI) and Fig 14.8
Note that CDMA requires good POWER CONTROL
Phones all adjust signal strength downwards if received base signal is stronger than the minimum.
Go through Andrews presentation at http://users.ece.utexas.edu/~jandrews/publications/cdma_talk.pdf
- Walsh codes at base station
- PN codes in receivers (because we no longer can guarantee synchronization)
- Frequency reuse issues
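Walsh codes can be generated by the standard Hadamard doubling construction; a small sketch (the function name is mine):

```python
def walsh(n):
    """Return 2**n mutually orthogonal Walsh codes, each of length 2**n.
    Doubling step: from W build [[W, W], [W, -W]], starting from [[1]]."""
    W = [[1]]
    for _ in range(n):
        W = ([row + row for row in W] +
             [row + [-c for c in row] for row in W])
    return W

# Every distinct pair of codes has dot product 0, which is what lets the
# base station separate users sharing the band.
codes = walsh(3)   # 8 codes of length 8
```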
SMS (text messaging)
This originated as a GSM protocol, but is now universally available under CDMA as well.
Basically, the message piggybacks on "system" packets.
Erlang model
Suppose we know the average
number of simultaneous users at the peak busy time (could be calls
using a trunk line, could be calls to help desk). We would probably
figure that at least some of the time, the demand might be higher than
average. How many lines do we actually need?
To put this another way, suppose we flip 200 coins. The average number
of heads is 100. What are the odds that we get less than or
equal to 110 heads? Or, to make the parallel closer: suppose we keep
doing this, and we want enough rewards on hand to pay every head-getter 99% of the time. How many rewards do we have to keep on hand?
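The coin questions can be answered exactly with the binomial distribution; a quick sketch:

```python
from math import comb

n = 200  # coins flipped; heads ~ Binomial(200, 1/2), mean 100

# Odds of at most 110 heads (comes out a bit over 90%)
p_le_110 = sum(comb(n, k) for k in range(111)) / 2**n

# Smallest stock of rewards m such that P(heads <= m) >= 0.99;
# m lands somewhat above the mean of 100, but not dramatically so
cdf, m = 0.0, -1
while cdf < 0.99:
    m += 1
    cdf += comb(n, m) / 2**n
```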
One parameter is the acceptable blocking rate:
the fraction of calls that are allowed to go unconnected. As this rate
gets smaller, the number of lines needed will increase.
Example: how
many lines do we need to handle a peak rate of 100 calls at any one
time (eg 2,000 users each spending 5% of the day on the phone), and a
maximum blocking rate of 0.01 (1%)?
(From http://erlang.com/calculator/erlb and my own program)
We assume that calls for which a line is not available are BLOCKED, not queued.
rate | allowed blocking | lines needed | excess
10   | 0.01             | 18           | 180%
20   | 0.01             | 30           | 150%
50   | 0.01             | 64           | 128%
100  | 0.01             | 118          | 118%
180  | 0.01             | 201          | 111%
200  | 0.01             | 221          | 110%
500  | 0.01             | 526          | 105%
1000 | 0.01             | 1029         | 103%
To a reasonable approximation, if we have 200 lines, we can handle ~180 calls.
Here's a table where the allowed blocking rate is 0.001:
rate | allowed blocking | lines needed | excess
10   | 0.001            | 21           | 210%
20   | 0.001            | 35           | 175%
50   | 0.001            | 71           | 142%
100  | 0.001            | 128          | 128%
180  | 0.001            | 216          | 120%
200  | 0.001            | 237          | 119%
500  | 0.001            | 554          | 111%
1000 | 0.001            | 1071         | 107%
# lines needed is always greater than the average peak rate. The question is by how much.
As allowed blocking rate goes down, the # of lines needed goes up, for the same rate.
Also, as the rate goes up, the % of extra lines needed goes DOWN, due to "law of large numbers"
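Values like the ones in the tables can be reproduced with the standard Erlang B recurrence; a sketch along these lines (function names are mine, and small rounding differences from any given calculator are possible):

```python
def erlang_b(traffic, lines):
    """Blocking probability for `traffic` erlangs offered to `lines` lines,
    via the recurrence B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, lines + 1):
        b = traffic * b / (n + traffic * b)
    return b

def lines_needed(traffic, blocking):
    """Smallest number of lines whose blocking probability is <= `blocking`."""
    n = 0
    while erlang_b(traffic, n) > blocking:
        n += 1
    return n
```

Calls that find no free line are assumed blocked, not queued, matching the assumption above.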
ATM: Asynchronous Transfer Mode
versus STM: what does the "A" really mean?
Basic idea: virtual-circuit networking using small (53-byte) fixed-size cells
Review Fig 10.11 on large packets v small, and latency. (Both sizes give equal throughput, given an appropriate window size.)
basic cellular issues
delay issues:
10 ms echo delay: 6 ms fill, ~4 ms for the rest
In small countries using 32-byte ATM, the one-way echo delay might be 5 ms
echo cancellation is not cheap!
150 ms delay: threshold for "turnaround delay" to be serious
packetization: 64kbps PCM rate, 16kbps compressed rate. Fill times: 6 ms, 24 ms respectively
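Those fill times follow directly from the payload size and the voice rate; for a 48-byte ATM payload:

```python
# Time to accumulate one 48-byte cell payload of voice data
cell_payload_bits = 48 * 8            # 384 bits per cell payload

pcm_rate = 64_000                     # bits/sec, uncompressed PCM
compressed_rate = 16_000              # bits/sec, 4x-compressed voice

fill_pcm = cell_payload_bits / pcm_rate               # 0.006 s = 6 ms
fill_compressed = cell_payload_bits / compressed_rate # 0.024 s = 24 ms
```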
buffering delay: keep buffer size >= 3-4 standard deviations of the
arrival-time variation, so buffer underrun will almost never occur.
This creates a direct relationship between buffering delay and jitter.
encoding delay:
time to do sender-side compression; this can be significant but usually is not.
voice trunking (using ATM links as trunks) versus voice switching (setting up ATM connections between customers)
voice switching requires support for signaling
ATM levels:
- as a replacement for the Internet
- as a LAN link in the Internet
- MAE-East early adoption of ATM
- as a LAN + voice link in an office environment
A few words about how TCP congestion control works
- window size for single connections
- how tcp manages window size: Additive Increase / Multiplicative Decrease (AIMD)
- TCP and fairness
- TCP and resource allocation
- TCP v VOIP traffic
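The AIMD behavior above can be sketched with a toy simulation (hypothetical parameters: window grows by 1 segment per RTT, halves on loss; real TCP is considerably more involved):

```python
def aimd(rounds, loss_every):
    """Trace the congestion window over `rounds` RTTs, with a loss
    every `loss_every`-th round."""
    w, history = 1, []
    for r in range(1, rounds + 1):
        history.append(w)
        if r % loss_every == 0:
            w = max(1, w // 2)   # multiplicative decrease on loss
        else:
            w += 1               # additive increase otherwise
    return history

# With periodic losses the window traces the familiar AIMD "sawtooth"
trace = aimd(30, 10)
```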
ATM
basic argument (mid-1980s): Virtual circuit approach is best when
different applications have very different quality-of-service demands
(eg voice versus data)
VCC (Virtual Circuit Connection) and VPC (Virtual Path: bundle of VCCs)
Goal: allow support for QoS (eg cell loss ratio)
ATM cell format: Fig 11.6 (9th ed.), 11.4 (7th & 8th eds)
3-bit PT (Payload Type) field: user bit, congestion bit, SDU bit
CLP (Cell Loss Priority) bit: if we have congestion, is this in the first group to go or the second?
48-byte compromise [anecdote?]
why cells:
- reduces packetization (fill) delay
- fixed size simplifies switch architecture
- lower store-and-forward delay
- reduces time waiting behind lower-priority big traffic
no-reordering rule, and its consequences