Comp 346/488: Intro to Telecommunications

Tuesdays 7:00-9:30, Lewis Towers 412

Class 9: Nov 2

Reading:
    Chapter 9, Spread Spectrum and CDMA
    Chapter 14, Cellular networks
    Chapter 11, ATM

Signal processing

Convolution

Specifically, convolution of an array samples[] by a second, shorter array K of doubles, of length len, means setting result[n] as follows:

    double sum = 0;
    for (int i = 0; i < len; i++) sum += samples[i+n] * K[len-i-1];
    result[n] = (short) sum;
Sometimes you might divide by len; more often, K has been adjusted so that the K[i] add up to 1.0 (or 0.0). Purists use the reversed indexing of K, as above, but if you replace K[len-i-1] with K[i], all you have to do is make sure that K itself is created in reversed order. If K is symmetric, you won't even have to do that. In this form, result[n] is the sum of samples[i+n]*K[i], for i < len.

Sometimes it's important that result[n] only depend on samples[k] for k≤n; that is, there is no lookahead. If that's the case, an appropriate formula might be
    for (int i = 0; i < len; i++) sum += samples[n-i] * K[i];
All the forms are interchangeable, after adjusting for differing forms of K and for shifting between result[n] and result[n+len].

You can do two things when you reach the upper end of the array samples[]: quit at n = samples.length - len, or else use 0.0 wherever you need a value samples[n] for n ≥ samples.length.
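
Putting this together, here is a minimal sketch of the whole operation in the reversed-index form above (the method name convolve is mine, and I use the zero-padding option at the upper end):

    static short[] convolve(short[] samples, double[] K) {
        int len = K.length;
        short[] result = new short[samples.length];
        for (int n = 0; n < samples.length; n++) {
            double sum = 0;
            for (int i = 0; i < len; i++) {
                // use 0.0 wherever samples[i+n] runs past the end
                if (i + n < samples.length) sum += samples[i+n] * K[len-i-1];
            }
            result[n] = (short) sum;
        }
        return result;
    }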

In the literature, K is often said to be a kernel, though this is a very different usage than in the OS sense.

Mathematically, convolving a signal with a kernel K(t), sometimes denoted signal*K, means taking the integral ∫ signal(t)×K(x-t)dt over an appropriate interval such as [0,len]. Convolution is the time-domain version of pointwise multiplication in the frequency domain: if the frequencies are, say, ⟨f0, f1, f2, f3, f4, f5, f6, f7⟩ and we multiply pointwise by ⟨0,0,0,1,0,0,0,0⟩ we get f3; that is, we have isolated a particular frequency. If we multiply pointwise by ⟨0,0,0,0,1,1,1,1⟩ we get ⟨0,0,0,0,f4,f5,f6,f7⟩; that is, we have implemented a highpass filter. Both frequency isolation (above) and low/high/bandpass filters are indeed implemented as convolutions in the time domain.

Mathematical note: the Fourier transform is the integral X(f) = ∫ signal(t)×e^(-2πift) dt over the interval from -∞ to ∞; this is thus, in effect, a convolution with e^(2πift). This is a frequency-domain view; note the dependence on a given frequency f: X(f) is the energy at frequency f. Note also that there is no longer any dependence on time t; that has "gone away". We can also denote this as F(signal)(f); that is, F(signal) is the Fourier transform, and F(signal)(f) is one particular value. It is the Fourier transform that converts from time domain to frequency domain, and back. The Fourier transform F(s*K) of the convolution s*K is the pointwise product of F(s) and F(K).

A kernel to implement an echo is the following [this is in the no-lookahead form, corrected from last week]:
	1 0 0 0 0 0 ... (length depends on delay) ... 0 0 A
If the length here is len+1, the sum becomes for (int i = 0; i <= len; i++) sum += samples[n-i]*K[i]. Since all the K[i] are zero except at i=0 and i=len, this reduces to setting result[n] = samples[n] + A*samples[n-len]; that is, adding back into the samples the signal from len units ago, reduced by the factor A.
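
Since the kernel is almost all zeroes, a sketch can skip the convolution loop entirely and apply the reduced formula directly (the method name and the treatment of the first len samples are mine):

    static short[] echo(short[] samples, int len, double A) {
        short[] result = new short[samples.length];
        for (int n = 0; n < samples.length; n++) {
            double sum = samples[n];
            // no echo available yet for the first len samples
            if (n >= len) sum += A * samples[n - len];
            result[n] = (short) sum;
        }
        return result;
    }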

If the kernel is of the form
    1 0 0 0 0 0 ... (any number of zeroes)
then the result of the convolution is simply the original signal again. Thus, this kernel is sometimes called the Identity.

Lowpass Filters

These are operations that discard all frequencies above (for lowpass) a given cutoff, or below it (for highpass). Combining the two, we can create a bandpass filter that allows only frequencies in a certain range.

There are various analog-electronics circuits for doing this, but digital filters are generally much more thorough. A lowpass filter for cutoff frequency f is an appropriate form of the sinc function: sinc(x) = sin(x)/x for x≠0, sinc(0) = 1. Specifically, to pass only frequencies less than or equal to f, you convolve with sinc(2πfx), generally over a range of x including several complete cycles above and below 0.

When working with digital samples, we will assume that the frequency f is expressed in sampling_ticks/sec; for a sampling rate of 8000, f = frequency_in_Hertz / 8000. The wavelength, also measured in samples, is 1/f. For example, if the frequency in Hertz is 400 Hz, then the frequency in sampling ticks is f = 400/8000 = .05, and the wavelength is 20 ticks.

The sinc function extends to infinity in both directions, tapering off only slowly; we need to truncate it. Choose an integer N several times the wavelength (in sampling ticks); we usually choose N close to an integer multiple of the wavelength. Let
    for (int i = 0; i <= 2*N; i++) {
        if (i == N) L[i] = A * 2 * pi * f;   // the limiting value of sin(x)/x
        else L[i] = A * Math.sin(2*pi*f*(i-N)) / (i-N);
    }
(Note that L[N] is chosen to ensure continuity: it is the limit of the neighboring values.)

Mathematical note: you get the sinc function by starting with the frequency-domain "cutoff" function, where f0 is the cutoff frequency: C(f) = 1 for f<f0, 0 for f>f0. Given a signal s(t), we convert it to the frequency domain by taking the Fourier transform F(s); at that point, we implement the lowpass filter simply by multiplying by C: F(s)(f) × C(f) = F(s)(f) for f<f0, and 0 for f>f0. If we then take the inverse Fourier transform of the product F(s)×C, we get s * F^(-1)(C), where * stands for convolution. And one can show, by straightforward calculus manipulations, that F^(-1)(C) is the sinc function. See table A2 in Stallings.

This kernel is symmetric, so it doesn't matter in which order you do the sum; i.e., it doesn't matter whether you use L[i] or L[2N-i].

The above kernel ends abruptly at the ends; to taper it more gradually, the following tweak (the "Blackman window") is often recommended:

    for (int i = 0; i <= N; i++) {
        double factor = 0.42 - 0.5*Math.cos(pi*i/N) + 0.08*Math.cos(2*pi*i/N);
        L[i] *= factor;       // taper both ends symmetrically
        L[2*N-i] *= factor;   // (harmless at i == N: the factor there is 1.0)
    }
This "cosine" wave rises and falls just once in the entire range, versus many wavelengths for the sin part above.

Why does this work? Intuitively, for a signal whose frequency is above the cutoff, the terms of the convolution sum oscillate in sign and approximately cancel; for a signal below the cutoff frequency, the terms reinforce and the signal passes through.
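
Putting the sinc formula and the Blackman window together, a kernel builder might look like the following sketch (the method name is mine; A is omitted since the final normalization, making the coefficients sum to 1.0 as mentioned earlier, would cancel it anyway):

    // Build a lowpass kernel of length 2N+1 for cutoff frequency f,
    // where f is in cycles/sample (Hertz / 8000 at an 8000 sampling rate).
    static double[] lowpassKernel(double f, int N) {
        double[] L = new double[2*N + 1];
        for (int i = 0; i <= 2*N; i++) {
            if (i == N) L[i] = 2 * Math.PI * f;   // limit of sin(x)/x at 0
            else L[i] = Math.sin(2 * Math.PI * f * (i-N)) / (i-N);
            // Blackman window, applied across the full range
            L[i] *= 0.42 - 0.5 * Math.cos(Math.PI * i / N)
                         + 0.08 * Math.cos(2 * Math.PI * i / N);
        }
        double total = 0;
        for (double v : L) total += v;
        for (int i = 0; i < L.length; i++) L[i] /= total;  // sum is now 1.0
        return L;
    }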

Highpass Filters

Suppose you want the frequencies higher than a given frequency f0. The simplest approach is to subtract the result of the lowpass filter for f0, which we'll call L0, from the original signal. Convolution is linear, meaning s*(K1±K2) = s*K1 ± s*K2. If I is the identity kernel, we have s*I = s, and so s-s*L0 = s*I - s*L0 = s*(I-L0). The highpass kernel H0 is thus just I-L0.
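
In code, a sketch of I-L0 (assuming L0 has odd length, so the identity's single 1 lands at the center index):

    // Highpass kernel H0 = I - L0: negate the lowpass kernel and add
    // the identity's 1 at the center position.
    static double[] highpassKernel(double[] L0) {
        double[] H0 = new double[L0.length];
        for (int i = 0; i < L0.length; i++) H0[i] = -L0[i];
        H0[L0.length / 2] += 1.0;
        return H0;
    }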

Bandpass Filters

To do bandpass filtering, keeping only those frequencies between f1 and f2 (with f1 < f2), one first applies a lowpass filter K1 for f2, and then applies a highpass filter K2 for f1. The result can be combined into a single filter by convolution: K = K1 * K2; see the sketch below. If the lengths of K1 and K2 are m and n, then K will have length m+n-1.
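
A sketch of combining two kernels into one (the method name is mine); note the m+n-1 length:

    // Convolve kernels K1 (length m) and K2 (length n); the result
    // has length m + n - 1.
    static double[] combine(double[] K1, double[] K2) {
        double[] K = new double[K1.length + K2.length - 1];
        for (int i = 0; i < K1.length; i++)
            for (int j = 0; j < K2.length; j++)
                K[i+j] += K1[i] * K2[j];
        return K;
    }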

One application of bandpass filters is to filter an analog voice channel to the range [300 Hz, 3400 Hz] before digitization.

Filter length versus sharpness

I'll have to draw a picture for this one.

DTMF (Dual-Tone Multi-Frequency, aka TouchTone)

How do we recognize the frequencies dialed by a touchtone phone? One way might be with a sufficient number of bandpass filters, but this is not how it is done in practice. Instead, the Goertzel algorithm is used: we iteratively compute s(n) by:

    s(n) = samples[n] + 2cos(2πf)*s(n-1) - s(n-2);    s(-1) = s(-2) = 0;

Power level at frequency f is then calculated from s(N), s(N-1), where N = samples.length-1:
    power = s(N)² + s(N-1)² - 2cos(2πf)*s(N-1)*s(N)
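
As a sketch (the method name is mine; f is again in cycles/sample, e.g. 697.0/8000 for the 697 Hz row tone):

    static double goertzelPower(short[] samples, double f) {
        double coeff = 2 * Math.cos(2 * Math.PI * f);
        double sPrev = 0, sPrev2 = 0;        // s(n-1) and s(n-2)
        for (short sample : samples) {
            double s = sample + coeff * sPrev - sPrev2;
            sPrev2 = sPrev;
            sPrev = s;
        }
        // power = s(N)² + s(N-1)² - 2cos(2πf)*s(N)*s(N-1)
        return sPrev*sPrev + sPrev2*sPrev2 - coeff*sPrev*sPrev2;
    }

A DTMF detector would run this once for each of the eight frequencies below and report the strongest row tone and the strongest column tone.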

The frequencies:

              1209 Hz   1336 Hz   1477 Hz   1633 Hz
    697 Hz       1         2         3         A
    770 Hz       4         5         6         B
    852 Hz       7         8         9         C
    941 Hz       *         0         #         D

Cellular telephony


Cellular issues: neighboring cells interfere with each other!
Cellular phones must also resist eavesdropping (though encryption is the "right" way to do that)
Cell phones must somehow deal with multipath distortion!!

Three modulation techniques:
FDMA: Frequency Division Multiple Access: essentially FM
    everyone talks on a different channel
TDMA: Time Division Multiple Access: essentially time-division multiplexing
    everyone talks one at a time, quickly, during their own time slot
CDMA: Code Division Multiple Access:
    everyone talks in a different language?

CDMA handles multipath distortion (reflected, delayed copies of the original signal) better than the others. While all three are theoretically equally efficient in terms of signal bandwidth, in practice the latter two need guard bands to handle multipath distortion, and so CDMA allows more channels. CDMA allows reuse of frequencies from adjoining cells, for a seven-fold improvement over TDMA/FDMA. It also may allow more graceful signal degradation as the channel is oversubscribed.

Hedy Lamarr, b. 1914, Vienna
One of the great actresses of Hollywood's golden era: wikipedia.org/wiki/Hedy_Lamarr
inventor of FHSS

Spread Spectrum

9.2: FHSS & Hedy Lamarr / George Antheil

Both sides generate the same PseudoNoise [PN] sequence. The carrier frequency jumps around according to PN sequence.
In Lamarr's version, player-piano technology would have managed this; analog data was used. Lamarr's intent was to avoid radio eavesdropping (during WWII); an eavesdropper wouldn't be able to guess the PN sequence and thus would only hear isolated snippets.

Digital world:
MFSK: multiple FSK; 2^L frequencies to send L bits at a time (see Fig 5.9).
Review this simple example of MFSK
Note one individual frequency is used to encode L bits; we need 2^L frequencies in all to send all L-bit patterns (a small sketch follows this list)
Note we are doing (limited) frequency hopping already
The goal of FHSS/MFSK is to greatly increase the hopping range
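
As a tiny illustrative sketch (the base frequency f0 and tone spacing fd are parameters I've invented, not from the text): with L = 2 we need 2^2 = 4 tones, and each 2-bit pattern selects one.

    // MFSK sketch: an L-bit pattern (0 <= bits < 2^L) selects one of
    // 2^L tones, spaced fd apart starting at f0.
    static double mfskTone(int bits, double f0, double fd) {
        // e.g. L=2: 00 -> f0, 01 -> f0+fd, 10 -> f0+2fd, 11 -> f0+3fd
        return f0 + bits * fd;
    }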

FHSS: use PN sequence to spread the frequencies of MFSK over multiple "blocks"; each MFSK instance uses one block.
Tc: time for each unit in the PN sequence
Ts: time to send one block of L bits

slow v fast FHSS/MFSK: Fig 9.4 v 9.5
slow: Tc >= Ts: send >= 1 block of bits for each PN tick
fast: Tc < Ts: use >1 PN tick to send each block of bits

feature of fast: we use two or more frequencies to transmit each symbol; if there's interference on one frequency, we're still good.

Consider 1 symbol = 1 bit case

If we use three or more frequencies/symbol, we can resolve errors with "voting"

What is this costing us?


9.3: DSSS

Each bit in the original becomes multiple bits, or chips. We end up using a lot more bandwidth for the same amount of information! The purpose: resistance to static and interference.

Same spreading concept as fast FHSS, but stranger.

Basically we modulate the signal much "faster", which spreads the bandwidth as a natural consequence of modulation. We no longer have a discrete set of "channels" to use.

Suppose we want to use n-bit spreading. Each side, as before, has access to the same PN sequence. To send a data bit, we XOR the bit against the next n bits of the PN sequence and transmit the resulting n chips.
The receiver then XORs the received chips against the same n PN bits; the PN sequence cancels, leaving n copies of the original data bit.
See Figure 9.6.

Sometimes it is easier to think of the signal as taking on values +1 and -1, rather than 0/1. Then, we can multiply the data bit by the n-bit PN sequence; multiplication by -1 has the effect of inverting.
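
A sketch in the ±1 view (array and method names mine): spreading multiplies the data bit by each of the next n PN chips; despreading multiplies by the same chips and sums, giving ±n when there are no errors.

    static int[] spread(int dataBit, int[] pn) {   // dataBit is +1 or -1
        int[] chips = new int[pn.length];
        for (int i = 0; i < pn.length; i++) chips[i] = dataBit * pn[i];
        return chips;
    }

    static int despread(int[] received, int[] pn) {
        int sum = 0;                               // ±n if no chips were hit
        for (int i = 0; i < pn.length; i++) sum += received[i] * pn[i];
        return sum >= 0 ? +1 : -1;                 // majority wins despite noise
    }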

Figure 9.9 shows the resultant spectrum. Because we're sending at an n-times-faster signaling rate, our spectrum is also n times wider. Why would we want that? Because jamming noise is likely to be isolated to a small part of the frequency range; after the decoding process, the jamming power is proportionally less.

9.4:  CDMA

Here, each bit becomes k "chips" (k ~ 128). The chips are sent using FHSS or DSSS. Each user has an individual "chipping code".

As with DSSS, one consequence is that data is spread out over a range of frequencies, each with low power

basic idea:
K users: each user transmits k chips on k frequencies at the same time. Ideally we use K=k, but in practice K can be rather larger. All the transmissions add together. The transmissions are done in such a way that each receiver can extract its own signal, even though all the signals overlap.

Trivial way to do this: the ith sender sends on frequency i only, and does nothing on the other frequencies. This is just FDM. We don't want to use this, though.

Example from Table 9.1:
CA∙CB = 0, CA∙CC = 0, CB∙CC = 2 (using ∙ for "dot product")

A sends (+1)*CA or (-1)*CA

suppose D = data = a*CA + b*CB, where a, b = ±1

Then D∙CA = a*CA∙CA + b*CB∙CA = a*6 + b*0 = 6a; we have recovered A's bit!
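
A sketch of the whole recovery (the chip values are illustrative ±1 codes chosen to satisfy the dot products above, not necessarily Table 9.1's exact codes):

    static int dot(int[] x, int[] y) {
        int sum = 0;
        for (int i = 0; i < x.length; i++) sum += x[i] * y[i];
        return sum;
    }

    static int recoverA(int a, int b) {            // a, b = ±1: A's and B's bits
        int[] CA = { 1, -1, -1, 1, -1, 1 };        // CA∙CA = 6, CA∙CB = 0
        int[] CB = { 1,  1, -1, -1, 1, 1 };
        int[] D = new int[6];                      // the summed channel signal
        for (int i = 0; i < 6; i++) D[i] = a * CA[i] + b * CB[i];
        return dot(D, CA) / 6;                     // = a; B's signal drops out
    }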