Comp 346/488: Intro to Telecommunications

Tuesdays 4:15-6:45, Lewis Towers 412

Class 3, Jan 31

Reading (7th -- 9th editions)
10.1, 10.2, 10.3
Chapter 1: 1.3, 1.4
Chapter 2, 2.1-2.6, especially 2.3 on TCP/IP

Chapter 3 readings (7th-9th editions):
    3.1: Concepts
    3.2: Analog & Digital
    3.3: Transmission impairments
    3.4: Channel Capacity


Ekiga

From ekiga.org: "GnomeMeeting has been renamed Ekiga, to be pronounced [i k ai g a]."

Demo of the Ekiga softphone. Note the substantial echo (I got ~550 ms).

Why is the port 5061 rather than 5060?


Asterisk demos



Program set 1: Oscillator and Mixer



Audacity demo

The audacity package lets you view waveforms (and also do some sound editing).

UDP

Let's look at a tcpdump trace of a phone call using Wireshark (cisphone.pcap).
Now look at a tcpdump trace of a call from the perspective of ulam2 (flowtrace_outbound.pcap).
Other traces: flowtrace_inbound.pcap and star67.pcap.



Chapter 3:    Today we will continue learning about sin

Sine waves

(abbreviated)
 
frequency-domain concepts
 
Fact: physical transmission media act on sine waves in a natural way, due to electrical properties such as capacitance. This action differs at different frequencies (eg high-frequency attenuation or distortion).
 
 
Discussion of benefits of Fourier analysis and the frequency-domain view: what if distortion is linear, but frequency-dependent? What if we want to understand bandpass filtering?
 

Fourier.xls

I've created a simple spreadsheet, fourier.xls, that can be used to create a graph of the result of the first 10 terms of any Fourier series.
Examples:


Time domain and Frequency domain

You can take any signal, periodic or not, and apply the Fourier transform to get a frequency-domain representation. This can be messy, and is not particularly deep, or even helpful: having a signal composed of complex sine waves of continuously varying frequency can be intractable.

Most voice signals are not periodic at larger scales: people say different words.

What is useful, though, is the idea that over small-but-not-too-small intervals, say 10-100ms, voice does indeed look like a set of periodic waves. At this point, the frequency-domain view becomes very useful, because it is just a set of discrete frequencies.

In other words, "sound is made up of vibrations".

Maybe even more importantly, most digital data signals similarly look "almost periodic", often over much larger intervals than voice. So the same principle applies: translation to the frequency domain is actually useful.

When we start looking at the details of audio highpass, lowpass, and bandpass filters (eg for audio), we'll see that they involve averaging over a time interval. If that time interval is too large, the signal may no longer look very periodic and we may get misleading or distorted results. If the time interval is too small, the filter may not be "sharp" enough.



How do we get Fourier series?

We will assume that the waveform data has been digitized and is available as an array of values A[n].

One way to think of the Fourier series is as a set of averaging operations: the sequence A[n] is "averaged against" a sine (or cosine) wave sin(2πFn), where F is the frequency in units of the sampling rate. (More specifically, the frequency in Hz is f = FR, where R is the sampling rate; equivalently, F = f/R. For a 400-Hz tone sampled at R = 8000 samples/sec, F = 1/20, meaning that 20 samples make up one full sine wave.)

The idea behind "averaging against" is that we're taking a weighted average of the A[n], eg 1/4 A[1] + 2/3 A[2] + 1/12 A[3]. However, with sin(n) involved, some of the weights can be negative.

Averaging two waveforms, in two arrays A and B, amounts to getting the following sum:
    double sum = 0.0;
    for (i = 0; i < whatever; i++) sum += A[i]*B[i];

"whatever" might be A.length, or min(A.length, B.length). We might also "shift" one of the arrays, so the sum is
    sum += A[i]*B[shift+i];
In some cases it is natural to reverse the direction in B:
    sum += A[i]*B[shift-i];
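As a concrete sketch of these sums, here is a small Java version (my own illustration; the class and method names are hypothetical):

```java
public class WaveAvg {
    // Sum of pointwise products A[i]*B[shift+i], skipping any index
    // that falls outside B; this is the basic "averaging" sum.
    public static double dotShift(double[] A, double[] B, int shift) {
        double sum = 0.0;
        for (int i = 0; i < A.length; i++) {
            int j = shift + i;
            if (j >= 0 && j < B.length) sum += A[i] * B[j];
        }
        return sum;
    }
}
```

The bounds check handles the "whatever" question above: out-of-range products are simply dropped.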

If we're averaging against a sine wave of frequency f (so the waveform is sin(2πfn)), we might have something like this:
    for (n=0; n<len; n++) sum += A[n]*sin(2*Pi*f*n);
    avg = sum / len;

Digitized sound makes these sums easy to do. Sometimes it's unclear what you divide by (eg len, above; sometimes it's actually the time in seconds), but if we divide by the wrong thing we're usually just off by a constant.
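The loop above can be packaged as a runnable Java method (a sketch of my own; the names are hypothetical, and F is the frequency in units of the sampling rate):

```java
public class SineAvg {
    // Average the samples A[n] against sin(2*pi*F*n), where F is the
    // frequency in units of the sampling rate (F = f/R for f in Hz).
    // Dividing by A.length is one reasonable normalization.
    public static double avgAgainstSine(double[] A, double F) {
        double sum = 0.0;
        for (int n = 0; n < A.length; n++)
            sum += A[n] * Math.sin(2 * Math.PI * F * n);
        return sum / A.length;
    }
}
```

If A itself holds sin(2πFn) over a whole number of periods, this returns about 1/2; against a different frequency it returns about 0.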

Example 1: two sine waves of different frequency average to zero; phase does not matter

Two of the same frequency (and phase) average to 1/2, the average value of sin²(x). This is a nonzero constant.
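Both facts can be checked numerically. Here is a hypothetical sketch that averages sin(2πF1·n + phase) against sin(2πF2·n) over N samples:

```java
public class Orthogonality {
    // Numerically average sin(2*pi*F1*n + phase) against sin(2*pi*F2*n)
    // over N samples; F1, F2 are frequencies in units of the sampling rate.
    public static double avg(double F1, double F2, double phase, int N) {
        double sum = 0.0;
        for (int n = 0; n < N; n++)
            sum += Math.sin(2*Math.PI*F1*n + phase) * Math.sin(2*Math.PI*F2*n);
        return sum / N;
    }
}
```

With different frequencies the result is about 0 regardless of phase; with matching frequency and phase it is about 1/2.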

Assume the signal g(x) is periodic, is composed of sines, and has base angular frequency 1 (that is, period 2π). We know g(x) must have the following form; we just need to find A1, A2, etc:

    g(x) = A1 sin(x) + A2 sin(2x) + A3 sin(3x) + ...

To do this, we start by averaging both sides with sin(x):

Average(g(x), sin(x)) = A1 * average(sin(x), sin(x)) + A2 * average(sin(2x), sin(x)) + A3 * average(sin(3x), sin(x)) + ...
                            = A1 * 1/2
since all the cross-frequency averages are zero, by Example 1.

From this we can readily calculate A1, and by averaging with sin(2x) get A2, etc.
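The whole procedure can be sketched in Java (my own illustration, hypothetical names): sample g(x) at N evenly spaced points over one period, then recover each Ak as twice the average against sin(kx), since the average of sin²(kx) is 1/2.

```java
public class FourierCoef {
    // Estimate the coefficient A_k of sin(k*x) in a signal sampled at
    // g[n] = g(2*pi*n/N), n = 0..N-1, over one full period:
    // A_k = 2 * average(g(x) * sin(k*x)).
    public static double coef(double[] g, int k) {
        double sum = 0.0;
        int N = g.length;
        for (int n = 0; n < N; n++)
            sum += g[n] * Math.sin(k * 2 * Math.PI * n / N);
        return 2 * sum / N;
    }
}
```

For example, sampling g(x) = 3 sin(x) + 2 sin(3x) recovers A1 ≈ 3, A2 ≈ 0, A3 ≈ 2.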

In the above averages, we calculate one fixed sum (eg for (i=0; i<max; i++) sum += g(i)*sin(i)). There is another kind of averaging, where we repeatedly "shift" one of the two functions involved, to get a new sequence / waveform / function; this is the idea behind moving averages. Things don't necessarily have to be periodic here.

Here's a simple example, defining a new sequence B[n] from existing sequence A[n]. If A[n] is defined for n from 0 to 1000, inclusive, then B[n] will be defined for n from 1 to 999.

    B[n] = (A[n-1] + A[n] + A[n+1])/3

We can think of this as a weighted average of A[] with (1/3, 1/3, 1/3). The point here is that we get a new sequence, B[n], instead of a single value like A1 above. However, the actual calculation is the same: find the sum of a bunch of A[n]*f(n) products.
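A minimal Java sketch of this three-point moving average (hypothetical names):

```java
public class MovingAvg {
    // Three-point moving average with weights (1/3, 1/3, 1/3).
    // B is two entries shorter than A, matching the example above:
    // if A is defined for 0..1000, B covers 1..999.
    public static double[] smooth(double[] A) {
        double[] B = new double[A.length - 2];
        for (int n = 1; n < A.length - 1; n++)
            B[n - 1] = (A[n-1] + A[n] + A[n+1]) / 3.0;
        return B;
    }
}
```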

Any linear transformation of the input signal can be expressed this way, though some may need lots and lots of coefficients and finding the coefficients might in some cases be difficult.

However, there are some shortcuts to the find-the-coefficients problem. If we know what we want in the frequency domain, we can apply some of the math above to find the corresponding coefficients for a corresponding moving average in the time domain.  For example, suppose we want to create a "lowpass filter", to cut off the higher frequency signals. This is an easy point-by-point thing to express, in the frequency domain:

(1, 1, 1, ...., 1, 0, 0, 0, ...)
                ^cutoff

Apply this to a sequence of frequency amplitudes (a1, a2, a3, a4, a5, a6, a7, a8) (representing respective amplitudes, say, of sin(x), sin(2x), ..., sin(8x)). Assume the cutoff is at position 4, and multiply corresponding values. We get
    (1*a1, 1*a2, 1*a3, 1*a4, 0*a5, 0*a6, 0*a7, 0*a8)
    = (a1, a2, a3, a4, 0, 0, 0, 0)
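In code, this point-by-point mask might be applied like so (a hypothetical sketch; amps[k] holds the amplitude of sin((k+1)x)):

```java
public class Lowpass {
    // Zero out frequency amplitudes at or beyond the cutoff index,
    // mimicking the (1, 1, ..., 1, 0, 0, ...) mask above.
    public static double[] cutoff(double[] amps, int cut) {
        double[] out = new double[amps.length];
        for (int k = 0; k < amps.length; k++)
            out[k] = (k < cut) ? amps[k] : 0.0;
        return out;
    }
}
```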

Point-by-point multiplication in the frequency domain corresponds to a specific moving average in the time domain, and we can calculate that in a relatively straightforward way (although it does involve some calculus).

In this particular case of a frequency-domain cutoff, above, it turns out that the time-domain moving average we need is the "sinc" function (with a "c", pronounced "sink"):

sinc(x) = sin(x)/x    (sinc(0) = 1)
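A direct Java translation, with the x = 0 case handled explicitly (hypothetical class name):

```java
public class Sinc {
    // sinc(x) = sin(x)/x, with sinc(0) = 1 (the limiting value,
    // since sin(x)/x -> 1 as x -> 0).
    public static double sinc(double x) {
        if (x == 0.0) return 1.0;
        return Math.sin(x) / x;
    }
}
```

Note that sinc(x) = 0 at every nonzero multiple of π, which is what makes it useful as a set of moving-average coefficients.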