### Comp 271-400 Week 2

Crown 103, 4:15-8:15

Welcome

Bailey chapter 2, sections 2.1 and 2.2, on pre/post conditions and assertions.
Bailey chapter 5, on recursion
Bailey chapter 6, on sorting
Bailey chapter 11 section 11.2.2 on binary search

Morin chapter 11 on sorting (maybe wait on this one until after you read Bailey)

Primary text: Bailey, online, and maybe Morin, also online.

### Pre- and Post-conditions

Bailey addresses these in Chapter 2.

A simple example of a precondition is that the function Math.sqrt(double x) requires that x >= 0. The postcondition is something that is true afterwards, on the assumption that the precondition held (in this case, that the value returned is a "good" floating-point approximation to the square root of x). Note that sometimes precondition X is replaced in Java with the statement that "an exception is thrown if X is false"; this is probably best thought of as amounting to the same thing.

Note that it is up to the caller of a function to verify the precondition. Sometimes (though not always) the function verifies that the preconditions hold.
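The exception-throwing style of precondition looks like this in Java. A minimal sketch; the SafeMath wrapper class here is illustrative, not from Bailey:

```java
public class SafeMath {
    // pre: x >= 0
    // post: returns a non-negative double whose square is a good
    //       floating-point approximation to x
    public static double sqrt(double x) {
        if (x < 0) {  // the caller violated the precondition
            throw new IllegalArgumentException("sqrt requires x >= 0, got " + x);
        }
        return Math.sqrt(x);
    }
}
```

Here the function itself verifies the precondition: SafeMath.sqrt(-1.0) throws, while the plain Math.sqrt(-1.0) silently returns NaN and leaves checking to the caller.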

An invariant is a statement that is both a pre- and a postcondition: if it holds at the start, then it still holds at the end. The classic example is a loop invariant.

int sum = 0;
int n = 0;
while (n < 100) {    // invariant: sum = 1+2+...+n
    n += 1;
    sum += n;
}
// at exit: n == 100, so sum = 1+2+...+100

We're not going to obsess about these, but they're good to be familiar with. Most loop invariants are either not helpful or are hard to write down; sometimes, however, they can really help clear up what is going on.
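One way a loop invariant can earn its keep is as a run-time check, using Java's assert statement (enabled with java -ea). A sketch based on the loop above; the sumTo() wrapper is illustrative:

```java
public class SumLoop {
    // post: returns 1+2+...+limit (pre: limit >= 0)
    public static int sumTo(int limit) {
        int sum = 0;
        int n = 0;
        while (n < limit) {
            assert sum == n * (n + 1) / 2;  // invariant: sum = 1+2+...+n
            n += 1;
            sum += n;
        }
        assert sum == limit * (limit + 1) / 2;  // invariant at exit, with n == limit
        return sum;
    }
}
```

The invariant holds before the loop (sum = 0 is the empty sum), each iteration preserves it, and at exit it tells us exactly what the loop computed.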

Consider again the Ratio class. One version of the gcd() method was recursive: it calls itself. But we also had an iterative version:

// pre: a >= 0, b >= 0
int gcd(int a, int b) {
    while (a > 0 && b > 0) {
        if (a >= b) a = a % b;
        else b = b % a;
    }
    if (a == 0) return b; else return a;
}
Is there an invariant we can use here? Basically, the gcd of a and b never changes as the loop runs.

How do we write the invariant? First, we note that gcd(a,b) = gcd(a, b%a), always: any divisor of a and b is a divisor of b%a (which has the form b-ka), and any divisor of a and b%a is a divisor of b. So, when the recursive version rgcd(a,b) returns rgcd(b%a, a), that is the same value, by invariance.

A second question, though, is whether rgcd() ever returns. One way to prove this is to argue that the first parameter to rgcd() keeps getting smaller: b%a is always strictly less than a. We stop when it reaches 0, as it must. The atomic case in the recursion is the case that involves no further recursive calls; in the gcd() example it is the case when a==0.
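For reference, the recursive version described above might look like this (reconstructed from the description; the exact code in the Ratio class may differ):

```java
public class Gcd {
    // pre: a >= 0, b >= 0
    public static int rgcd(int a, int b) {
        if (a == 0) return b;     // atomic case: no further recursion
        return rgcd(b % a, a);    // gcd(a,b) = gcd(b%a, a), and b%a < a,
                                  // so the first argument strictly shrinks
    }
}
```

The two arguments to the invariant-plus-termination proof are visible right in the code: the recursive call preserves the gcd, and its first argument strictly decreases toward the atomic case.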

When we're dealing with loops we also should argue that the loop terminates. Usually this "seems" more obvious.

### Object Semantics

Remember the assignment in the Lab 1 expanding-the-array code:

elements = newelements;
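The point of that assignment is reference semantics: arrays are objects, so elements = newelements makes the field refer to the new, larger array; nothing is copied by the assignment itself. A sketch of the expand step, with the variable names from Lab 1 but an illustrative surrounding class:

```java
public class GrowableArray {
    private int[] elements = new int[4];
    private int count = 0;

    public void add(int value) {
        if (count == elements.length) {
            int[] newelements = new int[2 * elements.length]; // allocate a bigger array
            for (int i = 0; i < count; i++) {
                newelements[i] = elements[i];                 // copy the old contents
            }
            elements = newelements;  // reference assignment: the field now points
                                     // at the new array; the old one becomes garbage
        }
        elements[count++] = value;
    }

    public int get(int i) { return elements[i]; }
    public int size()     { return count; }
}
```

After the assignment, elements and newelements are two names for the same array object; the original four-slot array is unreachable and eligible for garbage collection.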

List-related examples:

#### Table of Factors

This is the example on Bailey page 88. For each k<=n we construct the list of all the factors (prime or not) of k, and ask how much space the whole table needs. This turns out to be O(n log n). The running time to construct the table varies with how clever the algorithm is: it can be O(n^2) [check all i<k for divisibility], O(n^{3/2}) [check all i<=sqrt(k)], or O(n log n) [Sieve of Eratosthenes].
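The O(n log n) construction works like the Sieve of Eratosthenes: instead of testing each k for divisors, each i adds itself to the lists of its multiples. The number i appears in n/i lists, so the total work (and space) is n/1 + n/2 + ... + n/n, which is about n log n. A minimal sketch, not Bailey's code:

```java
import java.util.ArrayList;
import java.util.List;

public class FactorTable {
    // Returns a table where entry k is the list of all factors of k, 1 <= k <= n.
    public static List<List<Integer>> build(int n) {
        List<List<Integer>> factors = new ArrayList<>();
        for (int k = 0; k <= n; k++) factors.add(new ArrayList<>());
        for (int i = 1; i <= n; i++) {
            for (int k = i; k <= n; k += i) {  // i divides i, 2i, 3i, ...: n/i entries
                factors.get(k).add(i);
            }
        }
        return factors;
    }
}
```

Each inner loop posts n/i entries, so summing over i gives the harmonic-series bound on both the running time and the size of the table.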

#### Space in a string

The answer depends on whether we're concerned with the worst case or the average case (we are almost never interested in the best case). In the average case, the answer typically depends on the probability distribution of the data.

#### More complexity

A function is said to be polynomial if it is O(n^k) for some fixed k; quadratic growth (k = 2) is a special case.
So far we've been looking mainly at running time. We can also consider space needs.

### Chapter 6: Sorting

See sorting.html#sorting

In-class lab: get some sorting times

Repeated sorting:

• insertion sort
• mergesort
• countbelow() on sorted and unsorted data

Why is mergesort faster when applied to sorted data?
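The countbelow() comparison in the lab makes the sorted-versus-unsorted point concretely: on unsorted data you must scan everything, O(n), while on sorted data binary search finds the boundary in O(log n). A sketch under assumed signatures (the lab's actual method names and types may differ):

```java
public class CountBelow {
    // Works on any int[]: O(n) linear scan.
    public static int countBelowLinear(int[] a, int x) {
        int count = 0;
        for (int v : a) if (v < x) count++;
        return count;
    }

    // pre: a is sorted ascending. O(log n): binary search for the first
    // index whose element is >= x; that index is exactly the count.
    public static int countBelowSorted(int[] a, int x) {
        int lo = 0, hi = a.length;      // answer lies in [lo, hi]
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (a[mid] < x) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }
}
```

Timing both versions on the same large sorted array is a good way to see the O(n) versus O(log n) gap directly.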

### Recursion

Recursion starts at Bailey page 94
See recursion.html