Lists
The basic List interface resembles an array, in that one can access and
update elements by specifying their position. The underlying
implementation, however, may not be via an array (for example, linked
lists), in which case accessing the Nth element may not be as fast as
accessing a corresponding array element.
Most list interfaces also support some mechanism of automatic size
expansion to accommodate new additions, reducing the risk of array-bounds
overflow. However, attempting to access an element (by position) that is
not there is always an error: retrieving item 100 of a list that contains
items 0 through 37 can never end well.
Java List<T> and ArrayList<T>
Java supports the ArrayList class, and the List interface. We'll only
touch on interfaces here.
To declare an ArrayList of strings, use
import java.util.*;    // or import java.util.ArrayList;
...
ArrayList<String> L = new ArrayList<String>();
The constructor call is the right-hand side; here it takes no parameters.
The <String> is the "type parameter". ArrayList is a generic class, as are
most data-structure classes.
The list L created above is initially empty. We use L.add(str) to add
entries:
L.add("apple");
L.add("banana");
L.add("cherry");
The current size of the list is available as L.size(); this would return
3 for the list above. The elements are at positions 0, 1 and 2.
We can now set and retrieve elements with L.set(i,val) and L.get(i),
corresponding to A[i]=val and A[i], for an array A. In the above, L.get(0)
returns "apple".
The capacity of the list represents the underlying memory allocation, which,
for an ArrayList, is an array. It is 10 by default, though we could also
have passed a numeric parameter to the constructor:
new ArrayList<String>(100).
Things get interesting when you keep using .add to insert one more element
than there is room for. A new internal array is allocated, and the contents
of the old array are copied over. The .add() operation then continues as if
nothing unusual had happened.
We can use .addAll() to add an entire collection's worth of elements at
once. We can also use .add(pos, val) to insert val at position pos, rather
than at the end of the list. In this case, the value that was at position
pos is moved to position pos+1, and so on; nothing is overwritten. Other
useful ArrayList operations:
- L.contains(val): returns true if val occurs in L
- L.indexOf(val): returns the position of the first occurrence of val in L,
  or -1 if val is not there
- L.remove(n): removes the value at position n (for int n)
- L.remove(val): finds the first occurrence of val and removes it
- L.subList(from, to): returns a list (in Java, a view backed by L) of
  L[from], L[from+1], ..., L[to-1]
- L.toArray(): returns an array, with no excess capacity
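The operations above can be exercised in a short sketch (the class name ArrayListOps is just for this demo):

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayListOps {
    public static void main(String[] args) {
        ArrayList<String> L = new ArrayList<String>();
        L.add("apple"); L.add("banana"); L.add("cherry");
        System.out.println(L.contains("banana"));   // true
        System.out.println(L.indexOf("cherry"));    // 2
        L.add(1, "apricot");          // shifts banana and cherry right
        System.out.println(L);        // [apple, apricot, banana, cherry]
        L.remove("apricot");          // remove by value
        L.remove(0);                  // remove by position: drops "apple"
        List<String> sub = L.subList(0, 1);
        System.out.println(sub);      // [banana]
    }
}
```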
Demo: arraylisttest
As a general rule, ArrayLists are best when you're not sure of the list
length at the start of the program. Arrays work best when you do
know the array size upfront.
Vector
Chapter 3 contains Bailey's Vector class, which represents a List of
Objects implemented via an array. The main feature is that the vector can
grow.
We'll start with the Vector interface on page 46.
Bailey's examples:
Vector used for Wordlist
Vector used for L-systems
There is an issue with Vectors of Objects: we don't really want Objects.
Look closely at the code on page 47; it contains a cast to (String):
targetWord = (String) list.get(index);
This takes the Object returned by list.get(index) and, because it is really
a String, allows the compiler to treat it as a String; the cast itself is
checked at run time.
In Java we will usually use generics, eg Vector<T>.
In the word-frequency example on page 49 (actually starting on p 48)
there is
wordInfo = (Association) vocab.get(i);
and
vocabWord = (String) wordInfo.getKey();
Adding to the middle of a vector: you need to move from right to left.
See the picture on Bailey p 52.
Vector Growth Demo
C# version
We will use the command-line version. The List<T> class is documented
at http://msdn.microsoft.com/en-us/library/6sh2ey19%28v=vs.110%29.aspx.
The demo program is listgrowth.cs.
We first create the list.
List<String> s = new List<String>();
We can examine s.Count to get the current length of the list, and s.Capacity
to get the current length of the internal array in object s. Initially, both
are 0.
If we add an item: s.Add("apple"), then s.Count = 1 and s.Capacity = 4.
If we add three more items, both s.Count and s.Capacity are 4.
If we then execute s.Add("banana"), s.Count is now 5 and s.Capacity is now
8.
If we then execute s.Add("cherry"), s.Capacity becomes 16.
Java version
The demo program is in arraylisttest. Create an empty arraylisttest object
on the Object Bench. We can then call:
- addOne(String s): to add one more string
- addSome(int count, String s): to add multiple copies of the string
- addArray(int count, String s): this works like addSome(count,s), but
it puts all the new strings into a new ArrayList, and then adds them all
at once to the original list.
Try:
- Use the inspector to examine the result of addOne("apple"). Inspect
theList. What is the size of elementData? Where is elementData, by the
way? Where did it come from?
- Same after addOne("apple")
- Same after addSome(9,"banana"). Why 9?
- Same after addOne("cherimoya")
Keep going until you can guess the expansion pattern for elementData.
§3.5: analysis of costs of expansion
- The number of moves to
insert at a random place in the middle of a list of length N is, on
average, N/2.
- The number of compares to
search a list of length N for a random element that is in fact present
is, on average, N/2.
- The number of compares to search a list of length N for an element not there is simply N.
These costs are all linear; that
is, proportional to N.
Now suppose we want to insert N items into a list initially of length 0,
perhaps searching the list each time in order to insert in alphabetical
order. Each item's required position is more-or-less random, and so takes on
average size()/2 moves. That is, to insert the 1st element takes 0/2 moves,
the 2nd takes 1/2, the 3rd takes 2/2, the 4th takes 3/2, the 5th takes 4/2,
etc. Adding all these up gives us a total "average" number of moves of
1/2 + 2/2 + 3/2 + ... + (N-1)/2 = (1/2)(1 + 2 + 3 + ... + (N-1))
= (1/2)(N(N-1)/2) = N²/4 - N/4
Now, for large N this is approximately N²/4, that is, proportional to N², or
quadratic. Using the big-O notation, later, this is equivalent to saying the
number of moves is O(N²).
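A quick numeric check of this sum (the choice N = 1000 is arbitrary):

```java
public class InsertCost {
    public static void main(String[] args) {
        int N = 1000;
        double total = 0;
        // inserting into i existing elements takes i/2 moves on average
        for (int i = 0; i < N; i++) total += i / 2.0;
        System.out.println(total);               // 249750.0
        System.out.println(N * (N - 1) / 4.0);   // N^2/4 - N/4 = 249750.0
    }
}
```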
Search and insert costs
These operations both take, on average, currsize/2 steps, or N/2 if we
follow the convention that N represents the current size. For search, we
have to check on average half the list; the steps we are counting are the
comparisons. For insert, the operations are the assignments
things[i+1]=things[i].
Both these can be described as O(N).
Generics
We would really like to be able to declare containers with a fixed type,
where the type is supplied as parameter:
List<String> s = new List<String>();
If we use Bailey's Vector class, we would have pretty much the same
performance, but getting strings out of the Vector would always
require a cast:
String s = (String) vect.get(3);
Stack
A stack is a data structure that supports push() and pop() operations. A
stack looks like a list except there is no direct way to access anything but
the topmost element; you cannot even do that except by also deleting that
element from the stack. The basic operations are
- s.push(A): adds data value A to the stack
- s.pop(): returns the most recently pushed value, and deletes it from
the stack
The Stack class, with specific methods push(), pop(), and isEmpty(), is sort
of the canonical example of an Abstract Data Type, that is, a class where
the focus is on representing a "thing" (as is usually the case). A stack can
be implemented as an array, but we
have no access to the top element except through pop(), and no access to the
middle elements at all. Alternatively, we could change the implementation to
that of, say, "linked list", and the class users would be unaffected. Note
that push() and pop() do not simply perform individual field updates.
Finding something to do with the
stack is harder; why would you need that very specific last-in, first-out
(LIFO) access? There are lots of examples from system design and
programming-language design, but they tend not to be trivial. One
straightforward example is to confirm that a line consisting of ()[]{} has
all the braces in balance. The algorithm is as follows:
if you encounter an opening symbol, (, [, or {, push it.
if you encounter a closing symbol, ), ], or }, pop what
is on the stack and verify the two correspond.
when you get to the end of the input, verify that the
stack is empty.
Note that generally popping something off an empty stack is an error, so
that you should check with isEmpty().
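The algorithm above can be sketched in Java, here using java.util.ArrayDeque as the stack (the class and method names are my own):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class Balance {
    // returns true if every closer matches the most recently pushed opener
    static boolean balanced(String line) {
        Deque<Character> stack = new ArrayDeque<Character>();
        for (char c : line.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);
            } else if (c == ')' || c == ']' || c == '}') {
                if (stack.isEmpty()) return false;   // nothing to pop: error
                char open = stack.pop();
                if ((c == ')' && open != '(') ||
                    (c == ']' && open != '[') ||
                    (c == '}' && open != '{')) return false;
            }
        }
        return stack.isEmpty();   // leftover openers also mean imbalance
    }
}
```

For example, balanced("([]{})") is true while balanced("([)]") is false.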
Implementing a stack
Here's a stack of strings, in C#:
class Stack {
private List<string> L = new List<string>();
public void push(string s) {L.Add(s);}
public string pop() {string s = L[L.Count - 1]; L.RemoveAt(L.Count - 1); return s;}
public bool is_empty() {return L.Count == 0;}
}
In terms of generic list operations:
push(e) corresponds to L.add(e),
is_empty() corresponds to L.size() == 0,
pop() corresponds to {e = L.get(L.size()-1); L.remove(L.size()-1); return e;}
Deleting from a list
If all we do is add, then the growth strategy of doubling the internal
space when necessary makes perfect sense.
But what happens if we will regularly grow lists to large size, and then
delete most of the entries? A list grown to have internal
capacity 1024 will retain that forever, even if we shrink down to just a
few elements.
One approach is to re-allocate to a smaller elements[] whenever
L.Count < L.Capacity/2, or something like that.
Morin in §2.6 (p 49) introduces what he calls a RootishArrayStack, which
is an array-based list with an efficient delete
operation. Here are the key facts (big-O notation is officially introduced
in the next section):
- The space used for n elements is n + O(√n)
- For any m add/remove operations, the time spent growing and shrinking
is O(m)
The idea is to keep a list of arrays (an array of pointers to arrays).
These sub-arrays have size 1, 2, 3, 4, etc respectively. For 10 elements,
the RootishArrayStack would have four arrays, and thus a capacity of
1+2+3+4.
If the RootishArrayStack has N elements in n arrays, then N ≃ n²/2; this
follows because 1+2+...+n ≃ n²/2. When the RootishArrayStack needs to
expand, it will add an array of size n+1 to the pool; this is about √(2N).
Thus, growth is "slower" than for C# Lists or Java ArrayLists. However,
when a new allocation is made for growth, the old space is not discarded.
The real advantage of the RootishArrayStack is for deletions. If the list
shrinks so that the last sub-array is now empty, that sub-array and that
sub-array only is discarded. This is a relatively efficient operation.
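The block arithmetic can be sketched as follows. Morin computes the block index in O(1) with the quadratic formula; the loop below just shows the idea, and the names are illustrative:

```java
public class Rootish {
    // elements that fit in the first b blocks: 1 + 2 + ... + b = b(b+1)/2
    static int capacity(int b) { return b * (b + 1) / 2; }

    // index of the block holding element i: smallest b with capacity(b+1) > i
    static int blockOf(int i) {
        int b = 0;
        while (capacity(b + 1) <= i) b++;
        return b;   // within the block, element i sits at offset i - capacity(b)
    }

    public static void main(String[] args) {
        System.out.println(capacity(4));  // 10: four blocks hold 1+2+3+4 elements
        System.out.println(blockOf(9));   // 3: indices 6..9 live in block 3
    }
}
```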
Big-O notation and Bailey Chapter 5:
Analysis
We will need to be able to talk about runtime costs. To this end, the big-O
and (to a lesser extent) little-o notations are useful. If N is the size of
the data structure, and f(N) is a growth function (like f(N) = log(N) or
f(N) = N or f(N) = N²), then we say that a cost is O(f(N)) provided the
number of steps is bounded by k×f(N), for some constant k, as N grows large;
equivalently, cost(N) is O(f(N)) if cost(N)/f(N) is bounded by a constant k
as N grows large. We say that a cost is o(f(N)) if cost(N)/f(N) → 0 as N
grows large.
See Figure 5.1 on page 83.
Here are a few examples for an array-based Vector (as in Bailey) of length N:
operation                           | cost | provisos and notes
Inserting at the end of a Vector    | O(1) | if no expansion is necessary
Inserting in the middle of a Vector | O(N) | N/2 moves on average
Searching a Vector                  | O(N) | N/2 comparisons on average if found
Inserting and searching are both linear.
Adding an element to a SetVector takes O(n) comparisons, because we have to
make sure it isn't already there.
As computed in the expansion analysis above, inserting N items one at a
time into a Vector initially of length 0 (perhaps searching the list each
time in order to insert in alphabetical order) takes on average about N²/4
moves, that is, O(N²), or quadratic.
Building a list up by inserting each element at the front (or inserting each
element at random) is O(n²). (This is the last example on Bailey page 87.)
Taking the union or intersection of two sets is O(n²). (Why? Is there a
faster way?)
Finding if a number n is prime by checking every k < sqrt(n) is O(n^(1/2)).
How hard is it to find the minimum of an array of length N? O(N)
How hard is it to find the median of an array of length N?
Somewhat surprisingly, this can also be done in O(N) time. See sorting.html#median.
See sorting.html#binsearch for an
analysis of binary search
A function is said to be polynomial if it is O(n^k) for some fixed k;
quadratic growth is a special case.
So far we've been looking mainly at running time. We can also consider
space needs. As an example, see the Table of Factors
example on Bailey page 88. Let us construct a table with a row for each
k<=n, listing all the factors (prime or not) of k, and ask how much space
is needed. This turns out to be n log n. The argument here is a bit
mathematical; see Bailey. If the table length is n, then factor f can
appear no more than n/f times (once every fth row).
The running time to construct the table varies with how clever the
algorithm is; it can be
- O(n²) [for each k, check all i<k for divisibility]
- O(n^(3/2)) [for each k, check all i<sqrt(k)]
- O(n log n) [Sieve of Eratosthenes]
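The O(n log n) construction is sieve-style: instead of factoring each k, each candidate factor f marks all of its multiples. A sketch (the class name is mine):

```java
import java.util.ArrayList;
import java.util.List;

public class FactorTable {
    // factor f is appended to rows f, 2f, 3f, ..., about n/f rows in all,
    // so total work and space is n(1 + 1/2 + 1/3 + ... + 1/n) ~ n log n
    static List<List<Integer>> factors(int n) {
        List<List<Integer>> table = new ArrayList<List<Integer>>();
        for (int k = 0; k <= n; k++) table.add(new ArrayList<Integer>());
        for (int f = 1; f <= n; f++)
            for (int k = f; k <= n; k += f)
                table.get(k).add(f);
        return table;
    }

    public static void main(String[] args) {
        System.out.println(factors(12).get(12));  // [1, 2, 3, 4, 6, 12]
    }
}
```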
Now suppose we want to search a large string for a specific character.
How long should this take? Bailey has an example on p 90. The answer
depends on whether we're concerned with the worst case or the average case
(we are almost never interested in the best case). If the average case,
then the answer typically depends on the probability distribution of the
data.
Linked Lists
The standard "other" way of implementing a list is to build it out of cells,
where each cell contains a pointer to the next item. See
- Bailey, chapter 9 section 4
- Morin, Chapter 3 (p 63) (we will look at singly-linked lists, or
Morin's SLList)
Linked lists are very efficient in terms of time to allocate and de-allocate
space. Insertion is O(1). Finding an element is O(n), however, even if the
list is sorted; there is no fast binary search.
Each linked-list block contains two pointers, one for data and one for the
link. That's a 2× space overhead. For array-based lists, that would
correspond to having each list have a Capacity that was double its Count.
That's not necessarily bad, but the point is that linked lists have limited
space efficiency. (They may be quite efficient in terms of
allocation time, though; each block allocated amounts to one list cell, and
if many linked lists are growing and shrinking then the allocator can in
effect just trade cells back and forth. With array-based lists, however, if
two lists have just deleted blocks of size 256 and a third list now needs a
block of size 512, the deleted blocks cannot be recycled into the new block
unless they just happen to be adjacent.)
Here is some code from the demo file Tlister.java:
class TLinkedList<T> {
private T data;
private TLinkedList<T> next;
public TLinkedList(T d, TLinkedList<T> n) {data=d; next=n;}
public T first() {return data;}
public TLinkedList<T> rest() {return next;}
}
The interface is peculiar here; ignore that for now.
A program that uses this might be:
public static void main(String[] args) {
TLinkedList<String> slist = new TLinkedList<String>("apple", null);
slist = new TLinkedList<String>("banana", slist);
slist = new TLinkedList<String>("cherry", slist);
slist = new TLinkedList<String>("daikon", slist);
slist = new TLinkedList<String>("eggplant", slist);
slist = new TLinkedList<String>("fig", slist);
TLinkedList<String> p = slist;
while (p != null) {
System.out.println(p.first());
p = p.rest();
}
}
This is not exactly what we want: too many internals are exposed.
A more contained implementation would be as follows:
class TLinkedList<T> {
class Cell<T> {
private T data;
private Cell<T> next;
public Cell(T d, Cell<T> n) {data=d; next=n;}
public T first() {return data;}
public Cell<T> rest() {return next;}
}
private Cell<T> head = null;
public void AddToFront(T element) {head = new Cell<T>(element, head);}
public boolean is_empty() {return head == null;}
public T First() {return head.first();}
public void DelFromFront() {head = head.rest();}
}
A slightly more complete Cell class is the following (the setData and setNext methods are new):
public class Cell<T> {
private T data_;
private Cell<T> next_;
public Cell(T s, Cell<T> n) {data_ = s; next_ = n;}
public T data() {return data_;}
public Cell<T> next() {return next_;}
public void setData(T s) {data_ = s;}
public void setNext(Cell<T> c) {next_ = c;}
}
Implementing a stack using a linked list
push(e) corresponds to AddToFront(e), is_empty() corresponds to head ==
null.
pop() corresponds to ...
Implementing a set
In section 3.7 Bailey uses vectors/Mylists to implement
an abstract Set. Note the more limited set of operations; there is no get()
and no set().
add() now works very differently: add(E e) is basically if
(!contains(e) ) add(e), where the second add(e) is Vector.add(e).
On the face of it, to form the union of two sets A and B of size N, we need
N² equality comparisons: each element of A has to be compared with each
element of B to determine if it is already there. This cost is sometimes
said to be O(N²) if we don't care if it's N², or N²/2, or 3N².
Later we'll make this faster with hashing.
Brief summary: choose a relatively large M, maybe quite a bit larger than N.
Define h(obj) = hashCode(obj) % M. Now choose a big array ht (for hash
table) of size M, initially all nulls. For each a in A, do something with
ht[hash(a)] to mark the table. Then, for each b in B, if ht[hash(b)] is
still null, put it in; it's not a duplicate! If ht[hash(b)] is
there already, then we have to check "the long way", but in general we save
a great deal.
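A minimal sketch of this union strategy, assuming String elements and a one-slot-per-bucket marking table; the names and the choice of M are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class HashUnion {
    static List<String> union(List<String> A, List<String> B) {
        int M = 4 * (A.size() + B.size()) + 1;   // table comfortably larger than N
        String[] ht = new String[M];
        List<String> result = new ArrayList<String>(A);
        for (String a : A) ht[Math.floorMod(a.hashCode(), M)] = a;  // mark the table
        for (String b : B) {
            int h = Math.floorMod(b.hashCode(), M);
            if (ht[h] == null) {        // slot never touched: b is not a duplicate
                result.add(b);
                ht[h] = b;
            } else if (!result.contains(b)) {
                result.add(b);          // collision: check "the long way"
            }
        }
        return result;
    }
}
```

Equal strings always share a hash code, so an empty slot really does guarantee that b is new; only collisions fall back to the O(N) linear check.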
Is there an intersect option?
Java LinkedList
Java has a LinkedList<T> class. It works like ArrayList, except it uses a
linked list internally. That makes lookup of an arbitrary element O(N), but
insertion (once you've found the position) is now O(1).
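A small demonstration of the trade-off:

```java
import java.util.LinkedList;

public class LinkedListDemo {
    public static void main(String[] args) {
        LinkedList<String> L = new LinkedList<String>();
        L.add("banana");
        L.addFirst("apple");    // O(1): just relink the head
        L.addLast("cherry");    // O(1): LinkedList also keeps a tail pointer
        System.out.println(L.get(1));   // banana -- but positional get() is O(N)
    }
}
```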
Using List<T> to implement a Matrix class in C#
Suppose we want to construct a two-dimensional object, Matrix. Values in the
Matrix will have type double.
The class should have the following operations:
- Matrix(int height, int width)
- getWidth()
- getHeight()
- get(int row, int col)
- set(int row, int col, double val)
How should we proceed?
Here's a simpler problem: how should we implement a Vector<T>
class, where vector objects have a fixed length, and are initialized to 0?
C# takes care of the zero-initialization, but a List<T> does not
automatically have the right length. Also, ideally we'd like to "hide" the
Add() operation, which can make a List<T> grow longer than we'd like.
class Vector {
public Vector(int l) {...}
public int getlength() {...}
public double get(int i) {...}
public void set(int i, double val) {...}
}
Now let's return to the Matrix class. As in Vector, we will pre-allocate
space for all the elements. Here is some simple code to implement a matrix
class with TList objects (not yet converted to Java).
/**
* Class Matrix is implemented by a TList of rows.
*/
class Matrix {
// instance variables - replace the example below with your own
private TList<TList<double> > m;
// list of lists
private int height, width;
/**
* Constructor for objects of class TList
*/
public Matrix(int h, int w)
{
// initialize instance variables
height = h;
width = w;
m = new TList<TList<double>>(height);
// we must preallocate all the rows
for (int i = 0; i<height; i++) {
TList<double> theRow = new TList<double>(width);
theRow.Fill(0.0);
// we must preallocate all the slots (columns) in each row
m.Add(theRow);
}
}
public int getwidth() {return width;}
public int getheight() {return height;}
// get nth value, with range check
public double get(int r, int c) {
if (r<0 || r >= height) {
Console.WriteLine("Warning: Matrix.get() called with out-of-range row = " + r);
return 0.0;
}
if (c<0 || c >= width) {
Console.WriteLine("Warning: Matrix.get() called with out-of-range column = " + c);
return 0.0;
}
return m.get(r).get(c);
}
// set nth value, with range check
public void set(int r, int c, double val) {
if (r<0 || r >= height) {
Console.WriteLine("Warning: Matrix.set() called with out-of-range row = " + r);
return;
}
if (c<0 || c >= width) {
Console.WriteLine("Warning: Matrix.set() called with out-of-range column = " + c);
return;
}
m.get(r).set(c,val);
}
}
Things to note:
- We're using TList to build a 2-D structure.
- There's no analogue to TList.add(E e); we have to add entire rows or
columns or else the Matrix will no longer be neatly rectangular. Note
that we add new rows and columns "empty", that is, populated with nulls.
(In the code above, there is no way to add a new row or column.)
- Because the generic class uses TLists, not arrays, we don't have any
problem using the element type E directly throughout. When we created
the Vector and MyList classes, we had that annoying need to use Object
when creating arrays even when we wanted EltType.
- Matrix.print(int fieldwidth) is a handy way of generating output. Note
the parameter. Because of the parameter, making this into ToString() is
tricky. (Not shown above.)
- How do we know all the rows are the same length?
Linked List Efficiency
What good are linked lists? Inserting in the middle is fast, but finding
a point in the middle is slow. So almost everything is O(n).
But inserting at the head is always fast.
Also, linked lists use memory efficiently if you have a great many shorter
lists. While the next_ fields require space, there are no "empty" slots as
in an array-based stack. And no memory wasted due to list expansion.
These are singly linked lists; a doubly linked
list has a pointer prev_ as well as next_, that points to the
previous element in the chain.
Stacks and Linked Lists
While the array implementation of a stack is quite fast, the linked list
approach is equally straightforward. All we have to do is maintain a
pointer to the head:
class stack<T> {
private Cell<T> head_;
public boolean is_empty() {return (head_ == null);}
public T pop() {T val = head_.data(); head_ = head_.next(); return val;}
public void push(T val) {head_ = new Cell<T>(val, head_);}
}
What would we need to do in C++ if we wanted to be sure we deleted a
popped cell?
Sorting Linked Lists
How would you sort a linked list? QuickSort is out, as it depends on random
access; mergesort, however, adapts naturally.
Hashing
"When in doubt, use a hash table"
- Brian Fitzpatrick, Google engineering manager and former Loyola
undergrad
One way to search through a large number of values is to create a hash
function hash(T) that returns an integer in the range 0..hmax-1.
Then, given a data value d, we calculate h = hash(d)
and then put d into "bucket" h. A convenient way to do this is to have an
array htable of lists, and add d to the list htable[h]. This particular
technique is sometimes called "bucket hashing" or "chain hashing"; see
Bailey 15.4.2.
Linked lists are particularly convenient for representing the buckets, as we
will have a relatively large number of them, and most will be small.
However, array-based lists can also be used.
What shall we use as a hash function? This comes up often, and a great
number of standard data structures rely on having something available.
Therefore, Java provides every object with a hashCode() method. It returns a
32-bit value.
Demo: what are hashcodes of
- int values
- "d"
- "A"
- " "
- "2"
On my system, for a two-character string hashCode() returns
31*first_char + second_char, where the values first_char and second_char are
the ascii numeric values. So, for string "db", where 'd' is 100 and 'b' is
98, hashCode() returns 3198. See class HashCodes in demo hash.
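This is easy to check directly; Java specifies String.hashCode() as s[0]·31^(n-1) + s[1]·31^(n-2) + ... + s[n-1], so it is the same on every system:

```java
public class HashCodeCheck {
    public static void main(String[] args) {
        System.out.println("db".hashCode());   // 3198
        System.out.println(31 * 'd' + 'b');    // 31*100 + 98 = 3198
    }
}
```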
Example: bucket hashing of
"avocado",
"banana",
"canteloupe",
"durian",
"eggplant",
"feijoa",
where hash(s) = s.length();
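With hash(s) = s.length(), the length-6 strings banana, durian, and feijoa all collide in one bucket. A sketch, with an illustrative table size of 11:

```java
import java.util.ArrayList;
import java.util.List;

public class BucketDemo {
    public static void main(String[] args) {
        String[] fruit = {"avocado", "banana", "canteloupe",
                          "durian", "eggplant", "feijoa"};
        int hmax = 11;    // illustrative table size
        List<List<String>> htable = new ArrayList<List<String>>();
        for (int i = 0; i < hmax; i++) htable.add(new ArrayList<String>());
        for (String s : fruit)
            htable.get(s.length() % hmax).add(s);   // hash(s) = s.length()
        for (int i = 0; i < hmax; i++)
            if (!htable.get(i).isEmpty())
                System.out.println(i + ": " + htable.get(i));
    }
}
```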
Many classes choose to "tune" the standard hashCode() by providing their own
version. Many data structures will simply assume that two objects with
different hashcodes are unequal, so it is important when providing an
overriding .equals() method to also provide .hashCode(). In lab 3, I
provided equals() and hashCode() for class LinkedList<T>; for lab 1 I
did this for StrList.
If you were to create a class with its own .equals(), but no .hashCode(),
search might fail with some containers. Given a container of your class,
Java might determine that there was no value in the container that had the
same hashCode() value as the search target, and give up, even if there was
in fact a value in the container that was .equals() to the search target.
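A sketch of this failure mode (the class P here is hypothetical):

```java
import java.util.HashSet;

public class HashPitfall {
    static class P {
        int x;
        P(int x) { this.x = x; }
        @Override public boolean equals(Object o) {
            return (o instanceof P) && ((P) o).x == x;
        }
        // no hashCode() override: two equal P's inherit identity-based
        // Object.hashCode() values and almost surely land in different buckets
    }
    public static void main(String[] args) {
        HashSet<P> set = new HashSet<P>();
        set.add(new P(3));
        System.out.println(new P(3).equals(new P(3)));  // true
        System.out.println(set.contains(new P(3)));     // almost always false!
    }
}
```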
Mid-class exercise: call hashCode() on the following strings:
{
"avocado",
"banana",
"canteloupe",
"durian",
"eggplant",
"feijoa",
"guava",
"hackberry",
"iceberg",
"jicama",
"kale",
"lime",
"mango",
"nectarine",
"orange",
"persimmon",
"quince",
"rutabega",
"spinach",
"tangerine",
};
The above can be assigned to an array string[] A; this is done in
hashFruit.java and hashStats.java (in hash).
1. Do all of you get the same values for s.hashCode()? In C# on Linux, for
"avocado" I get -622659773 and for "guava" I get 98705182. (Java's
String.hashCode() is fully specified, so Java is more consistent across
platforms than C#.)
2. Now use hash(s), in the file above, and put the strings into htable. For
what htablesize do you get buckets with "collisions": more than one string
assigned to it? For what htablesize is this particular table collision-free?
3. Can you think of an orderly way of searching for the answer for #2?
The hash table in hash.cs is not actually an object. What do we have to do
to make it one? Perhaps htablesize could be a parameter to the constructor.
Open Hashing
Another way to do hashing is so-called "open" hashing (also known as open
addressing): a data object d is simply put into htable[hash(d)]. If that
position is taken, the next position is used (linear probing). For this to
work, we need to be sure that htablesize is quite a bit larger than (eg at
least double) the number of elements added. Deletions require careful
thought. See Bailey 15.4.1.
Traversing a Hash Table
If we want to print out a hash table, or construct an iterator to step
through each element in turn, we can simply run linearly through the
hashtable array. For bucket hashing, each hashtable[i] represents a linked
list to be traversed. For open hashing, we simply skip over the unused
elements.
This traversal is in no particular order!
A class based on this is in hashtable.java; note the print() method.
This class uses the string type; there is also inthashclass.cs
that uses int (yes, I should have made this use a generic type).
Hash-table performance
The usual strategy is to choose a table size comparable to the number of
items stored, rehashing as necessary to maintain this as the
number of items grows. This way, the average length of the bucket lists is
1.
This isn't quite as good as it sounds, as the empty buckets figure in the
length average but not into the real-world performance stats. Still, if λ
is the average number of items per bucket, and N is the number of buckets,
then the Poisson distribution says that the expected number of buckets with
k items is N·λ^k·e^(-λ)/k!. If λ=1, this means that list lengths are
approximately distributed as follows:
size of bucket | fraction of buckets
0 | 36.79%
1 | 36.79%
2 | 18.39%
3 | 6.13%
4 | 1.53%
5 | 0.31%
The above assumes that the hash function distributes items among the
buckets randomly. Trying to improve on this is usually not worth the
effort. However, for special cases when there are many times
more lookups than updates, it may pay to attempt to tweak the
hash function to minimize collisions; this would have to be done after
every few updates though. One common approach to tweaking is to try a
range of different values for some numeric parameter built into the hash
function, and then pick the value that makes the hash function perform
best.
Fibonacci Hashing
The performance bottleneck for classic hashing is dividing by the size of
the table. Here's an alternative: the table size is a power of 2, M, and
we have, for integer x
hash(x) = trunc(M×(a×x mod W)/W)
(For string data, we might use hash(s.hashCode()).) W is the word size of
the machine, eg 2³². M is the table size, also a power of 2 (perhaps M =
2¹⁰ = 1024). The value a is W/φ, where φ is the golden ratio (also called
the Fibonacci ratio), (√5+1)/2. For W=2³², a=2,654,435,769.
The reason this often works well is that it is particularly effective at
spreading the hash values of consecutive x's very widely. When used with
open hashing, above, this means we will seldom encounter collisions.
Multiplication by the large value of a, above, is usually quite a bit
faster than dividing by a non-power-of-two table size. Finding a×x modulo
the word size W just means that we did the multiplication and got the
low-order bits only; that is, we ignored the overflow.
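A sketch in Java, for W = 2³² (32-bit int multiplication overflows modulo 2³² for free) and M = 2^m; the class and method names are mine:

```java
public class FibHash {
    static final int A = (int) 2654435769L;   // 2^32 / phi, as a 32-bit value

    // hash into a table of size M = 2^m: multiply, keep the top m bits
    static int fibHash(int x, int m) {
        // A*x overflowing int is exactly "a*x mod W"; the unsigned shift
        // picks off the top m bits, giving a value in 0..M-1
        return (A * x) >>> (32 - m);
    }

    public static void main(String[] args) {
        for (int x = 1; x <= 5; x++)
            System.out.println(fibHash(x, 10));   // consecutive x spread widely
    }
}
```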
Hash Sets and Hash Dictionaries
One way to implement sets of strings (or of a generic type T) is with lists:
class StrSet {
private StrList sl;
public StrSet() {sl = new StrList(100);}
public boolean isMember(String s) {
for (int i=0; i<sl.size(); i++) {
if (sl.get(i).equals(s)) return true;
}
return false;
}
public void add(String s) {
if (isMember(s)) return;
sl.add(s);
}
}
But there is a problem here: the isMember() and add() methods are O(N).
[why?]
Can we do better? Yes, with hashing.
To create a HashSet, we use a hash table as in hashtable.java.
The code for this is in hashsetdemo.java.
class hashset {
private hashtable ht;
public hashset(int size) {ht = new hashtable(size);}
public boolean isMember(String s) {
return ht.isMember(s);
}
public void add(String s) {
if (isMember(s)) return;
ht.add(s);
}
public void print() {
ht.print();
}
}
To run this, it must be linked with hashclass.java.
Dictionaries
To create a dictionary, we will use generic type parameters K for the key
and V for the values. We will rewrite our hashtable class so that the Cell
contains fields for the key (of type K) and value (of type V).
The interface will then be:
- V get (K key): returns the value corresponding to key, or else
default(V) (generally null)
- void add(K key, V val): adds the new pair. Precondition: K is not
already present
- void update(K key, V newval): like add. K may or may not be
present.
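The interface above can be sketched with bucket hashing; the class name, the use of ArrayList for the slots, and the table size in the example are my own choices:

```java
import java.util.ArrayList;

public class HashDict<K, V> {
    private class Cell {
        K key; V val; Cell next;
        Cell(K k, V v, Cell n) { key = k; val = v; next = n; }
    }

    private final int htablesize;
    private final ArrayList<Cell> htable;   // one chain of Cells per slot

    public HashDict(int size) {
        htablesize = size;
        htable = new ArrayList<Cell>();
        for (int i = 0; i < size; i++) htable.add(null);
    }

    private int hash(K key) { return Math.floorMod(key.hashCode(), htablesize); }

    // returns the value for key, or null if the key is absent
    public V get(K key) {
        for (Cell p = htable.get(hash(key)); p != null; p = p.next)
            if (p.key.equals(key)) return p.val;
        return null;
    }

    // add-or-overwrite; add(key, val) would be the same minus the scan
    public void update(K key, V newval) {
        int h = hash(key);
        for (Cell p = htable.get(h); p != null; p = p.next)
            if (p.key.equals(key)) { p.val = newval; return; }
        htable.set(h, new Cell(key, newval, htable.get(h)));
    }
}
```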
Demo: use CountWords.java from the hashing lab and count the word
occurrences in a paragraph pasted in from some other source (these notes, or
else a paragraph from Bailey). CountWords.java uses the java.util Map class
by default, or can be rewritten to use your own hashing class. When working
with a long text string s holding a paragraph, use s.split(" ") to divide
it into words. Extra options:
- Use s.split("[ .,;\\[\\]()\\t]+") to split at other characters besides spaces
- Convert each word to lowercase
Iterators
Suppose we build a hashtable object, ht, of Strings. Suppose we want to
be able to print out the hash table using a for-each loop, like this:
for (String s : ht) System.out.println(s);
What do we have to do? The answer is to define an iterator.
If we just include the foreach loop above, we get this error message:
Error: java: for-each not applicable to expression type
required: array or java.lang.Iterable
found: hashtable
We must make hashtable implement the Iterable interface, as follows:
class hashtable implements Iterable<String> {
To implement an interface is to promise to implement within the class the
methods required by that interface. In this case, we must have a method of
type public Iterator<String>
iterator(). To do this, in turn, we typically have it return an iterator
object that we define for our class:
public Iterator<String> iterator() {
return new hIterator();
}
Now we have to define class
hIterator, which must implement Iterator<String>
(not the same as Iterable<String>
above!). An iterator must implement the following methods:
- boolean hasNext();
- String next();
- void remove();
The first two are all we need for the system to be able to iterate
through the table, returning each successive entry with each call to next:
while (hasNext()) {
System.out.println(next());
}
The iterator keeps track of our position. The idea is that the internal
state of the iterator should always refer to the next element in the
structure.
Before the HashTable example, we'll start with something simpler: an
iterator for an ArrayList class [arraylistiterator]. The variable
pos in the iterator class below keeps track of the position of the
next element of the array; it is initialized to 0 and hasNext() becomes
false when pos == elements.length. We don't implement remove().
private class alIterator implements Iterator<String> {
private int pos;
public alIterator() {
System.out.println("initializing alIterator");
pos = 0;
}
public boolean hasNext() {
return (pos < elements.length);
}
public String next() {
String retval = elements[pos];
pos++;
return retval;
}
public void remove() {
throw new UnsupportedOperationException();
}
}
Now let's do this for a hashtable. We'll need two variables to keep track
of where we are, row and p; p will be a pointer to Cell somewhere in the
list htable[row]. We'll advance to a non-null p initially. After returning
p.getString(), we'll advance to the next non-null p; in most cases this
should be p.next() but sometimes we'll have to advance one or more rows as
well. The private method findnext() takes care of advancing to further
rows in the event that p==null.
private class hIterator implements Iterator<String> {
private int row;
private Cell p;
public hIterator() {
System.out.println("initializing hIterator");
row = 0;
p = htable[row];
findnext();
}
public boolean hasNext() { return (p != null && row < htablesize); }
public String next() {
String retval = p.getString();
p = p.next();
findnext();
return retval;
}
private void findnext() {
while (p==null) {
row++;
if (row >= htablesize) break;
p = htable[row];
}
}
public void remove() { throw new UnsupportedOperationException(); }
}
This is the C# version of the above.
hashtable enumerator: demos/dictionary.cs
A dictionary is a hash table of key-value pairs, each of type
KeyValuePair<K,V>. I want this to work in C#:
foreach (KeyValuePair<string,int> kvp in d)
Console.WriteLine("{0}: {1}", kvp.Key, kvp.Value);
The hashtable is an array of linked lists; the linked-list cell type is
public class Cell {
private K key_;
private V val_;
private Cell next_;
public Cell(K k, V v, Cell n) {key_ = k; val_ = v; next_ = n;}
public K getKey() {return key_;}
public V getVal() {return val_;}
public Cell next() {return next_;}
public void setVal(V v) {val_ = v;}
public void setNext(Cell c) {next_ = c;}
}
To start, I must have class dictionary inherit from
System.Collections.Generic.whatever. This works:
class dictionary : System.Collections.Generic.IEnumerable<KeyValuePair<K,V>> {
Then I must implement the IEnumerable method. The exact method signature
is as follows; note the return type.
IEnumerator<KeyValuePair<K,V>> IEnumerable<KeyValuePair<K,V>>.GetEnumerator() {
return foonumerator();
}
What is up with foonumerator()? That's here:
IEnumerator<KeyValuePair<K,V>> foonumerator() {
for (int i = 0; i < htablesize; i++) {
Cell p = htable[i];
while (p != null) {
yield return new KeyValuePair<K,V>(p.getKey(), p.getVal());
p = p.next();
}
}
yield break;
}
Why didn't I just write this body directly in GetEnumerator(), above?
Because we must also implement the non-generic form of IEnumerable, due to
inheritance constraints. I did that this way:
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() {
return foonumerator();
}
Otherwise I would have to type everything twice.
I figured this all out by reading the MSDN Dictionary.cs reference source.