Well, if you understand a linked list, a hash table is easy: it's basically an array of linked lists. You do something like this:
Code:
struct foo
{
    struct foo *next; /* optionally a prev pointer too */
    int key;
    /* ...data... */
};
struct foo *hashtable[ NUMBER_OF_BUCKETS ];
Now all you do is "hash" the key to find out which bucket it goes into. Then, just like when you "sort at time of insertion" with a normal linked list, you do the same here.
You find the bucket it goes into:
Code:
bucket = &hashtable[ toinsert->key % NUMBER_OF_BUCKETS ];
Now you pass that bucket over to a function along with whatever you're inserting.
Code:
void insertme( struct foo **bucket, struct foo *toinsert )
{
    /* ...insert it in numerical order based on the key (not the hash of the key)... */
}
The reason you need to pass the address of the bucket is so that you can actually update the bucket itself. That is to say, if the bucket is NULL, meaning nothing is in it right now, or if the node already at the head of the list has a bigger key than the one we're inserting, we need to make this new node the head of the list:
Code:
if( hashtable[ HASHED_KEY ] == NULL || hashtable[ HASHED_KEY ]->key > toinsert->key )
{
toinsert->next = hashtable[ HASHED_KEY ];
hashtable[ HASHED_KEY ] = toinsert;
}
The above was just an illustration of the point. You wouldn't actually write the check against the table directly like that. Rather, since this would be inside a function as listed above, it would be more along the lines of:
Code:
if( *bucket == NULL || (*bucket)->key > toinsert->key )
{
...fix it up here...
}
At any rate, it's a pretty easy concept if you understand linked lists. Now, assuming your hash spreads the keys around the buckets fairly evenly, your average search time becomes about 1/NUMBER_OF_BUCKETS of the time it would take you to simply run through a standard linked list, because you have it split into NUMBER_OF_BUCKETS lists, and you just run through one of those smaller lists instead of one huge one.
Quzah.