Can I have some simple, effective, easy-to-implement hash algorithm?
Can you use Google?
1. Get rid of gets(). Never ever ever use it again. Replace it with fgets().
2. Get rid of void main and replace it with int main(void) and return 0 at the end of the function.
3. Get rid of conio.h and other antiquated DOS crap headers.
4. Don't cast the return value of malloc, even if you always always always make sure that stdlib.h is included.
Simplest
Code:
unsigned int simplest_hash(const char *str) { return 0; }
Most effective
gperf - GNU Project - Free Software Foundation (FSF)
Consider this post signed
Under pretty restricted circumstances yeah, but thanks for that -- interesting.
More generally, "no collisions" is not a worthwhile goal for a general purpose hash table algorithm. The idea is very simple:

Code:
int hash(char *str, int tsz)
{
    int i, r = 0, len = strlen(str);
    for (i = 0; i < len; i++)
        r += str[i];
    return r % tsz;
}

"tsz" is the table size. You can "double hash" by using (eg) a top level table in the form of a 100 element array, each of which is a further array of 12. If you then make the inner level an array of (eg) red-black binary trees, you end up with the most high performance kind of general purpose, random access data structure currently conceivable (if anyone has run across something better, please tell!)
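As a sketch of how a hash like that slots into a chained table (the table size 101, the node type, and the insert/contains names here are my own illustrative choices, not from the post):

```c
#include <stdlib.h>
#include <string.h>

#define TSZ 101  /* illustrative table size */

struct node {
    char *key;
    struct node *next;
};

static struct node *table[TSZ];

/* the simple additive hash: sum the bytes, reduce modulo table size */
static int hash(const char *str, int tsz)
{
    int i, r = 0, len = (int)strlen(str);
    for (i = 0; i < len; i++)
        r += str[i];
    return r % tsz;
}

/* insert a key at the head of its bucket's chain */
static void insert(const char *key)
{
    int h = hash(key, TSZ);
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return;
    n->key = malloc(strlen(key) + 1);
    if (n->key == NULL) {
        free(n);
        return;
    }
    strcpy(n->key, key);
    n->next = table[h];
    table[h] = n;
}

/* walk the chain for the key's bucket */
static int contains(const char *key)
{
    struct node *n;
    for (n = table[hash(key, TSZ)]; n != NULL; n = n->next)
        if (strcmp(n->key, key) == 0)
            return 1;
    return 0;
}
```

Collisions just extend a chain, which is why "no collisions" doesn't need to be the goal; a decent spread across buckets is enough.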
C programming resources:
GNU C Function and Macro Index -- glibc reference manual
The C Book -- nice online learner guide
Current ISO draft standard
CCAN -- new CPAN like open source library repository
3 (different) GNU debugger tutorials: #1 -- #2 -- #3
cpwiki -- our wiki on sourceforge
FNV-1a and CRC are two of my favourites that fit your criteria.
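For reference, 32-bit FNV-1a is only a few lines; this is a sketch using the published 32-bit FNV parameters:

```c
#include <stdint.h>
#include <stddef.h>

/* 32-bit FNV-1a: XOR each byte into the hash, then multiply by the
   FNV prime.  Constants are the published 32-bit FNV parameters. */
uint32_t fnv1a_32(const void *data, size_t len)
{
    const unsigned char *p = data;
    uint32_t h = 2166136261u;       /* FNV offset basis */
    size_t i;
    for (i = 0; i < len; i++) {
        h ^= p[i];
        h *= 16777619u;             /* FNV prime */
    }
    return h;
}
```

Reduce the result modulo your table size; using a prime table size helps when the hash itself is weak, though FNV-1a mixes well enough on its own for most string keys.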
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
I've considered that before, but I think making the hash entries trees is overkill. You might think about doing this if you are very space-constrained. Suppose you only had room for 256 entries in the hash table. That's equivalent to an extra tree depth of 8 (since log2 256 = 8); in other words, you could do everything with a single tree and 8 additional comparisons. Unless comparisons are very costly, that's really not much overhead. You probably do more work computing the hash in the first place.
But if you know of an instance where the hybrid hash/tree thingie was a good tradeoff, I'd be interested to hear what that is.
Code:
//try
//{
    if (a)
        do { f( b); } while(1);
    else
        do { f(!b); } while(1);
//}