Or, perhaps I should say
Quantify "bad"
bad here means that it's slow!
That's still vague.
You could read the two lists of words into two dynamic arrays, sort them, and then do a single pass through both of them simultaneously to find the words in the second list that appear in the first list. Since the resulting list is sorted, another dynamic array will do, assuming you do not intend to frequently insert elements into the middle of it.

It's not a dictionary. First, I read a list of words and put it into some data structure.
Secondly, I read a different file, which is also a list of words, and compare its words against the first file; whenever a word matches, I copy it into a second data structure. Thirdly, I iterate through that structure and put the words into a new data structure so that the list of words ends up sorted. That's basically all I need to do.
So what is the most efficient and fastest way to do this? Which data structure should I use at each of the three steps?
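The sort-both-lists-then-merge suggestion above can be sketched roughly as follows; the function name `sorted_intersect` and the assumption that each list is duplicate-free are mine, not from the thread:

```c
#include <stdlib.h>
#include <string.h>

/* qsort comparator for an array of char pointers */
static int cmp_str(const void *a, const void *b)
{
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

/* Sort both lists in place, then walk them in lockstep; each word of
   list2 that also appears in list1 is written to out[], which comes out
   sorted for free.  Assumes no duplicates within a list.  Returns the
   number of matches (out needs room for up to n2 pointers). */
static size_t sorted_intersect(char **list1, size_t n1,
                               char **list2, size_t n2, char **out)
{
    size_t i = 0, j = 0, k = 0;

    qsort(list1, n1, sizeof *list1, cmp_str);
    qsort(list2, n2, sizeof *list2, cmp_str);

    while (i < n1 && j < n2) {
        int c = strcmp(list1[i], list2[j]);
        if (c < 0)
            i++;             /* word only in list1: skip it */
        else if (c > 0)
            j++;             /* word only in list2: skip it */
        else {
            out[k++] = list2[j];  /* match; output stays sorted */
            i++;
            j++;
        }
    }
    return k;
}
```

Two qsort calls cost O(n log n) each, and the merge pass is linear, which covers all three of the steps asked about: the sorted output needs no separate third pass.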
Alternatively, just sort the first list, and then use binary search on it for each word of the second list. However, the resulting list of matches would then not be sorted, so you would have to sort it afterwards.
Of course, the point that people are making is that if your lists of words are sufficiently small, doing all this sorting would not help, and would merely be wasted effort.
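The binary-search alternative might look like this sketch, built on the standard `bsearch`; the name `bsearch_matches` is illustrative:

```c
#include <stdlib.h>
#include <string.h>

/* qsort/bsearch comparator for an array of char pointers */
static int cmp_str(const void *a, const void *b)
{
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

/* Sort only the first list, binary-search each word of the second list
   in it, then sort the matches at the end (they initially come out in
   list2's order, not sorted order). */
static size_t bsearch_matches(char **list1, size_t n1,
                              char **list2, size_t n2, char **out)
{
    size_t k = 0;

    qsort(list1, n1, sizeof *list1, cmp_str);
    for (size_t j = 0; j < n2; j++)
        if (bsearch(&list2[j], list1, n1, sizeof *list1, cmp_str))
            out[k++] = list2[j];

    qsort(out, k, sizeof *out, cmp_str);  /* the extra sort noted above */
    return k;
}
```

For small lists a plain nested-loop comparison may well beat both versions in practice, which is the "wasted effort" point being made.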
I am thinking of storing the first file in a hash table first...
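A minimal chained hash table for that idea could be sketched like this; the fixed bucket count, the djb2-style hash, and all names here are assumptions for illustration, not a vetted implementation:

```c
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 1024          /* fixed table size for this sketch */

struct node {
    char *word;
    struct node *next;
};

/* djb2-style string hash, reduced to a bucket index */
static unsigned long hash(const char *s)
{
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % NBUCKETS;
}

/* Prepend the word to its bucket's chain (no duplicate check). */
static void ht_insert(struct node **tab, char *word)
{
    unsigned long h = hash(word);
    struct node *n = malloc(sizeof *n);
    n->word = word;
    n->next = tab[h];
    tab[h] = n;
}

/* Walk one bucket's chain looking for an exact match. */
static int ht_contains(struct node **tab, const char *word)
{
    for (struct node *n = tab[hash(word)]; n != NULL; n = n->next)
        if (strcmp(n->word, word) == 0)
            return 1;
    return 0;
}
```

Load the first file with `ht_insert`, probe each word of the second file with `ht_contains` (average O(1) per lookup), collect the matches into an array, and `qsort` that array once at the end for the sorted result.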