iterating through an array of, say, 1000 elements, or checking a hard drive (not an unusually fast one) to see if a file exists (not reading any data directly from the file). An educated guess will do just fine.
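Roughly, these are the two operations I'm comparing (the file check here uses fopen() just as an illustration; on POSIX systems access() or stat() would avoid actually opening the file):
[code]
#include <stdio.h>

/* returns 1 if the file can be opened, 0 otherwise -- portable but blunt;
   access() or stat() would be lighter-weight where available */
int file_exists(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return 0;
    fclose(fp);
    return 1;
}

/* the alternative: a linear scan through up to 1000 entries */
int in_array(const int *arr, int len, int target)
{
    int i;
    for (i = 0; i < len; i++)
        if (arr[i] == target)
            return 1;
    return 0;
}
[/code]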
C Code. C Code Run. Run Code Run... Please!
"Love is like a blackhole, you fall into it... then you get ripped apart"
>iterating through an array of, say, 1000 elements, or checking a hard drive (not an unusually fast one) to see if a file exists
Why do you need to compare the performance of two completely different operations?
-Prelude
My best code is written with the delete key.
I thought of two ways to accomplish something; I just wanted to know which one in general is faster.
C Code. C Code Run. Run Code Run... Please!
"Love is like a blackhole, you fall into it... then you get ripped apart"
>just wanted to know which one in general is faster
It doesn't matter. Use the solution that is easier, and if it turns out to be a bottleneck (unlikely) you can do something else or optimize what you have.
-Prelude
My best code is written with the delete key.
but the easier solution requires me to check if a file exists in excess of 100,000 times! The other would require me to search through an array of varying length 100,000 times. I guess I had better use the harder method, since the list I have to iterate through (in my second solution) will most likely be relatively small (anywhere between 0 and 1000 elements).
C Code. C Code Run. Run Code Run... Please!
"Love is like a blackhole, you fall into it... then you get ripped apart"
>but the easier solution requires me to check if a file exists in excess of 100,000 times!
The reason you go with the easier solution is twofold:
1) Performance bottlenecks are notorious for not being in intuitive places, so you can't tell before profiling the code whether or not the easier solution is dreadfully slow.
2) The easier solution is easier: it's faster to create and simpler to understand. If you can get away with it, then do so.
So you start by implementing the easiest solution first, then if after profiling you find that the easiest solution is too slow, switch it out for a more complicated solution. Since you went with the easy solution first, you didn't waste much time finding out that it wasn't good enough.
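For what it's worth, once a version exists, a crude timing run only takes a few lines. Here's a sketch using clock() from <time.h> that times the worst case of the array approach, using the numbers from this thread (100,000 scans of a 1000-element array):
[code]
#include <stdio.h>
#include <time.h>

#define QUERIES 100000
#define LEN     1000

int main(void)
{
    static int arr[LEN];   /* zero-initialized; contents don't matter here */
    clock_t start;
    double elapsed;
    long i, hits = 0;
    int j;

    start = clock();
    for (i = 0; i < QUERIES; i++)
        for (j = 0; j < LEN; j++)
            if (arr[j] == -1)   /* worst case: target is never found */
                hits++;
    elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("%ld scans took %.3f seconds (hits=%ld)\n",
           (long)QUERIES, elapsed, hits);
    return 0;
}
[/code]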
-Prelude
My best code is written with the delete key.
You're doing something wrong.
You might want to consider storing the filenames in a hash table or using some type of tree structure.
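Something along these lines: a minimal chained hash table using the classic djb2 string hash. This is just a sketch; the table size is arbitrary, error checking is omitted, and strdup() is POSIX rather than ANSI C:
[code]
#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 1024   /* arbitrary; pick something near your expected count */

struct node {
    char *name;
    struct node *next;
};

static struct node *table[TABLE_SIZE];

/* djb2 string hash */
static unsigned long hash(const char *s)
{
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % TABLE_SIZE;
}

/* add a filename to the table (no duplicate check, for brevity) */
void insert(const char *name)
{
    unsigned long h = hash(name);
    struct node *n = malloc(sizeof *n);
    n->name = strdup(name);   /* strdup() is POSIX; easy to write by hand if needed */
    n->next = table[h];
    table[h] = n;
}

/* O(1) average lookup instead of a linear scan */
int contains(const char *name)
{
    struct node *n;
    for (n = table[hash(name)]; n != NULL; n = n->next)
        if (strcmp(n->name, name) == 0)
            return 1;
    return 0;
}
[/code]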
>You're doing something wrong
Perhaps. But as I said in my previous thread, I didn't want to rewrite any of the existing code. Although it's starting to look unavoidable at this point.
C Code. C Code Run. Run Code Run... Please!
"Love is like a blackhole, you fall into it... then you get ripped apart"
Prelude is absolutely correct. Usually you can take the way you write something initially and optimize it down to something usable.
What exactly are you doing *ClownPimp*? You may be blowing something simple way out of proportion. And to answer the initial question: all things being equal (in computers this is never the case), scanning an array of 1000 elements of a modestly sized data structure shouldn't take much time. However, you would probably be better off using temp files for large objects.
Long story short, it sounds like the hash table approach would be simple enough for whatever it is you're doing. I hate it when I have to rewrite code, but that's just life.
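To make the hash table suggestion concrete: read the directory once up front, then answer all 100,000 queries from memory. A sketch assuming POSIX opendir()/readdir() and the insert()/contains() functions posted above (the directory name and query filename are made up):
[code]
#include <dirent.h>
#include <stdio.h>

void insert(const char *name);    /* from the hash table sketch above */
int contains(const char *name);

int main(void)
{
    DIR *dir;
    struct dirent *entry;

    /* one pass over the directory instead of 100,000 disk hits */
    dir = opendir(".");
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    while ((entry = readdir(dir)) != NULL)
        insert(entry->d_name);
    closedir(dir);

    /* every later existence check is now an in-memory lookup */
    printf("%s\n", contains("data.txt") ? "found" : "missing");
    return 0;
}
[/code]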