which one is faster....

This is a discussion on which one is faster.... within the C++ Programming forums, part of the General Programming Boards category; iterating through an array of, say, 1000 elements, or checking a hard drive (not an unusually fast one) to see ...

  1. #1
    Registered User
    Join Date
    Jan 2002
    Posts
    552

    which one is faster....

    iterating through an array of, say, 1000 elements, or checking a hard drive (not an unusually fast one) to see if a file exists (not reading any data directly from the file). An educated guess will do just fine.
    C Code. C Code Run. Run Code Run... Please!

    "Love is like a blackhole, you fall into it... then you get ripped apart"
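    For context, the two operations being compared might look something like this (a minimal sketch; the element type, array size, and filename are made up for illustration, and the file-existence check shown is just one common idiom):

    ```cpp
    #include <fstream>
    #include <string>

    // Linear scan of an array: operates entirely in memory.
    bool in_array(const int arr[], int size, int target) {
        for (int i = 0; i < size; ++i)
            if (arr[i] == target)
                return true;
        return false;
    }

    // File-existence check: touches the filesystem, which is typically
    // orders of magnitude slower than an in-memory scan when the file's
    // metadata isn't already cached by the OS.
    bool file_exists(const std::string& name) {
        std::ifstream f(name.c_str());
        return f.is_open();
    }
    ```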

  2. #2
    Code Goddess Prelude's Avatar
    Join Date
    Sep 2001
    Posts
    9,796
    >iterating through an array of, say, 1000 elements, or checking a hard drive (not an unusally fast one) to see if a file exists
    Why do you need to compare the performance of two completely different operations?

    -Prelude
    My best code is written with the delete key.

  3. #3
    Registered User
    Join Date
    Jan 2002
    Posts
    552
    I thought of two ways to accomplish something; I just wanted to know which one is generally faster.
    C Code. C Code Run. Run Code Run... Please!

    "Love is like a blackhole, you fall into it... then you get ripped apart"

  4. #4
    Code Goddess Prelude's Avatar
    Join Date
    Sep 2001
    Posts
    9,796
    >just wanted to know which one in general is faster
    It doesn't matter. Use the solution that is easier, and if it turns out to be a bottleneck (unlikely), then you can do something else or optimize what you have.

    -Prelude
    My best code is written with the delete key.

  5. #5
    Registered User
    Join Date
    Jan 2002
    Posts
    552
    But the easier solution requires me to check whether a file exists in excess of 100,000 times! The other would require me to search through an array of varying length 100,000 times. I guess I had better use the harder method, since the list I have to iterate through (in my second solution) will most likely be relatively small (anywhere between 0 and 1000 elements).
    Last edited by *ClownPimp*; 11-09-2002 at 11:53 AM.
    C Code. C Code Run. Run Code Run... Please!

    "Love is like a blackhole, you fall into it... then you get ripped apart"

  6. #6
    Code Goddess Prelude's Avatar
    Join Date
    Sep 2001
    Posts
    9,796
    >but the easier solution requires me to check if a file exists in excess of 100,000 times!
    The reason you go with the easier solution is twofold:

    1) Performance bottlenecks are notorious for not being in intuitive places, so you can't tell before profiling the code whether or not the easier solution is dreadfully slow.

    2) The easier solution is easier: it's faster to create and simpler to understand. If you can get away with it then do so.

    So you start by implementing the easiest solution first, then if after profiling you find that the easiest solution is too slow, switch it out for a more complicated solution. Since you went with the easy solution first, you didn't waste much time finding out that it wasn't good enough.

    -Prelude
    My best code is written with the delete key.
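    The profiling advice above can be applied with nothing fancier than a crude timing harness (a sketch using std::clock; the inner loop is a stand-in for whichever operation you actually want to measure, and the function name is made up here):

    ```cpp
    #include <cstdio>
    #include <ctime>

    // Crude timing harness: run the candidate operation many times and
    // report the CPU time consumed.  Rough, but enough to compare two
    // approaches before committing to the more complicated one.
    double time_array_scan(int iterations) {
        volatile long sum = 0;  // volatile so the loop isn't optimized away
        std::clock_t start = std::clock();
        for (int i = 0; i < iterations; ++i)
            for (int j = 0; j < 1000; ++j)  // stand-in for a 1000-element scan
                sum += j;
        return double(std::clock() - start) / CLOCKS_PER_SEC;
    }

    int main() {
        std::printf("100,000 scans took %.3f seconds\n",
                    time_array_scan(100000));
        return 0;
    }
    ```

    Swap the stand-in loop for the file-existence check and compare the two numbers; that settles the question for your machine rather than in the abstract.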

  7. #7
    Blank
    Join Date
    Aug 2001
    Posts
    1,034
    You're doing something wrong. You might want to consider storing the filenames in a hashtable or using some type of tree structure.
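    The tree-structure suggestion might be sketched like this (the class and member names are invented for illustration; std::set is a balanced binary tree, so each lookup is O(log n) rather than a linear scan or a trip to the disk):

    ```cpp
    #include <set>
    #include <string>

    // Keep the known filenames in an in-memory index so that repeated
    // existence checks never have to touch the filesystem.
    class FileIndex {
    public:
        void add(const std::string& name) { names_.insert(name); }

        bool contains(const std::string& name) const {
            return names_.find(name) != names_.end();
        }

    private:
        std::set<std::string> names_;  // balanced tree: O(log n) lookups
    };
    ```

    Populated once up front, 100,000 lookups against an index like this stay entirely in memory.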

  8. #8
    Registered User
    Join Date
    Jan 2002
    Posts
    552
    >You're doing something wrong
    Perhaps. But as I said in my previous thread, I didn't want to rewrite any of the existing code, although it's starting to look unavoidable at this point.
    C Code. C Code Run. Run Code Run... Please!

    "Love is like a blackhole, you fall into it... then you get ripped apart"

  9. #9
    Banned master5001's Avatar
    Join Date
    Aug 2001
    Location
    Visalia, CA, USA
    Posts
    3,685
    Prelude is absolutely correct. Usually you can take the way you write something initially and optimize it down to something usable.

    What exactly are you doing, *ClownPimp*? You may be blowing something simple way out of proportion. And to answer the initial question: all things being equal (in computers this is never the case), an array of 1000 elements of a nominally sized data structure shouldn't take too much time to scan. However, you would probably be better off using temp files for large objects.

    Long story short, it sounds like the hash table thing would be simple enough for whatever it is that you are doing. I hate it when I have to rewrite code, but that's just life.
