Which of these is more efficient?

This is a discussion on Which of these is more efficient? within the C++ Programming forums, part of the General Programming Boards category; The idea is this. I have an index array, and a value array. I want to get the index by ...

  1. #1
    Registered User
    Join Date
    Jan 2005
    Posts
    106

    Which of these is more efficient?

    The idea is this. I have an index array, and a value array. I want to get the index by searching through the index array, and then set the corresponding position in the value array to some new value. There are two ways I can do this:

    Code:
    var_pos = get_index(var_names,search);
    var_vals[var_pos] = new_Value;
    or

    Code:
    var_vals[get_index(var_names,search)] = new_Value;
    Which one's the preferred method?

  2. #2
    carry on JaWiB's Avatar
    Join Date
    Feb 2003
    Location
    Seattle, WA
    Posts
    1,972
    As far as efficiency goes, there's probably very little difference, and I would guess you don't even need to be optimizing at this point (that should generally come toward the end of the development process). I would say it's more of a style or even a readability issue: the second method might end up being less readable, for example. Otherwise it's probably not a big deal either way.
    "Think not but that I know these things; or think
    I know them not: not therefore am I short
    Of knowing what I ought."
    -John Milton, Paradise Regained (1671)

    "Work hard and it might happen."
    -XSquared

  3. #3
    Registered User
    Join Date
    Nov 2005
    Posts
    52
    Strictly from the "My $0.02" department, I'd lose the separate arrays and use a std::map<indexType, valueType>. Lookups become O(log n) instead of a linear scan through the whole index array, which is a LOT more efficient once the arrays grow.
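    To make that concrete, here's a minimal sketch of what replacing the parallel arrays with a std::map might look like. The helper name set_var and the string/int key and value types are assumptions for illustration, not taken from the original code:

    ```cpp
    #include <cassert>
    #include <map>
    #include <string>

    // set_var replaces the two-step get_index + assignment from the original
    // snippet: operator[] does the O(log n) lookup and the insert-or-overwrite
    // in a single step.
    void set_var(std::map<std::string, int>& table,
                 const std::string& name, int value) {
        table[name] = value;
    }

    int main() {
        std::map<std::string, int> vars;
        set_var(vars, "alpha", 1);
        set_var(vars, "alpha", 42);   // overwrites the existing entry
        assert(vars["alpha"] == 42);
        assert(vars.size() == 1);
        return 0;
    }
    ```

    Since operator[] does the lookup and the assignment together, the separate get_index call disappears entirely.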

  4. #4
    Registered User Kurisu's Avatar
    Join Date
    Feb 2006
    Posts
    62
    Doesn't make much of a difference. Programmer's discretion as to what he/she feels is more readable. Efficiency will be determined by your search algorithm [linear scan; divide and conquer; etc.] + the structure used [hash table; tree; array; etc.].

    Whether or not to bypass the variable var_pos is trivial to a computer. It's like asking Albert Einstein what's 1+1.

    Good luck!
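    To illustrate the point above that the search strategy is what dominates, here's a hedged sketch of two hypothetical get_index variants (the names and signatures are guesses modeled on the original snippet, not the poster's actual code): a linear scan, and a binary search via std::lower_bound, which requires the names array to be sorted:

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <string>
    #include <vector>

    // O(n): scans every entry until it finds a match; works on any ordering.
    int get_index_linear(const std::vector<std::string>& names,
                         const std::string& search) {
        for (std::size_t i = 0; i < names.size(); ++i)
            if (names[i] == search) return static_cast<int>(i);
        return -1;  // not found
    }

    // O(log n): binary search; `names` must be sorted for this to be valid.
    int get_index_sorted(const std::vector<std::string>& names,
                         const std::string& search) {
        std::vector<std::string>::const_iterator it =
            std::lower_bound(names.begin(), names.end(), search);
        if (it != names.end() && *it == search)
            return static_cast<int>(it - names.begin());
        return -1;  // not found
    }

    int main() {
        std::vector<std::string> names;
        names.push_back("alpha");
        names.push_back("beta");
        names.push_back("gamma");  // already sorted
        assert(get_index_linear(names, "beta") == 1);
        assert(get_index_sorted(names, "beta") == 1);
        assert(get_index_linear(names, "zeta") == -1);
        return 0;
    }
    ```

    Whichever of the two assignment styles you pick afterwards costs the same; the difference between these two functions is what actually shows up in timings.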

  5. #5
    Devil's Advocate SlyMaelstrom's Avatar
    Join Date
    May 2004
    Location
    Out of scope
    Posts
    4,069
    Well, now I have to mention that Einstein sucked at math and is well known for failing it. I guess that's beside the point with 1+1, though.
    Sent from my iPad®

  6. #6
    chococoder
    Join Date
    Nov 2004
    Posts
    515
    The second is somewhat harder to read and only saves you a single assignment operation, which should take no more than a few CPU cycles.
    Unless this happens in a very tight loop where that assignment makes up a major portion of each iteration, it's not going to make any noticeable difference in performance.

  7. #7
    Registered User
    Join Date
    Oct 2001
    Posts
    2,934
    >var_vals[get_index(var_names,search)] = new_Value;
    I like this one better, but it's a programmer preference issue. Use the one you understand.

    Efficiency-wise, they are identical. The compiler will do the optimization.

  8. #8
    Registered User
    Join Date
    Jan 2005
    Posts
    106
    Ah, okay. I was really just asking because that's in a source file I'd rather NOT have split (because it has genuinely related functions; there are just a LOT of them). It's less readable, but it's also relatively easy to unfold for debugging purposes, and I'd rather just have it condensed so that I don't end up with a single 2000-line source file.

    As for get_index, if I take the bounds checking out, the entire code is:

    Code:
    while (x != search) { x++; } return x;
    With or without, it can get through ~70000 entries in 5 seconds. Which is roughly the same speed as just having -- erm, correction: cout-ing x each time makes it take five seconds; not cout-ing x until the operation is done takes about 1 second. It doesn't take two seconds until I increase the total number of entries to 699999, and it needs to be 6999999* before major slowdown really occurs. I don't know why anyone would sanely have seven million entries in a vector, though. So, er, no, it's not going to be really slow (or slow at all, really) unless the vector somehow gets really, really full of stuff.

    Also, I can't find a way to instantiate maps to anything. Could be a problem.

    * It might take less to get slower. Probably does. I was just using numbers based on 70000 entries for some reason. 1 million is still manageable.
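    On the map-instantiation question: if the trouble is just syntax, a std::map needs no special setup. The default constructor gives an empty map and operator[] inserts entries on first use. A minimal sketch (build_table is a hypothetical name for illustration):

    ```cpp
    #include <cassert>
    #include <map>
    #include <string>

    // Default-construct an empty map, then let operator[] insert entries.
    std::map<std::string, double> build_table() {
        std::map<std::string, double> vals;  // starts empty, no arguments needed
        vals["x"] = 1.5;
        vals["y"] = 2.5;
        return vals;
    }

    int main() {
        std::map<std::string, double> vals = build_table();

        // A map can also be copy-constructed from another map.
        std::map<std::string, double> copy(vals);

        assert(copy.size() == 2);
        assert(copy["x"] == 1.5);
        return 0;
    }
    ```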

  9. #9
    chococoder
    Join Date
    Nov 2004
    Posts
    515
    and you could condense that to x = search; which indeed is a lot faster than the loop you use now
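    The reasoning behind that condensation, sketched out: the posted loop compares the counter itself against search, so it terminates exactly when x reaches search, and the final value is always just search (assuming x starts at or below it). The loop_version wrapper below is hypothetical, written only to demonstrate the equivalence:

    ```cpp
    #include <cassert>

    // Reproduces the posted loop: counts x up until it equals search.
    // Equivalent to the single assignment x = search.
    int loop_version(int x, int search) {
        while (x != search) { x++; }
        return x;
    }

    int main() {
        assert(loop_version(0, 70000) == 70000);
        assert(loop_version(5, 70000) == 70000);  // same result from any start <= search
        return 0;
    }
    ```

    Which also suggests the real get_index should be comparing array *elements* against search, not the index itself.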
