Thread: which data structure is faster?

  1. #1
    Registered User
    Join Date
    Aug 2009
    Posts
    168

    which data structure is faster?

    Code:
    char a = 'A';
    char b = 'T';
    a < b ? 0 : 1;
    
    int x = 0;
    int y = 1;
    x < y ? 0 : 1;
    Of the two comparisons, a against b and x against y,
    which is faster?
    Why?

  2. #2
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
    Well, compile and run it for yourself! It'll need some timer logic, of course...
    Code:
    #include <cmath>
    #include <complex>
    // A gratuitously over-engineered logical NOT: e^(i*pi) has real part -1 and
    // e^(i*2*pi) has real part +1, so testing the sign of the real part flips the input.
    bool euler_flip(bool value)
    {
        return std::pow
        (
            std::complex<float>(std::exp(1.0)), 
            std::complex<float>(0, 1) 
            * std::complex<float>(std::atan(1.0)
            *(1 << (value + 2)))
        ).real() < 0;
    }
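    For what it's worth, here is a rough sketch of that timer logic, assuming a hosted C compiler with <time.h>; the iteration count, the volatile qualifiers and the sink variable are just illustrative choices to keep the loops from being optimised away:
    Code:
    #include <stdio.h>
    #include <time.h>

    #define ITERATIONS 100000000UL

    int main(void)
    {
        /* volatile keeps the optimiser from deleting the loops outright */
        volatile char a = 'A', b = 'T';
        volatile int  x = 0,  y = 1;
        volatile int  sink;
        unsigned long i;
        clock_t start, mid, end;

        start = clock();
        for (i = 0; i < ITERATIONS; i++)
            sink = a < b ? 0 : 1;          /* char comparison */
        mid = clock();
        for (i = 0; i < ITERATIONS; i++)
            sink = x < y ? 0 : 1;          /* int comparison */
        end = clock();

        printf("char compare: %f s\n", (double)(mid - start) / CLOCKS_PER_SEC);
        printf("int  compare: %f s\n", (double)(end - mid) / CLOCKS_PER_SEC);
        return 0;
    }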

  3. #3
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    The answer is actually "it depends". I'll leave working out why as an exercise.
    Right 98% of the time, and don't care about the other 3%.

    If I seem grumpy or unhelpful in reply to you, or tell you you need to demonstrate more effort before you can expect help, it is likely you deserve it. Suck it up, Buttercup, and read this, this, and this before posting again.

  4. #4
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    Depends on the CPU.

    If you know which CPU it's for, you can write it out in assembly, look up how long each instruction takes in the CPU's manual, and you'll know the time.
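    If you don't want to hand-write the assembly, a simple alternative is to put each comparison in its own little function and ask the compiler for an assembly listing (for example gcc -O2 -S compare.c); the file and function names here are just placeholders:
    Code:
    /* compare.c -- build with "gcc -O2 -S compare.c" (or your compiler's
       equivalent) and read compare.s to see which instructions are used. */
    int compare_chars(char a, char b)
    {
        return a < b ? 0 : 1;
    }

    int compare_ints(int x, int y)
    {
        return x < y ? 0 : 1;
    }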

  5. #5
    ATH0 quzah's Avatar
    Join Date
    Oct 2001
    Posts
    14,826
    #define char int

    Now they're the same.


    Quzah.
    Hope is the first step on the road to disappointment.

  6. #6
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,318
    There's no difference whatsoever; they both do nothing and would both be optimised down to nothing for three reasons:
    1) The result of the ternary operator is not used
    2) There are no side-effects
    3) A decent compiler should work out that the result of the ternary operator is constant
    My homepage
    Advice: Take only as directed - If symptoms persist, please see your debugger

    Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"

  7. #7
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    Quote Originally Posted by iMalc View Post
    There's no difference whatsoever; they both do nothing and would both be optimised down to nothing for three reasons:
    1) The result of the ternary operator is not used
    2) There are no side-effects
    3) A decent compiler should work out that the result of the ternary operator is constant
    .... if the compiler is performing optimisations and eliminating code with no observable effect. Not all decent compilers do that by default.
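    As a minimal sketch of keeping the comparison observable even when the optimiser is on (plain standard C, nothing else assumed), store the result in a volatile object or otherwise use it:
    Code:
    #include <stdio.h>

    int main(void)
    {
        char a = 'A';
        char b = 'T';
        volatile int result;        /* a volatile store is an observable effect, */
                                    /* so the compare cannot be thrown away      */
        result = a < b ? 0 : 1;
        printf("%d\n", result);
        return 0;
    }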
    Right 98% of the time, and don't care about the other 3%.

    If I seem grumpy or unhelpful in reply to you, or tell you you need to demonstrate more effort before you can expect help, it is likely you deserve it. Suck it up, Buttercup, and read this, this, and this before posting again.

  8. #8
    Hurry Slowly vart's Avatar
    Join Date
    Oct 2006
    Location
    Rishon LeZion, Israel
    Posts
    6,788
    Quote Originally Posted by grumpy View Post
    .... if the compiler is performing optimisations and eliminating code with no observable effect. Not all decent compilers do that by default.
    If a programmer is so interested in performance that he is considering changing the types of his variables, he should start by selecting the best compiler available... That switch will give a far bigger performance boost than any such micro-optimisations...
    All problems in computer science can be solved by another level of indirection,
    except for the problem of too many layers of indirection.
    – David J. Wheeler

  9. #9
    Woof, woof! zacs7's Avatar
    Join Date
    Mar 2007
    Location
    Australia
    Posts
    3,459
    That's not a data structure. So none of them...

  10. #10
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    Quote Originally Posted by vart View Post
    If a programmer is so interested in performance that he is considering changing the types of his variables, he should start by selecting the best compiler available... That switch will give a far bigger performance boost than any such micro-optimisations...
    Maybe. The most common reason I see for people asking about variable types and performance in the same sentence is that they are practicing premature optimisation.

    Software architecture, one of those things too many coders ignore, can determine performance. The choice of a good versus a bad algorithm is also often a key determinant of performance. And a profiler can help identify the real hot spots and opportunities. Compared with those, choice of compiler is often a micro-optimisation: even the best compiler can only do so much to compensate for bad code.
    Right 98% of the time, and don't care about the other 3%.

    If I seem grumpy or unhelpful in reply to you, or tell you you need to demonstrate more effort before you can expect help, it is likely you deserve it. Suck it up, Buttercup, and read this, this, and this before posting again.

  11. #11
    Registered User
    Join Date
    Sep 2008
    Location
    Toronto, Canada
    Posts
    1,834
    I would try to reason it out this way:

    1) Comparing one byte with another ought to be faster than comparing 4 bytes with 4 bytes.

    2) Perhaps the hardware does either in exactly the same time.

    3) Perhaps the compiler actually promotes the bytes to integers for comparison, so they both amount to the same thing. Especially since during optimization it realizes these things are constant.

    4) Perhaps the compiler needs to insert code to do the char-to-int promotion, such as filling the higher-order bytes with zeros or ones (see the small sketch below)... this may take additional time. Or the machine may still provide 8-bit, 16-bit, and 32-bit forms of its instructions that can be called upon directly.

    The answer depends on what the compiler does/optimizes away and what the hardware is capable of.
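    To illustrate the "zeros or ones" part of point 4, here is a small sketch in plain C (the values are arbitrary): widening a signed char fills the upper bytes by sign extension, while widening an unsigned char fills them with zeros.
    Code:
    #include <stdio.h>

    int main(void)
    {
        signed char   sc = -1;      /* bit pattern 0xFF                    */
        unsigned char uc = 0xFF;    /* same bit pattern                    */

        /* Promotion to int fills the higher-order bytes differently: */
        printf("%d\n", (int)sc);    /* -1  : upper bytes filled with ones  */
        printf("%d\n", (int)uc);    /* 255 : upper bytes filled with zeros */
        return 0;
    }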

  12. #12
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    nonoob, your first and fourth points are contradictory. If the machine requires the compiler to fill higher order bytes, then comparing one byte will be slower than comparing 4.

    There are also factors like instruction pipelining, instruction caching, etc. that come into play. Modern microprocessors are not purely sequential devices: even if the programming model (e.g. machine language) suggests sequential execution, the processor may do things concurrently or even in a different order.
    Right 98% of the time, and don't care about the other 3%.

    If I seem grumpy or unhelpful in reply to you, or tell you you need to demonstrate more effort before you can expect help, it is likely you deserve it. Suck it up, Buttercup, and read this, this, and this before posting again.

  13. #13
    Registered User
    Join Date
    Oct 2008
    Posts
    1,262
    The answer to this is quite simple and I'm surprised it hasn't been stated this clearly before in this thread:
    Operations with integers are generally faster than with other types.
    This is because a plain int is meant to have the natural size suggested by the architecture (I believe the standard says as much; at the very least it is extremely common in practice). Processors are optimized for that data size and, yes, operations on it are usually faster.

    Of course, maybe the compiler does some magic to make them equally fast. And I assume the OP meant this as part of actual code, not as code standing on its own. Even if the compiler promotes the characters to integers, the widening would take more time than a plain copy of an int would.
    There are other factors, like caching, but I think the chances are next to none that they would matter at all in this case.

  14. #14
    ATH0 quzah's Avatar
    Join Date
    Oct 2001
    Posts
    14,826
    Type char is probably being promoted to an int anyway.
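    A small sketch of that promotion in plain C (assuming a typical implementation where int is wider than char): the operands of < undergo the integer promotions, which unary + makes visible through sizeof.
    Code:
    #include <stdio.h>

    int main(void)
    {
        char a = 'A';
        char b = 'T';

        /* Both operands of < are promoted to int before the comparison. */
        printf("sizeof(a)  = %zu\n", sizeof(a));    /* 1                       */
        printf("sizeof(+a) = %zu\n", sizeof(+a));   /* sizeof(int), e.g. 4     */
        printf("a < b      = %d\n", a < b);         /* 1: 'A' (65) < 'T' (84)  */
        return 0;
    }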


    Quzah.
    Hope is the first step on the road to disappointment.

  15. #15
    Registered User
    Join Date
    Sep 2008
    Location
    Toronto, Canada
    Posts
    1,834
    Quote Originally Posted by grumpy View Post
    nonoob, your first and fourth points are contradictory. If the machine requires the compiler to fill higher order bytes, then comparing one byte will be slower than comparing 4.
    No, not really. Sorry it came across that way. Point #1 was just a gut feeling that manipulating less data should be faster than fiddling with more bytes. That may not be how a machine actually handles things, since a 32-bit or 64-bit architecture may well favor word-sized access over single-byte access. That's why I broke that initial gut feeling down into the various compiler-optimization scenarios and machine-architecture considerations.

    It was just an exercise in "thinking out loud", which I would encourage everyone to do. Of course it helps to have a background in instruction timing, register sizes, and the nitty-gritty of machine language versus higher-level languages going back to the Z80.
