The speed of comparing a vs b and x vs y.
Code:
char a = 'A'; char b = 'T'; a < b ? 0 : 1;
int x = 0; int y = 1; x < y ? 0 : 1;
Which is faster?
Why?
Well, compile and run it for yourself! It'll need some timer logic, of course...
Code:
#include <cmath>
#include <complex>

// Euler's identity in action: e^(i*pi) = -1 and e^(i*2*pi) = 1.
// atan(1) is pi/4, so the shift yields pi for false and 2*pi for true;
// the real part is negative exactly when the input was false.
bool euler_flip(bool value)
{
    return std::pow(
        std::complex<float>(std::exp(1.0)),
        std::complex<float>(0, 1)
            * std::complex<float>(std::atan(1.0) * (1 << (value + 2)))
    ).real() < 0;
}
The answer is actually "it depends". I'll leave working out why as an exercise.
Depends on the CPU.
If you know which CPU it's for, you can write it out in assembly, lookup how long each instruction takes in the CPU's manual, and you'll know the time.
#define char int
Now they're the same.
Quzah.
Hope is the first step on the road to disappointment.
There's no difference whatsoever; they both do nothing and would both be optimised down to nothing for three reasons:
The result of the ternary operator is not used
There are no side-effects
A decent compiler should work out that the result of the ternary operator is constant
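That third point can be checked directly: marking the expressions constexpr forces the compiler to fold them at compile time, which is exactly what it would do silently here anyway (a sketch, not a proof about any particular compiler):

```cpp
// Both operands and both results are compile-time constants,
// so each ternary folds to the literal 0 with no runtime work at all.
constexpr char a = 'A', b = 'T';
constexpr int  x = 0,   y = 1;
constexpr int char_result = (a < b) ? 0 : 1;
constexpr int int_result  = (x < y) ? 0 : 1;
static_assert(char_result == 0, "folded at compile time");
static_assert(int_result  == 0, "folded at compile time");
```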
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
All problems in computer science can be solved by another level of indirection,
except for the problem of too many layers of indirection.
– David J. Wheeler
Maybe. The most common reason I see for people asking about variable types and performance in the same sentence is that they are practicing premature optimisation.
Software architecture, one of those things too many coders ignore, can determine performance. The choice of a good versus a bad algorithm is also often a key determinant of performance. And a profiler can help identify the real hot spots and opportunities. Compared with those, choice of compiler is often a micro-optimisation: even the best compiler can only do so much to compensate for bad code.
That's not a data structure. So none of them...
I would try to reason it out this way:
1) Comparing one byte with another ought to be faster than comparing 4 bytes with 4 bytes.
2) Perhaps the hardware does either in exactly the same time.
3) Perhaps the compiler actually promotes the bytes to integers for comparison, so they both amount to the same thing. Especially since during optimization it realizes these things are constant.
4) Perhaps the compiler needs to insert code to do the char-to-int promotion, such as filling higher order bytes with zeros or ones... this may take additional time. Or the machine could still have vestiges of 8-bit, 16-bit, 32-bit operations that can be called upon.
The answer depends on what the compiler does/optimizes away and what the hardware is capable of.
nonoob, your first and fourth points are contradictory. If the machine requires the compiler to fill higher order bytes, then comparing one byte will be slower than comparing 4.
There are also factors like instruction pipelining, instruction caching, etc that come into play. Modern microprocessors are not purely sequential devices: even if the programming model (eg machine language) suggests sequential instructions, the processor may do things concurrently or even in a different order.
The answer to this is quite simple and I'm surprised it hasn't been stated this clearly before in this thread:
Operations with integers are generally faster than with other types.
This is because an int is meant to have the architecture's natural word size (I think the standard says as much; otherwise it is at least extremely common). Processors are usually optimized for that data size and, yes, operations on it are usually faster.
Of course, maybe the compiler does some magic to make them equally fast. And I assume the OP meant this as part of actual code, not as standalone code. Even if the compiler widens the characters to integers, the widening would take more time than a plain copy.
There are other factors, like caching, but I think the chances are next to none that they would matter at all in this case.
No, not really. Sorry it came across that way. Point #1 was just a gut feeling that manipulating less data should be faster than fiddling with more bytes. That may not be how a machine handles things: a 32-bit or 64-bit architecture may actually favor aggregate word access over single-byte access. That's why I broke that initial feeling down into various compiler-optimization scenarios and machine-architecture considerations.
It was just an exercise in "thinking out loud", which I would encourage everyone to do. Of course it helps to have a background in studying instruction timing, register sizes, and the nitty-gritty of machine language vs. higher-level languages going back to the Z80.
Type char is probably being promoted to an int anyway.
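That promotion can be demonstrated with decltype: under the usual arithmetic conversions, char operands are promoted to int before arithmetic and comparison (a small illustration, not proof of what any particular compiler emits):

```cpp
#include <type_traits>

char a = 'A';
char b = 'T';

// Integer promotion: the result type of a + b is int, not char.
static_assert(std::is_same<decltype(a + b), int>::value,
              "char operands promote to int");

// In C++ a comparison yields bool (in C it yields int),
// but the operands are still promoted to int first.
static_assert(std::is_same<decltype(a < b), bool>::value,
              "comparison yields bool");
```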
Quzah.
Hope is the first step on the road to disappointment.