# Thread: What's the difference between pow() vs shift?

1. ## What's the difference between pow() vs shift?

On one of the local BBSes here in Berkeley, a biochemist from Oxford mentioned that it is usually better to shift than to use pow(). He said this was because of how some computers handle floating point operations. I asked him to clarify his comment, but he didn't respond. I then tried to search for the answer on Google Groups. All I found was posts in comp.lang.c that said
"Use shift and not pow()."

Can someone please explain why it's more desirable to shift than to use pow() in some cases?

2. It's faster, that's all.
(256 / 2) << 1 == 256.
So, if you have 128 and shift it left 1 bit, you get 256, the same as if you had multiplied by two.
I'm not sure how they linked it to pow(), but raising to an integer power is basically a loop of multiplications.
Shifting left by n bits is multiplication by a power of two, 2^n.

3. << compiles to a single assembler instruction.
pow() is a complex function with long, involved code that takes far more time to execute.

4. Here is the quote from the BBS that mystifies me...

Code:
```
Don't use exponential functions.  Ew ew ew.  Use shift and bit test operators: then you won't end up
with weird things as you convert to floating point representation and back to integers.
```

5. Well, it's all about how floating point operations are performed in the actual hardware, since it's all bits n' bytes. I really don't suggest you worry about it, though, since it requires knowledge of the bitness of the processor, how floating point variables work, and so on.
It's an advanced topic, but one you can learn for certain optimization techniques.

6. You can represent multiplication as a series of shifts and adds. For example, to multiply by binary 1010, we add two copies of the other number: one shifted left once (for the 1 in the 2^1 position) and one shifted left three times (for the 1 in the 2^3 position). So pow(10,2), i.e. 1010 * 1010 in binary, is 10100 + 1010000 = 1100100 (which represents 100).
We have to do one add for each one-bit in the number we're multiplying by.

The problem here is that overflow can happen pretty easily, and you have to be watching for it yourself. If MAXINT is 2 billion and change, then we can get up to pow(1290,3), pow(215,4), pow(73,5), etc. But if we know we're not going to overflow, we save a conversion to floating point, which saves time and preserves precision. (In this case, conversion to double should be okay if slow, since every 32-bit int is exactly representable in the 53 bits of precision of a double. If we had a 64-bit int, though, we'd be in trouble.)

I don't know much about compiler construction, but if you wrote a for loop, or even i*i*i or whatever, you'd probably get something pretty close to the code you'd get with the shifts and adds (I can't imagine that MUL in assembler, even though it handles overflow properly, would be that much slower), without confusing yourself.

Odds are, all this is irrelevant, depending on what you (or they, or whoever) are actually trying to accomplish. It's not clear what the original context was, so this is all I can say.

7. It's like vart said.
Here's the pow() function that glibc uses:
http://www.koders.com/c/fid0B3EC156D...D3ED12371.aspx
And they're even bold enough to call it the ultimate pow function, as optimized as possible for IEEE 754 floating point numbers. Converting the result from floating point to integer also takes some time.
math.h routines like sqrt(), sin(), cos(), pow(), log() and so on should be used sparingly because of the relatively huge difference in execution time compared to arithmetic operations and other things elementary to the computer.
As for the weird things about converting from floating point to integer, it's probably the notion that floating point numbers are inexact approximations and might not convert back to the integer you wanted, even if the math is correct.

8. Originally Posted by Elysia
Well, it's all about how floating point operations are performed in the actual hardware, since it's all bits n' bytes. I really don't suggest you worry about it, though, since it requires knowledge of the bitness of the processor, how floating point variables work, and so on.
It's an advanced topic, but one you can learn for certain optimization techniques.
Can you please give me the 'technical explanation'? There really aren't many advanced topics that fill me with fear and awe.

You'll have to turn to someone else for that; this part I don't know.

10. If you want an example of the "weird" things that can happen with pow(), check out the thread about "converting string to int" in this very board. There must have been something in the air today.