# How does my program know if a long long int is odd?


• 02-22-2013
Once-ler
How does my program know if a long long int is odd?
I would post code but I have no idea about how to implement this.
• 02-22-2013
grumpy
Look up the modulo (remainder on division) operator. Sometimes also described as the modulus operator.
• 02-22-2013
Barney McGrew
(j & 1) ===> j is odd.
• 02-22-2013
Once-ler
So %19999 !=0 == odd?
• 02-22-2013
grumpy
Quote:

Originally Posted by Once-ler
So %19999 !=0 == odd?

That is not even valid code. Try it with a compiler.
• 02-22-2013
Barney McGrew
Nah. You want to replace 19999 with 2. That will test whether "So" is odd.
• 02-22-2013
Once-ler
thanks y'all
• 02-22-2013
zoom
Like suggested above, use modulo. It's a very simple operator, like the little brother of division. Simply put, modulo gives back what is "left over" after dividing. The symbol for modulo is %. So:

3 % 2 = 1
2 % 2 = 0

Notice that 3 is odd and 2 is even, and they return 1 and 0 respectively when modulo 2 is applied. Informal intuitive proof (verify with mathematical induction or another technique if needed): an odd number has the form 2n + 1 and an even number has the form 2n, with n an element of the natural numbers. Then, modulo 2, odd yields 1 and even yields 0, respectively.
• 02-22-2013
GReaper
Quote:

Originally Posted by zoom
please verify with mathematical induction or other techniquie if needed

No need to, these are actually the definitions of odd and even numbers, as you said:
( Only difference, I include all integers )
Code:

```n ∈ ℤ
even = 2n
odd  = 2n + 1```
• 02-22-2013
GReaper
Adding to my prev post:
Note that you should prefer (n % 2) instead of (n & 1) when dealing with signed integers.
• 02-23-2013
Sorin
Quote:

Originally Posted by GReaper
Adding to my prev post:
Note that you should prefer (n % 2) instead of (n & 1) when dealing with signed integers.

Why is that? Since you're testing the LSB in the case of (n & 1), what difference does it make if a number is 3 or -3? Either way it's still odd, and having a 1 in bit 0 is still going to generate something not divisible by 2. If you're testing the MSB, then I can see the need for knowing signed/unsigned because that will affect the meaning of the MSB should it happen to be 1 (hypothetically that is, since it doesn't make sense to in our odd/even scenario).

In assembly class, we always had to keep performance and efficiency in mind. Now, I know this is irrelevant in 99% of today's computers and applications (and that we're talking about C), but I still like to think about it and be aware of it. As such, they told us that % was an expensive operation and that when considering performance, you should avoid it if you can, favoring bit manipulation techniques if it is reasonable to do so and still be understandable.

Not being a smartass; this is an honest question because I really don't know and would like to.
• 02-23-2013
Barney McGrew
n & 1 is fine for non-negative values, but it may fail with negative values if your implementation represents negative values with ones' complement (the other two representations permitted by the C standard, two's complement and sign-and-magnitude, will give you correct results for negative values). If you want correct results from implementations that use ones' complement, you can convert n to an unsigned type.
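A sketch of that conversion (the helper name is mine): conversion to an unsigned type is defined in terms of the value (modulo 2^N), not the bit pattern, so the bit test becomes portable.

```c
#include <stdbool.h>

/* Conversion to unsigned long long is value-based (modulo 2^N), not
   pattern-based, so the low bit of the converted value reliably
   reflects oddness even on a ones'-complement implementation. */
static bool is_odd_bits(long long n)
{
    return ((unsigned long long)n & 1ULL) != 0;
}
```

For example, is_odd_bits(-3) converts -3 to 2^64 - 3 (assuming a 64-bit unsigned long long), which is odd, so the test yields true.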
• 02-23-2013
grumpy
Quote:

Originally Posted by Sorin
Why is that? Since you're testing the LSB in the case of (n & 1), what difference does it make if a number is 3 or -3?

In contrast to what happens with unsigned types, the result of bitwise operations on signed values is implementation defined.

Quote:

Originally Posted by Sorin
Either way it's still odd, and having a 1 in bit 0 is still going to generate something not divisible by 2.

Not true. There is, for example, a possibility that a value of 1 is not represented by only one activated bit in a signed integral type. There is also no guarantee that bit 0 is not the sign bit.

Quote:

Originally Posted by Sorin
In assembly class, we always had to keep performance and efficiency in mind. Now, I know this is irrelevant in 99% of today's computers and applications (and that we're talking about C), but I still like to think about it and be aware of it.

When programming with assembly you make specific assumptions about all the basic types, about the operations that act on them, etc. In particular, you can employ knowledge about the layout of bits in integral types.

Those assumptions are why assembly code is not always portable between target machine architectures.

But C is not assembly. It leaves such things implementation defined, for the signed integral types, because such things can vary between hardware, operating systems, and compilers. So, for a signed integer, you cannot rely on the equivalence of bit fiddling operations to mathematical operations.

(The situation is different for unsigned types, since the standard does mandate equivalences. Although the number of real-world programmers who write "efficient" code that gives an incorrect result is still astounding).

Quote:

Originally Posted by Sorin
As such, they told us that % was an expensive operation and that when considering performance, you should avoid it if you can, favoring bit manipulation techniques if it is reasonable to do so and still be understandable.

Again, C is not assembly. The developers of modern C compilers - and the optimisers for those compilers - invest considerable effort in understanding which operations on the target system(s) give equivalent results, and at what relative cost.

That means, if you want to test if a value is odd, use modulo 2. If the compiler can recognise the value 2 as a compile-time constant in your code, it is a fair bet that the compiler will emit the best instructions to complete the job. Even better, depending on compilation and optimisation settings, the compiler can make trade-offs to suit different requirements (e.g. optimising for speed, optimising to exploit instruction pipelines, optimising for number of instructions, etc.).