# int vs long

This is a discussion on int vs long within the C Programming forums, part of the General Programming Boards category.

1. ## int vs long

What's the difference between the int and the long? They have the same range, both are 32-bit, and both can be signed or unsigned? What differences are there between the two? If not prechosen, should an int or a long be used? By "prechosen", I mean for things like the biWidth element in Windows' bitmapinfoheader structure which is a long.

2. Portability, mainly. Also, an int has a guaranteed range of -32767 to 32767; a long has a guaranteed range of -2147483647 to 2147483647.

3. Originally Posted by Dave_Sinkula
Portability, mainly. Also, an int has a guaranteed range of -32767 to 32767; a long has a guaranteed range of -2147483647 to 2147483647.
Wouldn't that be from -32768 to 32767 instead (16 bit)? The same for the long - -2147483648 for the negative?

0xFFFF means -1 as far as I know and 0x8000 is -32768. The same for 32-bit.

Why is it then that the int is often regarded as being 32-bit? I read somewhere that the long on some systems is a 64-bit integer going to 18 4/9 quintillion.

4. Originally Posted by ulillillia
Wouldn't that be from -32768 to 32767 instead (16 bit)? The same for the long - -2147483648 for the negative?
No. The C language supports ones' complement, two's complement, and sign-magnitude representations. In ones' complement and sign-magnitude, the range is symmetric, so the standard can only guarantee down to -32767.

Originally Posted by ulillillia
0xFFFF means -1 as far as I know and 0x8000 is -32768. The same for 32-bit.
Signedness is related to the underlying representation.

Originally Posted by ulillillia
Why is it then that the int is often regarded as being 32-bit? I read somewhere that the long on some systems is a 64-bit integer going to 18 4/9 quintillion.
For the same reason that an int was once regarded as being 16-bit.

[edit] And that would have to be an unsigned 64-bit long long.
— maximum value for an object of type unsigned long long int
ULLONG_MAX 18446744073709551615 // 2^64 - 1

5. They have the same range, both are 32-bit,
This is true for most standard PCs. Smaller systems, however, may not follow it. I've worked on more than one architecture where ints are 16-bit and longs are 32-bit.

6. The standard just says that a short int must fit in an int, and an int must fit into a long int...

7. Originally Posted by zacs7
The standard just says that a short int must fit in an int, and an int must fit into a long int...
It also says that a short has a guaranteed range of between minus and plus 32767, and that a long has a guaranteed range of between minus and plus 2147483647. These imply that a short (and hence an int also) has at least 16 bits, and that a long has at least 32 bits.

8. So if the short and the int have the same guaranteed range, which should be used when you need a value from 0 to 10,000?

9. I believe int is typically the "natural" type for the CPU, so arithmetic should be faster using int (assuming short and int are different - on common hardware short is usually 16 bits and int 32 bits). On the other hand, if you are using a large array of them which is using up a large fraction of available memory, or your program is bottlenecked by memory access speed as opposed to CPU, then the smaller type is better. I'm not a hardware guru so anyone else feel free to correct or add to this.

10. Choosing an arithmetic type should be about range first, and byte size second. Given that the standard mandates a range of at least plus-or-minus 32,767 for int, a value of 10,000 is guaranteed to fit in a plain int (or even a short). Use a long int or size_t when the meaning of the value calls for it (the size of an array, for instance).

11. 10,000 < 32,000, so using a short int would be more efficient.
@ robatino, infact any 4-byte data type or less (which, in most cases, encompasses an int, short int, and char) can fall into your definition of "natural" for CPUs that contain 4-byte registers. As long as an operand can be contained in a single register (whether it uses all 4 bytes of the register or only 2) it might be considered "natural". But yes, when you start dealing with long ints and data types that span multiple registers or words, then operations may involve slightly more work, but almost insignificantly so.
Edit: above is true for unsigned types, but signed short types will require sign-extension which may produce more work than using int.

12. I think the following link is where I got the idea that CPUs normally worked with int internally if the actual type was smaller. Is it inaccurate/outdated?

http://www.eventhelix.com/RealtimeMa...%20and%20short

13. Originally Posted by ulillillia
So if the short and the int have the same guarenteed range, which should be used when you need a value from 0 to 10,000?
http://c-faq.com/decl/inttypes.html

14. I forgot about the work done for sign-extension, so it would appear accurate after all. So I guess unless you're using unsigned short ints, "short int vs. int" would fall into a "space vs. time" tradeoff.