I'm not sure if the standard says that, but I know it's not true in practice. It's pretty common for compilers targeting 16-bit architectures to use 32-bit integers.
No, I don't believe so. The only thing the standard specifies regarding size is the minimum range each type must be able to represent (the minimum magnitudes for the limits in <limits.h>). That in turn gives an "at least" width for each type: char must be at least 8 bits wide, short must be at least 16 bits wide, and so on.
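For instance, something like this (a minimal sketch - the printed values are whatever your implementation happens to provide, while the comments list the minimums the standard guarantees):

Code:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The standard only guarantees minimums, for example:
     *   CHAR_BIT >= 8
     *   SHRT_MAX >= 32767        (short at least 16 bits)
     *   INT_MAX  >= 32767        (int at least 16 bits)
     *   LONG_MAX >= 2147483647   (long at least 32 bits)
     */
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("SHRT_MAX = %d\n", SHRT_MAX);
    printf("INT_MAX  = %d\n", INT_MAX);
    printf("LONG_MAX = %ld\n", LONG_MAX);
    return 0;
}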
Quzah.
Additionally, most programs use integer values that easily fit into a 16-bit variable, let alone a 32-bit one. Having the integer type default to 64 bits would, in most cases, just be a huge waste of space.
People failing to use the optimal data type isn't the fault of the language, however.
Quzah.
The standard does not say so, no. It gives a minimum size for int and a minimum size for long. But both can be the same size, and both can be smaller than a pointer - that is exactly what you get with an MS compiler for x86-64. On the other hand, Linux compilers for x86-32 and x86-64 make "long" the same size as a pointer.
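If you want to check your own setup, a trivial test like this will do (the figures in the comment are just the typical results on those platforms, not something the standard promises):

Code:
#include <stdio.h>

int main(void)
{
    /* Typical results (implementation-defined, not guaranteed):
     *   MSVC, x86-64:  int = 4, long = 4, void * = 8  (long smaller than a pointer)
     *   Linux, x86-64: int = 4, long = 8, void * = 8  (long matches the pointer)
     */
    printf("int    : %u bytes\n", (unsigned)sizeof(int));
    printf("long   : %u bytes\n", (unsigned)sizeof(long));
    printf("void * : %u bytes\n", (unsigned)sizeof(void *));
    return 0;
}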
The size of int is determined by several things. It's normally a "natural" size for the processor. On x86-64, the "natural" size is 32 bits, since that gives the shortest code form: using 64-bit integers requires at least a size prefix on the instruction, and setting a 64-bit integer is (sometimes) 4 bytes longer than the 32-bit version of the same instruction. The data itself is of course also twice as large, so any array of int would double in size - which hurts memory use and cache performance, since fewer values fit in the same amount of memory, and it takes twice as long to process even if the instructions themselves are identical in time. And for most things, as long as int can hold values up to a few million, everything is fine. Making a bigger integer will not improve anything; it will just take up more space.
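To put a number on the memory side of that (the array length here is just an arbitrary example):

Code:
#include <stdio.h>
#include <stdint.h>

#define N 1000000  /* arbitrary example: one million values */

int main(void)
{
    /* The same count of values needs twice the memory at 64 bits,
     * so half as many of them fit in any given amount of cache. */
    printf("%d x int32_t: %lu bytes\n", N, (unsigned long)(N * sizeof(int32_t)));
    printf("%d x int64_t: %lu bytes\n", N, (unsigned long)(N * sizeof(int64_t)));
    return 0;
}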
I think this is about the 10th time I'm writing this in some thread.
--
Mats