Note that I was not arguing it is practically better to have 64-bit ints. What I meant was that, going by the standard, int should be 64-bit. It may be practical to deviate from the standard in this case.
Brewbuck's way of determining the size of an int (a type large enough that programmers won't feel limited most of the time, yet doesn't waste too much space) disagrees with the standard (which says an int should be the natural size suggested by the execution environment, regardless of the amount of space wasted). I like brewbuck's definition better, though. It certainly makes life easier for programmers.
I think that is fair enough.
I think the "natural size" requirement is sufficiently vague as to not directly suggest 64-bit integers on a 64-bit platform. On 64-bit Intel chips which are backward-compatible all the way to 8086, one might argue that a byte is a "natural size" because the chip supports single-byte addressing and has methods for accessing 8-bit register values.
Originally Posted by cyberfish
In other words, just because an architecture supports 64-bit integers doesn't mean that 32-bit integers are any less "natural" on that architecture.