"Why should it be the 64 bits or 32 bits?"

Because the C standard says so, and we are using C.

"I don't really refer to the standard, either."

LOL. What's a more authoritative source?
If anything, I think that compiler vendors are bending to pressure to provide 32 bit ints to prevent breaking presumptuous legacy code, which is a shame. The standard *has*, after all, always said that the size of an int is implementation-defined. Don't punish us for the idiocy of others.
As cyberfish quoted before:

An object declared as type signed char occupies the same amount of storage as a ‘‘plain’’ char object. A ‘‘plain’’ int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>).

There's nothing suggestive about that. A plain int object *HAS* the natural size that the creators of the architecture suggested.
So if the creators of the architecture suggested it has 1000 bytes, a plain int HAS the size of 1000 bytes. Not may have, not should have, but has.
The standard says it MUST be "the natural size suggested by the architecture of the execution environment".
But I only said an int "should" be 64-bit, not "must", because what natural size x86-64 suggests is possibly a subject of debate, even though all the conventions (size of general purpose registers, size of addresses/pointers, native 64-bit instructions, stack pushes and pops that move 8 bytes at a time) unanimously suggest 64-bit.
That is an interesting question to ask. I am not sure how we can determine if such a decision was sensible. A programmer could just switch to long (or a typedef thereof) if a 32-bit integer type was needed.
I think that presumptuous legacy code is a shame, but 32-bit ints might not be a shame. A progression of 16, 32 and 64 bits for short, int and long like what Java mandates seems pretty neat to me.
Look up a C++ Reference and learn How To Ask Questions The Smart Way
I fail to see practically why there is any real reason for int to be 64 bits. You want a 64-bit type? You always have the larger long.
I haven't seen any practical reason why int must or should be 64 bits.
And I'm not talking about what the standard says. If the standard says it must be 64 bits, then there must be a reason for that, and if so, I'd like to hear that reason. But as it stands, this is not true.
"I fail to see practically why there is any real reason for int to be 64 bits. You want a 64-bit type? You always have the larger long. I haven't seen any practical reason why int must or should be 64 bits. And I'm not talking about what the standard says. If the standard says it must be 64 bits, then there must be a reason for that, and if so, I'd like to hear that reason."

Think of it like this (perhaps a weak analogy): suppose you walked into a car dealership that since its opening in 1945 has promised an A/C system installed at no extra charge as part of the standard package. As you step into your newly purchased SUV you realize that your "A/C system" is just a dash-mounted 1945-model fan. You'd probably be pretty disappointed, wouldn't you? Similarly, an int has historically been synonymous with the widest machine word available on an architecture, and as such it's only natural to expect it to be 64 bits on newer machines.
At any rate, compiler vendors are simply ignoring the standard, which, like it or not, should be the guiding influence when deciding what features will (or won't) be supported by a given release - not the whim of some whiny group of PDP-11 programmers (or whatever) crying foul because their incorrectly-written programs won't work anymore.
"I think that presumptuous legacy code is a shame, but 32-bit ints might not be a shame. A progression of 16, 32 and 64 bits for short, int and long like what Java mandates seems pretty neat to me."

I agree. It would sure make writing portable programs much easier.
To me, it just makes sense to keep an int at 4 bytes.
Mainframe assembler programmer by trade. C coder when I can.
"I fail to see practically why there is any real reason for int to be 64 bits. You want a 64-bit type? You always have the larger long."

Are you successful in seeing that an int should be 32-bit on 32-bit machines?
Or do you find nothing wrong with having 16-bit ints on x86-32? After all, you want a 32-bit type? You always have the larger long.
I like Sebastiani's analogy.
Well, no, a 2 byte int "word" on PCs made no sense to me at all in the early days. My background is mainframes, and an int (a "word") has been 4 bytes on that platform since day one (the 1960s).
An integer, as we all know it today, on prevalent modern hardware architectures, is 4 bytes. Promoting it to 8 bytes, when there already is a long, makes no sense to me. It would break a lot of code and waste storage.
When IBM went from 31 bit addressing to 64 bit addressing, they extended all 16 general purpose registers from 4 bytes to 8 bytes. However, they left the original (31/32 bit) instructions intact, and provided all new instructions (thus almost doubling the number of instructions on the processor) to exploit 8 byte registers.
When coding in assembler today, in 64-bit, 31-bit or (for that matter) 24-bit addressing mode, you can use the new 64-bit instructions or the original 31/32 bit instructions. The same set of 8 byte registers is used either way, but the instruction dictates where in the register the data goes or comes from.
To make a long story short, the new 8 byte registers are defined as being in two halves - the 0-31 bit half (the left half) and the 32-63 bit (right) half. If you use the original instructions, values are placed in or moved from or acted upon bits 32-63 of the 8 byte register.
If an int goes to 8 bytes, does a long go to 16? Does a short go to 4? Does a long long go to 32? Why upset the world?
That's not the case with x86-64. x86-64 gets new 64-bit registers: the existing 32-bit registers are extended (EAX to RAX, EBX to RBX, and so on), plus eight brand new 64-bit ones (R8-R15). They are not "halves".
In your view, then, what should determine the size of int? Should it always be 4 bytes everywhere on all machines?
"If an int goes to 8 bytes, does a long go to 16? Does a short go to 4? Does a long long go to 32? Why upset the world?"

More importantly, why should "the world" write code on the assumption that any of those be a certain size? That's what portable solutions like typedefs are for (eg: int16, int32, etc). That's how I'm able to port *any* of my applications to whatever architecture without breaking a single line of code. Hell, I don't even assume 8-bit bytes, for that matter!
Write once, run anywhere. Sounds good to me.
Hmm... this looks very much like the Z80 to me in how it handled 8 and 16 bit instructions. Are you sure there are indeed only 64 bit registers instead of 32 bit registers paired up?
I'm thinking you'd have there a 32 bit processor with a 32 bit internal databus and a 64 bit address bus. Most of the registers would be 32 bits that could be paired for 64 bit data storage, and only a handful true 64 bit registers (like the stack pointer). And you can still have two sets of instructions for 32 and 64 bit processing.
Otherwise I find it odd to make this -- IMHO -- inefficient choice of using 64 bit registers to store 32 bit data blocks.
Originally Posted by brewbuck:
Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.