Thread: sizeof int?

  1. #16
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    I don't really refer to the standard, either.
    LOL. What's a more authoritative source?

    Why should it be 64 bits or 32 bits?
    Because the C standard says so, and we are using C.

  2. #17
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
    If anything, I think that compiler vendors are bending to pressure to provide 32 bit ints to prevent breaking presumptuous legacy code, which is a shame. The standard *has*, after all, always said that the size of an int is implementation-defined. Don't punish us for the idiocy of others.

  3. #18
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Quote Originally Posted by cyberfish View Post
    LOL. What's a more authoritative source?
    Because the C standard says so, and we are using C.
    You originally said "should". Should is not must. And the standard does not specifically demand that it must be 64 bits.
    Therefore, I am asking you to explain why it should be 64 bits, beyond what the standard suggests.
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  4. #19
    Registered User
    Join Date
    Oct 2008
    Posts
    1,262
    Quote Originally Posted by Elysia View Post
    You originally said "should". Should is not must. And the standard does not specifically demand that it must be 64 bits.
    Therefore, I am asking you to explain why it should be 64 bits, beyond what the standard suggests.
    As cyberfish quoted earlier:

    5 An object declared as type signed char occupies the same amount of storage as a
    ‘‘plain’’ char object. A ‘‘plain’’ int object has the natural size suggested by the
    architecture of the execution environment (large enough to contain any value in the range
    INT_MIN to INT_MAX as defined in the header <limits.h>).
    There's nothing suggestive about that. A plain int object *HAS* the natural size that the creators of the architecture suggested.
    So if the creators of the architecture suggested a size of 1000 bytes, a plain int HAS a size of 1000 bytes. Not may have, not should have, but has.
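
    A minimal C sketch makes the point concrete: both sizeof(int) and the INT_MIN/INT_MAX range are implementation-defined, and a program can only report what its implementation chose (a typical x86 or x86-64 compiler prints 4 and the 32-bit limits):

    Code:
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Both values below are implementation-defined; a typical
           x86 or x86-64 compiler prints 4 and the 32-bit limits. */
        printf("sizeof(int) = %zu bytes\n", sizeof(int));
        printf("INT_MIN     = %d\n", INT_MIN);
        printf("INT_MAX     = %d\n", INT_MAX);
        return 0;
    }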

  5. #20
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    The standard says it MUST be "the natural size suggested by the architecture of the execution environment".

    But I only said an int "should" be 64-bit, not "must", because what natural size x86-64 suggests is arguably a subject of debate, even though all the conventions (the size of the general-purpose registers, the size of addresses/pointers, the native 64-bit instructions, the stack pushing and popping 8 bytes at a time) unanimously suggest 64-bit.
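
    A quick sketch shows how those conventions play out in practice. Assuming a typical LP64 x86-64 system (most Unix-likes; 64-bit Windows uses LLP64, where long stays at 4 bytes), it prints 4, 8, 8, with int alone lagging behind the 64-bit word:

    Code:
    #include <stdio.h>

    int main(void)
    {
        /* On a typical LP64 x86-64 system this prints 4, 8, 8:
           pointers and long follow the 64-bit machine word, int does not. */
        printf("sizeof(int)    = %zu\n", sizeof(int));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        return 0;
    }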

  6. #21
    C++ Witch laserlight's Avatar
    Join Date
    Oct 2003
    Location
    Singapore
    Posts
    28,413
    Quote Originally Posted by cyberfish
    We can equally well have 16-bit ints on x86-32, but does that make sense?
    That is an interesting question to ask. I am not sure how we could determine whether such a decision would be sensible. A programmer could just switch to long (or a typedef thereof) if a 32-bit integer type was needed.

    Quote Originally Posted by Sebastiani
    If anything, I think that compiler vendors are bending to pressure to provide 32 bit ints to prevent breaking presumptuous legacy code, which is a shame. The standard *has*, after all, always said that the size of an int is implementation-defined. Don't punish us for the idiocy of others.
    I think that presumptuous legacy code is a shame, but 32-bit ints might not be a shame. A progression of 16, 32 and 64 bits for short, int and long, as Java mandates, seems pretty neat to me.
    Quote Originally Posted by Bjarne Stroustrup (2000-10-14)
    I get maybe two dozen requests for help with some sort of programming or design problem every day. Most have more sense than to send me hundreds of lines of code. If they do, I ask them to find the smallest example that exhibits the problem and send me that. Mostly, they then find the error themselves. "Finding the smallest program that demonstrates the error" is a powerful debugging tool.
    Look up a C++ Reference and learn How To Ask Questions The Smart Way

  7. #22
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Practically speaking, I fail to see any real reason for int to be 64 bits. You want a 64-bit type? You always have the larger long.
    I haven't seen any practical reason why int must or should be 64 bits.
    And I'm not talking about what the standard says. If the standard says it must be 64 bits, then there must be a reason for that, and if so, I'd like to hear that reason. But as it stands, this is not true.
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  8. #23
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
    Quote Originally Posted by Elysia View Post
    Practically speaking, I fail to see any real reason for int to be 64 bits. You want a 64-bit type? You always have the larger long.
    I haven't seen any practical reason why int must or should be 64 bits.
    And I'm not talking about what the standard says. If the standard says it must be 64 bits, then there must be a reason for that, and if so, I'd like to hear that reason. But as it stands, this is not true.
    Think of it like this (perhaps a weak analogy): suppose you walked into a car dealership that, since its opening in 1945, has promised an A/C system installed at no extra charge as part of the standard package. As you step into your newly purchased SUV you realize that your "A/C system" is just a dash-mounted 1945-model fan. You'd probably be pretty disappointed, wouldn't you? Similarly, an int has historically been synonymous with the widest machine word available on an architecture, and as such it's only natural to expect it to be 64 bits on newer machines.

    At any rate, compiler vendors are simply ignoring the standard, which, like it or not, should be the guiding influence when deciding what features will (or won't) be supported by a given release, not the whim of some whiny group of PDP-11 programmers (or whatever) crying foul because their incorrectly written programs won't work anymore.

    Quote Originally Posted by laserlight View Post
    I think that presumptuous legacy code is a shame, but 32-bit ints might not be a shame. A progression of 16, 32 and 64 bits for short, int and long, as Java mandates, seems pretty neat to me.
    I agree. It would sure make writing portable programs much easier.

  9. #24
    Jack of many languages Dino's Avatar
    Join Date
    Nov 2007
    Location
    Chappell Hill, Texas
    Posts
    2,332
    To me, it just makes sense to keep an int at 4 bytes.
    Mainframe assembler programmer by trade. C coder when I can.

  10. #25
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    Quote Originally Posted by Elysia View Post
    Practically speaking, I fail to see any real reason for int to be 64 bits. You want a 64-bit type? You always have the larger long.
    Are you successful in seeing that an int should be 32-bit on 32-bit machines?

    Or do you find nothing wrong with having 16-bit ints on x86-32? After all, you want a 32-bit type? You always have the larger long.

    I like Sebastiani's analogy.

  11. #26
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    Quote Originally Posted by Dino View Post
    To me, it just makes sense to keep an int at 4 bytes.
    Care to elaborate?

    I bet it made sense to programmers during the 16-bit to 32-bit transition to keep an int at 2 bytes, too (too bad I started programming in the 32-bit age).

  12. #27
    Jack of many languages Dino's Avatar
    Join Date
    Nov 2007
    Location
    Chappell Hill, Texas
    Posts
    2,332
    Well, no, a 2-byte int "word" on PCs made no sense to me at all in the early days. My background is mainframes, and an int (a "word") has been 4 bytes on that platform since day one (the 1960s).

    An integer, as we all know it today on prevalent modern hardware architectures, is 4 bytes. Promoting it to 8 bytes, when there already is a long, makes no sense to me. It would break a lot of code and waste storage.

    When IBM went from 31-bit addressing to 64-bit addressing, they extended all 16 general-purpose registers from 4 bytes to 8 bytes. However, they left the original (31/32-bit) instructions intact, and provided all-new instructions (thus almost doubling the number of instructions on the processor) to exploit the 8-byte registers.

    When coding in assembler today, in 64-bit or 31-bit (or, for that matter, 24-bit) addressing mode, you can use the new 64-bit instructions or the original 31/32-bit instructions. The same set of 8-byte registers is used either way, but the instruction dictates where in the register the data goes or comes from.

    To make a long story short, the new 8-byte registers are defined as being in two halves: the 0-31 bit half (the left half) and the 32-63 bit half (the right half). If you use the original instructions, values are placed in, moved from, or acted upon in bits 32-63 of the 8-byte register.
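
    A rough C model of that behavior (purely illustrative, with made-up constants; the real semantics live in the z/Architecture instruction set, and IBM numbers bits from the left, so bits 32-63 are the low-order half):

    Code:
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t reg    = 0xAAAAAAAAAAAAAAAAULL; /* previous register contents */
        uint32_t result = 0x12345678U;           /* result of a legacy 32-bit op */

        /* A legacy instruction writes only bits 32-63 (the low-order
           half); the high-order half is left untouched. */
        reg = (reg & 0xFFFFFFFF00000000ULL) | result;

        printf("%016llx\n", (unsigned long long)reg); /* aaaaaaaa12345678 */
        return 0;
    }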

    If an int goes to 8 bytes, does a long go to 16? Does a short go to 4? Does a long long go to 32? Why upset the world?
    Mainframe assembler programmer by trade. C coder when I can.

  13. #28
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,229
    That's not the case with x86-64. x86-64 gets new 64-bit registers (the existing general-purpose registers are extended, EAX to RAX and so on, and eight brand-new 64-bit registers, R8-R15, are added). They are not "halves".

    In your view, then, what should determine the size of int? Should it always be 4 bytes everywhere on all machines?
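
    In practice, the mainstream 64-bit data models answered Dino's earlier question conservatively: none of them widen long to 16 bytes or short to 4; they disagree only about int, long, and pointers. A small sketch (the LP64/LLP64/ILP64 names are the usual terms for these models; the detection below is just a size-based heuristic) reports which model a compiler uses:

    Code:
    #include <stdio.h>

    int main(void)
    {
        size_t i = sizeof(int), l = sizeof(long), p = sizeof(void *);

        /* LP64: most 64-bit Unix-likes. LLP64: 64-bit Windows.
           ILP64: rare (e.g. some Cray systems). ILP32: typical 32-bit. */
        if (i == 8 && l == 8 && p == 8)
            puts("ILP64: int 64, long 64, pointer 64");
        else if (i == 4 && l == 8 && p == 8)
            puts("LP64: int 32, long 64, pointer 64");
        else if (i == 4 && l == 4 && p == 8)
            puts("LLP64: int 32, long 32, pointer 64");
        else if (i == 4 && l == 4 && p == 4)
            puts("ILP32: int 32, long 32, pointer 32");
        else
            puts("some other data model");
        return 0;
    }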

  14. #29
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
    Quote Originally Posted by Dino View Post
    If an int goes to 8 bytes, does a long go to 16? Does a short go to 4? Does a long long go to 32? Why upset the world?
    More importantly, why should "the world" write code on the assumption that any of those is a certain size? That's what portable solutions like typedefs are for (e.g. int16, int32, etc.). That's how I'm able to port *any* of my applications over to whatever architecture without breaking a single line of code. Hell, I don't even assume 8-bit bytes, for that matter!

    Write once, run anywhere. Sounds good to me.
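
    A minimal sketch of that approach, using Sebastiani's int16/int32 names as project-local aliases; since C99 the standard header <stdint.h> supplies exact-width types to build them from (before C99 the underlying typedefs had to be hand-picked per platform):

    Code:
    #include <stdint.h>
    #include <stdio.h>

    /* Project-local aliases in the spirit of int16/int32; the
       exact-width underlying types come from C99's <stdint.h>. */
    typedef int16_t int16;
    typedef int32_t int32;
    typedef int64_t int64;

    int main(void)
    {
        int32 x = 100000;        /* 32 bits on any conforming platform */
        int64 y = (int64)x * x;  /* widen before multiplying to avoid overflow */

        printf("sizes: %zu %zu %zu\n",
               sizeof(int16), sizeof(int32), sizeof(int64));
        printf("y = %lld\n", (long long)y);
        return 0;
    }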

  15. #30
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by Dino View Post
    To make a long story short, the new 8 byte registers are defined as being in two halves - the 0-31 bit half (the left half) and the 32-63 bit (right) half. If you use the original instructions, values are placed in or moved from or acted upon bits 32-63 of the 8 byte register.
    Hmm... this looks very much like the Z80 to me, in how it handled 8- and 16-bit instructions. Are you sure there are indeed only 64-bit registers, instead of 32-bit registers paired up?

    I'm thinking what you'd have there is a 32-bit processor with a 32-bit internal data bus and a 64-bit address bus. Most of the registers would be 32 bits, able to be paired for 64-bit data storage, with only a handful of true 64-bit registers (like the stack pointer). And you could still have two sets of instructions for 32-bit and 64-bit processing.

    Otherwise I find it an odd and, IMHO, inefficient choice to use 64-bit registers to store 32-bit data blocks.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.
