sizeof int?

This is a discussion on sizeof int? within the Tech Board forums, part of the Community Boards category.

  1. #31
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
I think what he means is that the old instructions simply act on a portion of an existing 64-bit register (although it would seem that it should be the lower word, not the higher). Just as you can access the lower 16 bits of eax on a 32-bit machine by referencing the 'ax' register, for instance.

    Or maybe I misunderstood you?

  2. #32
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,183
I thought what he meant was that 64-bit data needs to be stored in two independent registers, like how the x86 div instruction divides edx:eax by its operand. That's not true for x86-64, though.

  3. #33
    Jack of many languages Dino's Avatar
    Join Date
    Nov 2007
    Location
    Katy, Texas
    Posts
    2,309
    Quote Originally Posted by cyberfish View Post
That's not the case with x86-64. x86-64 gets new 64-bit registers (extended EAX-EDX to RAX-RDX, as well as a few brand new 64-bit ones). They are not "halves".

    In your view, then, what should determine the size of int? Should it always be 4 bytes everywhere on all machines?
    Yes, I think standardization is good. We're talking about a language that is used on many, many platforms.
    Mac and Windows cross platform programmer. Ruby lover.

    Quote of the Day
    12/20: Mario F.:I never was, am not, and never will be, one to shut up in the face of something I think is fundamentally wrong.

    Amen brother!

  4. #34
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,459
    Quote Originally Posted by Dino View Post
    Yes, I think standardization is good. We're talking about a language that is used on many, many platforms.
    But what can possibly be more portable than:
A "plain" int object has the natural size suggested by the
    architecture of the execution environment (large enough to contain any value in the range
    INT_MIN to INT_MAX as defined in the header <limits.h>)
If, on one hand, it can be argued that the developer is limited in that they cannot rely on the size of int, on the other hand it allows for the continuous development of newer architectures without having to introduce new built-in types, which would break portability at a fundamental level.

EDIT: Incidentally, did you check my question in the previous post? You've aroused my curiosity.
    Last edited by Mario F.; 09-14-2009 at 09:04 AM.
    The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.”
    The programmer comes home with 12 loaves of bread.


    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  5. #35
    Jack of many languages Dino's Avatar
    Join Date
    Nov 2007
    Location
    Katy, Texas
    Posts
    2,309
    Quote Originally Posted by Mario F. View Post
    Hmm... this looks very much like the Z80 to me in how it handled 8 and 16 bit instructions. Are you sure there are indeed only 64 bit registers instead of 32 bit registers paired up?

I'm thinking you'd have there a 32-bit processor with a 32-bit internal data bus and a 64-bit address bus. Most of the registers would be 32 bits that could be paired for 64-bit data storage, with only a handful of true 64-bit registers (like the stack pointer). And you could still have two sets of instructions for 32- and 64-bit processing.

Otherwise I find it odd, this (IMHO) inefficient choice of using 64-bit registers to store 32-bit data blocks.
    I'm not familiar with the Z80. I'm referring to IBM z/Architecture series (aka z/Series) processors.

    The register setup is certainly not inefficient, by any measure.

    Similar to Intel, on the IBM, there are several sets of registers available to the programmer:

    16 64-bit general purpose registers
    16 32-bit access registers
    16 64-bit control registers

I wasn't on the design team (as if) but I'm pretty sure the choice to keep the same set of registers and extend them on the left was an easy one, made to provide backward compatibility (so all those programs written back in the '60s, '70s and '80s, whose source code may or may not still exist, will still run on new hardware).

  6. #36
    Jack of many languages Dino's Avatar
    Join Date
    Nov 2007
    Location
    Katy, Texas
    Posts
    2,309
    Quote Originally Posted by Sebastiani View Post
I think what he means is that the old instructions simply act on a portion of an existing 64-bit register (although it would seem that it should be the lower word, not the higher). Just as you can access the lower 16 bits of eax on a 32-bit machine by referencing the 'ax' register, for instance.

    Or maybe I misunderstood you?
Yes, this is correct. At first, I too considered that it might be appropriate to have the original 32-bit half on the left, but the more I use them, the more it makes sense to me to put it on the right, and there are several new instructions for managing this too, like "clear high half" (the left half), "clear high (left) half of all registers", "load a 64-bit value from a 32-bit value already in the right half", etc. In the case of a negative value, the sign bit is propagated left. (These are not the real instruction names, but I express them this way for simplicity.)

  7. #37
    Jack of many languages Dino's Avatar
    Join Date
    Nov 2007
    Location
    Katy, Texas
    Posts
    2,309
    Quote Originally Posted by cyberfish View Post
I thought what he meant was that 64-bit data needs to be stored in two independent registers, like how the x86 div instruction divides edx:eax by its operand. That's not true for x86-64, though.
Nope - it's one 64-bit register that can be managed as 64 bits or 32 bits (or, as regards the right half, a byte at a time - like AL, AH, etc.).

  8. #38
    Captain Crash brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,239
64-bit ints would be stupid. Nobody normally needs numbers that large, and they take up twice as much memory and twice as much cache. In fact, in most places where people use a 32-bit integer they could easily get away with a 16-bit short or even a char.

I'd challenge anyone here to examine a piece of their own code and tell me that in all those places where you use "int", it would not be a pointless waste of space for that quantity to be 64 bits.
    Code:
    //try
    //{
    	if (a) do { f( b); } while(1);
    	else   do { f(!b); } while(1);
    //}

  9. #39
    C++まいる!Cをこわせ! Elysia's Avatar
    Join Date
    Oct 2007
    Posts
    22,587
There is still a lot of "oh, it's better this way" (car analogy) or "I like it this way", but still no practical reason why int should be 64 bits.
To the processor, 32 bits or 64 bits doesn't matter, so why bother?
Besides, with int being 4 bytes, we could have char (1), short (2), int (4), long (8), long long (16). That makes more sense to me, as we're able to use all kinds of sizes. Especially good where performance and memory footprint really and truly matter.
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  10. #40
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by brewbuck View Post
    64-bit ints would be stupid. Nobody needs numbers that large normally, they take up twice as much memory and twice as much cache. In fact, most places where people use a 32-bit integer they could easily get away with a 16-bit short or even a char.
Just in case anyone gets the wrong idea here (I did at a certain point; I still keep finding old code with lots of short int in it): using a short int saves memory but is slower for the processor.

So unless you are dealing with very restricted memory, using a short int for small numbers is bad practice. Use chars or ints.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  11. #41
    C++まいる!Cをこわせ! Elysia's Avatar
    Join Date
    Oct 2007
    Posts
    22,587
    Is it really slower to use a smaller data type?
The lower bits of the register would hold the variable's data and the upper bits would simply be 0. That doesn't suggest much of a penalty to me.
    Still, even if there was a penalty, how much would it be? 1 clock cycle or so?

  12. #42
    Malum in se abachler's Avatar
    Join Date
    Apr 2007
    Posts
    3,189
    Quote Originally Posted by Elysia View Post
    Besides, with int being 4 bytes, we could have char (1), short (2), int (4), long (8), long long (16).
or use the existing types int8, int16, int32, int64, and the as-yet-to-be-widely-implemented int128.

That is actually a feature I think SHOULD be added to the standard: defining that all compliant compilers must support integer types of any size when they are written as intN, and recognize those as keywords. Of course, the details of how the compilers did so would be left implementation-specific. A compiler for x86/x64 could use native register sizes, while a compiler for the 68HC11 would have to use some behind-the-scenes inline support functions. And of course they would all have to use some fancy math for int41.
    Last edited by abachler; 09-14-2009 at 11:05 AM.
    Until you can build a working general purpose reprogrammable computer out of basic components from radio shack, you are not fit to call yourself a programmer in my presence. This is cwhizard, signing off.

  13. #43
    C++まいる!Cをこわせ! Elysia's Avatar
    Join Date
    Oct 2007
    Posts
    22,587
Hmm? I do not know of such types in the standard. If they were there, that would be excellent news.
If they were to exist, the above argument would indeed be void.

  14. #44
    C++ Witch laserlight's Avatar
    Join Date
    Oct 2003
    Location
    Singapore
    Posts
    21,712
    Quote Originally Posted by Elysia
    Hmm? I do not know of such types implemented into the standard. If they were, then that would be excellent news.
    C99 only, I think. Plus the types have a "_t" suffix, and are not guaranteed to be available.

    EDIT:
    Quote Originally Posted by abachler
    That is actually a feature I think SHOULD be added to the standard, defining that all compliant compilers must support integer types of any size when they are written as intN and recognize those as keywords.
    Yeah, C99 only requires them to be available if integer types with those sizes are supported by the implementation.
    Last edited by laserlight; 09-14-2009 at 11:09 AM.
    C + C++ Compiler: MinGW port of GCC
    Version Control System: Bazaar

    Look up a C++ Reference and learn How To Ask Questions The Smart Way

  15. #45
    Captain Crash brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,239
    Quote Originally Posted by MK27 View Post
Just in case anyone gets the wrong idea here (I did at a certain point; I still keep finding old code with lots of short int in it): using a short int saves memory but is slower for the processor.

So unless you are dealing with very restricted memory, using a short int for small numbers is bad practice. Use chars or ints.
Yes, and I didn't really finish my entire thought in that post. I'm not arguing that a programmer should use the smallest integer type possible. In fact, I'm trying to argue the opposite: we need a generic, reasonably-sized type that we can use everywhere without having to think too hard about it. That type, in my mind, is a 32-bit integer. It gives a good trade-off between range and memory use in normal situations.

(Of course, a programmer should always think about the consequences of limited range, overflow, and underflow. My point is that these issues should normally be solvable with an everyday data type, and that the solution should also not be gross overkill in 99% of cases.)
