Thread: How to get data type sizes independent of the system?

  1. #1
    Registered User
    Join Date
    Feb 2008
    Posts
    1

How to get data type sizes independent of the system?

    Hello!!

    I'm writing functions for different systems (Unix, Solaris, microcontrollers, ...) and I need to use data types with constant sizes, independent of the system. I've heard there are some preprocessor directives to force these sizes.

    More exactly, I'm looking for:

    int --> 4 bytes
    short --> 2 bytes
    float --> 4 bytes
    double --> 8 bytes
    (char is always 1 byte on any ASCII-compatible system)

    Thanks so much!!

    Pablo, from Barcelona

  2. #2
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    So, you will need to define your own types, e.g.
    Code:
    typedef int int32;
    typedef short int int16;
    And then where these types do not match your particular criteria, you will need some sort of #if solution to declare the appropriate type, e.g.
    Code:
    #if __X64__
    typedef long int int64;
    #else
    typedef long long int int64;
    #endif
    I would also add a piece of code like this (somewhere in the early part of the code, e.g. in main):
    Code:
    if (sizeof(int32) != 4) { printf("sizeof(int32) is not 4, it is %d\n", (int)sizeof(int32)); }
    That way, if the sizes do get messed up, you know about it, rather than trying to figure out why everything behaves a bit strangely.
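
    For example, here is a minimal self-contained sketch of the whole approach (the typedef choices are assumptions that must be adjusted per target):
    Code:
    #include <stdio.h>

    /* assumed typedefs -- adjust per platform as described above */
    typedef short int int16;
    typedef int       int32;

    int main(void)
    {
        /* startup sanity checks: complain if an assumption is wrong */
        if (sizeof(int16) != 2)
            printf("sizeof(int16) is not 2, it is %d\n", (int)sizeof(int16));
        if (sizeof(int32) != 4)
            printf("sizeof(int32) is not 4, it is %d\n", (int)sizeof(int32));
        return 0;
    }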

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  3. #3
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    Quote Originally Posted by matsp View Post
    I would also add a piece of code like this (somewhere in the early part of the code, e.g. in main):
    Code:
    if (sizeof(int32) != 4) { printf("sizeof(int32) is not 4, it is %d\n", (int)sizeof(int32)); }
    On the philosophy of "make an error occur at compile time rather than at run time", I wouldn't do that. For example, for an int16 type, use the following sort of logic.
    Code:
    #include <limits.h>
    #if SHRT_MAX == 32767 && (SHRT_MIN == -32767 || SHRT_MIN == -32768)
    typedef short int16;
    #elif 0 /* do other tests to find an appropriate int16 type */
    #else
    /* trigger a compiler error if we cannot find a suitable int16 type */
    #error short is not 2 8-bit bytes
    #endif
    The macros SHRT_MIN and SHRT_MAX specify the minimum and maximum value that can be stored in a short, respectively. For long the corresponding macros are LONG_MIN and LONG_MAX, and for long long LLONG_MIN and LLONG_MAX. For unsigned types, the macros to check are USHRT_MAX, ULONG_MAX, and ULLONG_MAX (no corresponding _MIN macros as, for unsigned types, the minimum value is zero).
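
    The same pattern extends to wider types; for example, a sketch that picks a 32-bit signed type (reusing the int32 name from matsp's earlier post):
    Code:
    #include <limits.h>

    #if INT_MAX == 2147483647
    typedef int int32;          /* int has exactly the 32-bit range */
    #elif LONG_MAX == 2147483647
    typedef long int32;         /* fall back to long, e.g. on 16-bit int targets */
    #else
    #error no 32-bit integer type found
    #endif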

    In general, it is not possible to force the size of short/int/long/long long to particular values via the preprocessor. It is, with only a few compilers, possible to select sizes via command line options or other compiler-configuration options. Hence I agree with providing a set of typedefs for types of required sizes.

    I'll leave floating point types alone, as they are trickier, but the macros for checking the properties of floating point types are in <float.h>. Keep in mind that there is at least one machine where a float is 8 bytes and a double is 16 (that machine does not support 4-byte floating point at all). And sizeof() is the least of your worries with floating point types -- they have exponent and mantissa fields, whose widths also vary between implementations.
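
    If you do want a compile-time check anyway, <float.h> supports the same style of test. A sketch that tries to detect the common IEEE 754 layouts (24/128 and 53/1024 are the significand size and maximum binary exponent of IEEE single and double precision):
    Code:
    #include <float.h>

    #if FLT_RADIX == 2 && FLT_MANT_DIG == 24 && FLT_MAX_EXP == 128
    typedef float float32;     /* looks like a 4-byte IEEE 754 single */
    #else
    #error float does not look like IEEE 754 single precision
    #endif

    #if FLT_RADIX == 2 && DBL_MANT_DIG == 53 && DBL_MAX_EXP == 1024
    typedef double float64;    /* looks like an 8-byte IEEE 754 double */
    #else
    #error double does not look like IEEE 754 double precision
    #endif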

  4. #4
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Grumpy,

    Those are good suggestions. I would still use sizeof() to VERIFY that the size is what you think, because the value range of a short being right doesn't ABSOLUTELY guarantee that its size is right [although I admit that would be a strange and obscure case]. And unfortunately, you can't use sizeof() in preprocessor code, so you can't write #if sizeof(int) ...
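
    There is a well-known workaround, though: sizeof() is valid in a constant expression, so you can make the compiler reject a wrong size by declaring an array type whose size goes negative when the check fails. A sketch (the typedef names are just illustrative):
    Code:
    typedef int int32;   /* the type being checked, from the earlier example */

    /* compiles only if sizeof(int32) == 4; otherwise the array size is -1
       and the compiler reports an error at this line */
    typedef char assert_int32_is_4_bytes[(sizeof(int32) == 4) ? 1 : -1];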

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  5. #5
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    I would tend to query that, matsp.

    The range of values that can be stored in a variable correlates directly with its size (and correlates exactly on all real-world computing devices based on binary representations). The range [-32767, 32767] (give or take one at one end, but that's only a minor wrinkle) is the range that can be represented using two 8-bit characters; i.e. if we assume a char is 8 bits, sizeof(short) will be 2 if short can represent exactly that range.

    In practice, when deciding what size of integer to use, the most common considerations are memory usage, the speed of operations on the type, and the range of values to be stored. You would not try to use a 16-bit signed integer to represent 40000, for example.
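
    To make the arithmetic concrete (a sketch; the printed values are whatever your platform's <limits.h> defines):
    Code:
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* 2 bytes * 8 bits = 16 bits -> 2^16 = 65536 bit patterns,
           so a signed 16-bit type tops out at 32767 */
        printf("short: [%d, %d], sizeof = %d\n",
               SHRT_MIN, SHRT_MAX, (int)sizeof(short));
        return 0;
    }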

  6. #6
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Agreed, that's why I said it would probably be an obscure special case. I do know that there are machines where a 16-bit integer isn't directly usable (e.g. MIPS), but I expect it would still be stored in 16 bits; it would just be awkward (in the machine language) to extract those 16 bits from the enclosing 32-bit word.

    However, I will still defend the "make sure it's right" using sizeof(). This of course depends on exactly why one would like to have 4-byte (32-bit) integers, for example.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  7. #7

  8. #8
    uint64_t...think positive xuftugulus
    Join Date
    Feb 2008
    Location
    Pacem
    Posts
    355
    As rowbhit implied with the link, using the <stdint.h> header is a good habit when you need to move code between processors. I no longer remember the plain int type in C... int32_t comes to mind the moment I think of a number. :P
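    For example, a minimal sketch (both <stdint.h> and the PRId32 format macro from <inttypes.h> are C99, so very old compilers may lack them):
    Code:
    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t counter = 40000;  /* exactly 32 bits on any platform providing it */
        uint8_t flags   = 0xFF;   /* exactly 8 bits */

        /* PRId32 expands to the correct printf conversion for int32_t */
        printf("counter = %" PRId32 ", flags = %u\n", counter, (unsigned)flags);
        return 0;
    }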
    Code:
    ...
        goto johny_walker_red_label;
    johny_walker_blue_label: exit(-149$);
    johny_walker_red_label : exit( -22$);
    A typical example of ...cheap programming practices.

  9. #9
    Code Goddess Prelude
    Join Date
    Sep 2001
    Posts
    9,897
    >char is always 1 byte on any ASCII-compatible system
    Actually, ASCII has nothing to do with it. char isn't required to be 1 byte (presumably you're thinking of an octet) even on an "ASCII compatible system", but sizeof(char) is guaranteed to be 1.

    Fortunately, you can safely assume the minimum range of each type and still be portable. For example:

    char - 8 bits
    short - 16 bits
    int - 16 bits
    long - 32 bits
    float - 1e[+-]37
    double - 1e[+-]37
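
    For instance, if you rely only on those guaranteed minimums, the choice of type follows from the values you need to store (a sketch):
    Code:
    #include <stdio.h>

    int main(void)
    {
        /* 40000 exceeds int's guaranteed 16-bit range,
           so portable code stores it in a long (at least 32 bits) */
        long population = 40000L;

        /* 30000 fits in the guaranteed range of int on every system */
        int count = 30000;

        printf("population = %ld, count = %d\n", population, count);
        return 0;
    }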

    I'm curious why you think you need exact sizes though.
    My best code is written with the delete key.

  10. #10
    Registered User
    Join Date
    Sep 2006
    Posts
    835
    Quote Originally Posted by Prelude View Post
    >char is always 1 byte on any ASCII-compatible system
    Actually, ASCII has nothing to do with it. char isn't required to be 1 byte (presumably you're thinking of an octet) even on an "ASCII compatible system", but sizeof(char) is guaranteed to be 1.
    My understanding was that in the context of C/C++, a byte by definition was whatever the size of a char is (so one can say that sizeof() returns the size in bytes, with sizeof(char) always being 1, but a C/C++ byte isn't necessarily 8 bits).

  11. #11
    Code Goddess Prelude
    Join Date
    Sep 2001
    Posts
    9,897
    >in the context of C/C++, a byte by definition was whatever the size of a char is
    Correct. Byte and char are more or less interchangeable terms in C/C++. Since my statement seems to be causing confusion, I'll rephrase it:
    Quote Originally Posted by Prelude
    Actually, ASCII has nothing to do with it. char doesn't have to be an 8-bit type even on an "ASCII compatible system", but sizeof(char) is still guaranteed to be 1. I think you're confusing the C concept of a byte with the typical PC terminology that specifies an 8-bit quantity (an octet).
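    The number of bits in a char is available as the CHAR_BIT macro, so code that depends on 8-bit chars can check the assumption directly (a sketch):
    Code:
    #include <limits.h>

    /* CHAR_BIT is at least 8, but is larger on some DSPs
       and word-addressed machines */
    #if CHAR_BIT != 8
    #error this code assumes 8-bit chars (one octet per char)
    #endif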
    My best code is written with the delete key.
