Thread: Create integer type that is always 32 bit

  1. #1
    Registered User MartinR's Avatar
    Join Date
    Dec 2013
    Posts
    200

    Create integer type that is always 32 bit

    Hello,


    I'm trying to make an integer type that will always be 32 bits wide, independent of the underlying hardware.

    Because char is always 1 byte, we could arrange four chars in some way to form an integer variable. The question is how?

    My first attempt:

    Code:
    typedef struct type32 {int a:32;} myInt_t;
    but this requires initialization with curly braces:
    Code:
    myInt_t a = {11};
    A typedef wrapping a struct with a char array has the same problem - initialization:

    Code:
    typedef struct {char a[4];} myInt_t;
    How to solve this problem?

  2. #2
    Registered User
    Join Date
    Dec 2017
    Posts
    1,632
    Technically it's not the case that char is always 1 byte (i.e., 8 bits).
    The number of bits of a char is given by the CHAR_BIT macro in <limits.h>.

    The proper way to get a 32-bit value is to use the modern <stdint.h> (or <inttypes.h> for printf and scanf formats, too).
    Code:
    #include <stdio.h>
    #include <inttypes.h>   // also includes <stdint.h>
    
    int main() {
        int32_t n = 12345;
        printf("%" PRId32 "\n", n);
        return 0;
    }
    A little inaccuracy saves tons of explanation. - H.H. Munro

  3. #3
    Registered User MartinR's Avatar
    Join Date
    Dec 2013
    Posts
    200
    Quote Originally Posted by john.c View Post
    Technically it's not the case that char is always 1 byte (i.e., 8 bits)
    Why not? Isn't it guaranteed that char must be 1 byte? Do you have any evidence to the contrary?

    The proper way to get a 32-bit value is to use the modern <stdint.h> (or <inttypes.h> for printf and scanf formats, too).
    You misunderstood my goal, or I wasn't clear enough. I need to guarantee a 32-bit int type by creating my own type for it - not by using some library etc. So my only tools are plain C keywords.

  4. #4
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,659
    A char is always 1 byte, but a byte only has to be a minimum of 8 bits.

    Quote Originally Posted by c99 draft
    Annex E
    (informative)
    Implementation limits
    1 The contents of the header <limits.h> are given below, in alphabetical order. The
    minimum magnitudes shown shall be replaced by implementation-defined magnitudes
    with the same sign. The values shall all be constant expressions suitable for use in #if
    preprocessing directives. The components are described further in 5.2.4.2.1.
    #define CHAR_BIT 8
    The best you're going to manage is to use a bit-field and live with the braces in the initialiser.

    Or you have some conditional code and some static assertions to make sure your type is exactly what you want.

    Along the lines of
    Code:
    typedef int my32t;
    STATIC_ASSERT((sizeof(my32t)*CHAR_BIT)==32,"32 bit int size failed");
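    For what it's worth, C11 provides this directly as _Static_assert, so no home-grown STATIC_ASSERT macro is needed. A minimal sketch (the my32t name is just the placeholder from above; the build fails with the message if the assumption is wrong):

    ```c
    #include <limits.h>
    #include <stdio.h>

    typedef int my32t;  /* the assumption being checked */

    /* compilation fails, with the given message, if int is not 32 bits here */
    _Static_assert(sizeof(my32t) * CHAR_BIT == 32, "32 bit int size failed");

    int main(void) {
        printf("ok\n");
        return 0;
    }
    ```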
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  5. #5
    Registered User
    Join Date
    Dec 2017
    Posts
    1,632
    Quote Originally Posted by MartinR View Post
    Why not? Isn't it guaranteed that char must be 1 byte? Do you have any evidence to the contrary?
    Where's YOUR evidence, you lying piece of crap!

    All that's guaranteed is that sizeof(char) is 1, but all that means is that the sizes of objects are calculated in multiples of the size of a char.

    And in what sense did I "misunderstand" your question, you brainless idiot! You never mentioned anything about not using standard libraries.
    Last edited by john.c; 06-16-2018 at 12:58 PM.
    A little inaccuracy saves tons of explanation. - H.H. Munro

  6. #6
    Registered User MartinR's Avatar
    Join Date
    Dec 2013
    Posts
    200
    Salem, thanks for the answer.
    john.c, where does all that aggression come from? ;/

  7. #7
    misoturbutc Hodor's Avatar
    Join Date
    Nov 2013
    Posts
    1,787
    I think the only misunderstanding is the idea that a byte has to be 8 bits. People like me who grew up in the 8-bit era got it hammered into our brains that 1 byte is 8 bits, but that's not always the case. I've used chips that have 12-bit and 14-bit bytes, but one byte is still one byte. So sizeof(char) on those chips is 1 (byte), as guaranteed by the standard, but that doesn't mean a char has only 8 bits (it obviously doesn't).

    But, MartinR, why can't you use stdint.h? It's plain old standard C since 1999 and not a "library".
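    If code really does depend on 8-bit bytes, a common defensive sketch (my own illustration, not from the thread) is to reject other platforms at compile time:

    ```c
    #include <limits.h>
    #include <stdio.h>

    /* refuse to build at all on platforms with unusual byte sizes */
    #if CHAR_BIT != 8
    #error "this code assumes 8-bit bytes"
    #endif

    int main(void) {
        printf("CHAR_BIT is %d\n", CHAR_BIT);
        return 0;
    }
    ```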

  8. #8
    C++ Witch laserlight's Avatar
    Join Date
    Oct 2003
    Location
    Singapore
    Posts
    28,413
    Quote Originally Posted by john.c
    Where's YOUR evidence, you lying piece of crap!
    That was uncalled for. I suggest that you restrain yourself from wildly insulting other members just because they requested that you provide evidence for something that they found contradicted what they thought they understood. After all, if you're right, you get to show that you're right in a way that's helpful.

    Quote Originally Posted by john.c
    All that's guaranteed is that sizeof(char) is 1, but all that means is that the sizes of objects are calculated in multiples of the size of a char.
    The evidence lies in the C standard:
    Quote Originally Posted by C11 Clause 6.5.3.4 Paragraph 2a
    The sizeof operator yields the size (in bytes) of its operand, which may be an expression or the parenthesized name of a type.
    So yes, because the sizes of objects are calculated in multiples of the size of a char, and because the sizeof operator yields that size in bytes (as per C's notion of what constitutes a "byte"), it follows that in C it is indeed "guaranteed that char must be 1 byte", so MartinR is proven correct. It is just that, as Hodor noted, outside of C there is a strong prevailing convention that a byte is 8 bits, whereas C also caters for systems where a "byte" as known to the system may be more than 8 bits, hence you rightly mentioned CHAR_BIT.
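    A quick demonstration of that point (my own example, not part of the standard text): sizeof counts in char-sized units, and CHAR_BIT says how many bits each of those units holds.

    ```c
    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* sizeof is measured in units of char, so this is always 1 */
        printf("sizeof(char) = %zu\n", sizeof(char));
        /* the number of bits in one of those units */
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        /* so an int occupies sizeof(int) * CHAR_BIT bits in total */
        printf("bits in an int = %zu\n", sizeof(int) * CHAR_BIT);
        return 0;
    }
    ```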
    Quote Originally Posted by Bjarne Stroustrup (2000-10-14)
    I get maybe two dozen requests for help with some sort of programming or design problem every day. Most have more sense than to send me hundreds of lines of code. If they do, I ask them to find the smallest example that exhibits the problem and send me that. Mostly, they then find the error themselves. "Finding the smallest program that demonstrates the error" is a powerful debugging tool.
    Look up a C++ Reference and learn How To Ask Questions The Smart Way

  9. #9
    Registered User MartinR's Avatar
    Join Date
    Dec 2013
    Posts
    200
    Hodor, yes I must admit - a byte consisting of more than 8 bits actually comes as a surprise to me.

    why can't you use stdint.h?
    Sure I can, but this thread is more about how to define such a type using standard C types - char, for example (assuming char is 8 bits).


    laserlight, thanks for the very comprehensive answer.

  10. #10
    C++ Witch laserlight's Avatar
    Join Date
    Oct 2003
    Location
    Singapore
    Posts
    28,413
    Quote Originally Posted by MartinR
    Sure I can, but this thread is more about how to define such a type using standard C types - char, for example (assuming char is 8 bits).
    Is this merely an exercise to satisfy your curiosity, or is this a requirement for an actual program to be released in production such that you cannot even use the C standard library and hence cannot use what john.c suggested in post #2?
    Quote Originally Posted by Bjarne Stroustrup (2000-10-14)
    I get maybe two dozen requests for help with some sort of programming or design problem every day. Most have more sense than to send me hundreds of lines of code. If they do, I ask them to find the smallest example that exhibits the problem and send me that. Mostly, they then find the error themselves. "Finding the smallest program that demonstrates the error" is a powerful debugging tool.
    Look up a C++ Reference and learn How To Ask Questions The Smart Way

  11. #11
    Registered User MartinR's Avatar
    Join Date
    Dec 2013
    Posts
    200
    Quote Originally Posted by laserlight View Post
    Is this merely an exercise to satisfy your curiosity, or is this a requirement for an actual program to be released in production such that you cannot even use the C standard library and hence cannot use what john.c suggested in post #2?
    Yes, the former - I am just curious how to define such a universal type. Do you have a better solution than mine, presented in the first post?

  12. #12
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,659
    Well, the whole point of the standard headers is that they take all the guess-work out of doing the right thing.

    Otherwise you need the Pre-defined Compiler Macros wiki and a very long list of conditional compilation flags, followed by the static assertion I posted earlier.
    Code:
    #if defined(WIN32) && defined(MSVC)
    typedef int my32t;
    #elif defined(DOS) && defined(MSVC)
    typedef long int my32t;
    #elif defined(LINUX)
    typedef int my32t;
    /* ... more platforms, ad nauseam ... */
    #else
      #error "Invalid Platform/compiler combo"
    #endif
    If what you're proposing was that easy on a standard compiler, then stdint wouldn't exist because there would be no need for it.

    An unadorned int in C has traditionally aligned itself with the natural register and/or bus width of the target architecture, to be as efficient as possible.

    Also, an unadorned int only needs to hold signed 16-bit values to be compliant with the standards. If you want a guaranteed 'at least 32-bits' integer, then choose long int.
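    Coming back to MartinR's four-chars idea: on a platform where CHAR_BIT is 8, one sketch (my own; the names my32_t, my32_store and my32_load are made up) carries the value in four unsigned chars and assembles it with shifts, which also pins down the byte order:

    ```c
    #include <stdio.h>

    typedef struct { unsigned char b[4]; } my32_t;  /* assumes 8-bit chars */

    /* store a value into the four bytes, little-endian by convention */
    static void my32_store(my32_t *v, unsigned long x) {
        for (int i = 0; i < 4; i++)
            v->b[i] = (unsigned char)((x >> (8 * i)) & 0xFF);
    }

    /* reassemble the 32-bit value with shifts */
    static unsigned long my32_load(const my32_t *v) {
        unsigned long x = 0;  /* unsigned long holds at least 32 bits */
        for (int i = 0; i < 4; i++)
            x |= (unsigned long)v->b[i] << (8 * i);
        return x;
    }

    int main(void) {
        my32_t v;
        my32_store(&v, 305419896UL);   /* 0x12345678 */
        printf("%lu\n", my32_load(&v));
        return 0;
    }
    ```

    Note the struct still can't be initialised or assigned like a plain int - the very limitation raised in post #1 - which is why int32_t from <stdint.h> remains the practical answer.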
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.


