Thread: Expected datatype range

  1. #1
    Registered User
    Join Date
    Aug 2012
    Posts
    77

    Expected datatype range

    I wanted to deduce the number of bytes reserved by the integer datatype.
    I got help from this forum before about this particular problem, but I'm still stuck on one part of it.

    Code:
    #include <stdio.h>
    #include <conio.h>
    #include <string.h>
    #include <math.h>
    int main(void)
    {
        int a[2],r;
        char t;
        clrscr();
        t= (char*)&a[1] - (char*)&a[0];
        r=pow(2,8*t);
        printf("%d  %d",t,r);
        getch();
        return 0;
    }
    It's expected to print:

    Code:
    2 65536
    But it's printing:

    Code:
    2 0
    Please help!

  2. #2
    Registered User
    Join Date
    Aug 2012
    Posts
    41
    Quote Originally Posted by progmateur View Post
    It's expected to print:

    Code:
    2 65536
    Can you explain this?
    If int is 32 bits, then the expected output is "4 -2147483648".

  3. #3
    Registered User
    Join Date
    Aug 2012
    Posts
    77
    The int type reserves 2 bytes!

  4. #4
    Registered User
    Join Date
    Aug 2012
    Posts
    41
    Ok, then int can have values from -32768 to 32767 (16-bit signed). Your result is not in this range, so it is printed as '0'.

  5. #5
    Registered User
    Join Date
    Aug 2012
    Posts
    77
    Quote Originally Posted by sana.iitkgp View Post
    Ok, then int can have values from -32768 to 32767 (16-bit signed). Your result is not in this range, so it is printed as '0'.
    Can you please help me with this?

  6. #6
    Salem
    and the hat of int overfl
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,659
    > r=pow(2,8*t);
    Write out the powers of two in binary
    1
    10
    100
    1000
    10000
    and so on

    If you've got an integer with n bits, then two to the power of n is a 1 followed by exactly n zeros. That value needs n + 1 bits, so it cannot fit back into an n-bit int, which is why you get 0.
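
    For example, here's a minimal sketch of the same measurement that keeps the result in a double instead of squeezing it back into an int (any conforming compiler provides CHAR_BIT in <limits.h>):
    Code:
    #include <stdio.h>
    #include <limits.h>   /* CHAR_BIT: bits in a char */
    #include <math.h>     /* pow */
    int main(void)
    {
        int a[2];
        int t = (int)((char *)&a[1] - (char *)&a[0]); /* bytes per int */
        double r = pow(2.0, (double)(CHAR_BIT * t));  /* too big for an int, fine in a double */
        printf("%d  %.0f\n", t, r);
        return 0;
    }
    With a 16-bit int this prints "2  65536".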
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  7. #7
    Registered User
    Join Date
    Aug 2012
    Posts
    41
    You said int is 2 bytes, i.e. 16 bits. So you can represent 65536 distinct values with int, both positive and negative. That range is -32768 to 32767. Any number which is not in that range cannot be represented with 'int'. You can try this:
    Code:
    int j = 65536;           /* does not fit in a 16-bit int */
    printf(" j = %d", j);
    This doesn't print 65536, because an 'int' cannot hold that value.
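
    A sketch of the usual way around that (the standard guarantees long can hold at least the range [-2147483647, 2147483647], so 65536 fits):
    Code:
    #include <stdio.h>
    int main(void)
    {
        long j = 65536L;          /* fits: long is at least 32 bits wide */
        printf("j = %ld\n", j);   /* prints 65536 even where int is 16 bits */
        return 0;
    }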

  8. #8
    Registered User
    Join Date
    May 2012
    Location
    Arizona, USA
    Posts
    948
    Quote Originally Posted by progmateur View Post
    int type reserves 2 byte!
    Correction: int is 2 bytes wide on your compiler/system (which appears to be an ancient DOS compiler).

    Anyway, the expected range of an int is INT_MIN to INT_MAX, which are at most -2^(n-1) and 2^(n-1)-1, where n is the width in bits. In your seriously outdated compiler, int is 16 bits wide, so INT_MAX can be no larger than 2^(16-1)-1 = 32767. Clearly you cannot fit the value 65536 in an int whose maximum value is less than half that number!

    The solution is to store the value (which is not the range but is actually the number of distinct values an int can have) in a larger data type.
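
    For instance, a sketch assuming unsigned long is wider than int (true with a 16-bit int and a 32-bit long, as on old DOS compilers; otherwise the shift below would overflow too):
    Code:
    #include <stdio.h>
    #include <limits.h>
    int main(void)
    {
        /* Number of distinct values an int can represent, kept in a wider
           unsigned type. Assumes unsigned long is wider than int. */
        unsigned long count = 1UL << (CHAR_BIT * sizeof(int));
        printf("an int can hold %lu distinct values\n", count);
        return 0;
    }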

  9. #9
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    The size of int is implementation defined. It might be 2 with your compiler, but that is not guaranteed. Similarly, the number of bits in a char (which you have assumed to be 8) is also implementation defined.

    The C standard specifies the sizeof operator, which computes the size of any type (relative to sizeof(char), which is defined to be 1).

    Even if your ducks align (sizeof int is 2, number of bits in a char is 8) then the int type is signed. That - typically - means one bit of the representation determines the sign. It also means that the maximum value an int can support is not equal to 2 raised to the number of bits in its representation.

    Lastly, pow() works with floating point (double) values. There is rounding involved in converting the result of pow(2, 8*t) back into an int.

    Since you're playing with implementation defined values, you might want to look up the meaning of CHAR_BIT and INT_MAX. These are macros (or constants) that all implementations are required to define in <limits.h>, and they represent the number of bits in a char and the maximum value of an int, respectively.
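
    A small sketch of what that looks like in practice (the numbers printed depend on your implementation, which is exactly the point):
    Code:
    #include <stdio.h>
    #include <limits.h>
    int main(void)
    {
        printf("sizeof(int) = %u\n", (unsigned)sizeof(int));
        printf("CHAR_BIT    = %d\n", CHAR_BIT);
        printf("INT_MIN     = %d\n", INT_MIN);
        printf("INT_MAX     = %d\n", INT_MAX);
        return 0;
    }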
    Right 98% of the time, and don't care about the other 3%.

    If I seem grumpy or unhelpful in reply to you, or tell you you need to demonstrate more effort before you can expect help, it is likely you deserve it. Suck it up, Buttercup, and read this, this, and this before posting again.

  10. #10
    Registered User
    Join Date
    Aug 2012
    Posts
    77
    I got more confused after the replies...
    But isn't the size of a datatype defined by the C compiler itself?

  11. #11
    Registered User
    Join Date
    Aug 2012
    Posts
    77
    For the C language, it stores 2 bytes, doesn't it?
    Quote Originally Posted by christop View Post
    Correction: int is 2 bytes wide on your compiler/system (which appears to be an ancient DOS compiler).

    Anyway, the expected range of an int is INT_MIN to INT_MAX, which are at most -2^(n-1) and 2^(n-1)-1, where n is the width in bits. In your seriously outdated compiler, int is 16 bits wide, so INT_MAX can be no larger than 2^(16-1)-1 = 32767. Clearly you cannot fit the value 65536 in an int whose maximum value is less than half that number!

    The solution is to store the value (which is not the range but is actually the number of distinct values an int can have) in a larger data type.

  12. #12
    Registered User
    Join Date
    Sep 2006
    Posts
    8,868
    It varies, just as Grumpy has stated. On some systems (like this PC, for instance), ints are 4 bytes, not 2.

    The thing to keep in mind is that the C standard is used by a lot of systems -- everything from huge supercomputers, to PCs, to tiny embedded micro-controllers. So it covers a WIDE range.

  13. #13
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    Quote Originally Posted by progmateur View Post
    But isn't the size of a datatype defined by the C compiler itself?
    Quote Originally Posted by progmateur View Post
    For the C language, it stores 2 bytes, doesn't it?
    Ah, if life were that simple, there would only be one C compiler, it would be optimised for MS-DOS, and C programs would be woefully inefficient when run on modern operating systems.

    Firstly, you need to understand the relationship between the C standard, the C language, and a C compiler.

    The C standard is the authoritative definition of the C language. A C compiler interprets (or parses) source code as part of the process of translating C source code into an executable form. The C standard is the basis on which correctness of a C compiler is assessed. If a compiler does something that the standard forbids then the compiler is incorrect (or, formally, not compliant with the standard).

    This means that a C compiler does not define the C language; the standard does.

    Now, when specifying the int type, the C standard says nothing about the number of bits used to represent it. What it does is specify minimum ranges that an int must be able to represent. Specifically, the minimum value of an int must not be greater than -32767 and the maximum value must not be less than 32767. Nothing is said about using bits to represent an int, let alone about the number of bits that must be used to represent an int.

    The standard calls the ranges of int "implementation defined". A compiler is allowed to support a 16 bit int, a 32 bit int, or a 19 bit int. It is also allowed to represent an int using some means that doesn't involve bits at all (for example, a C compiler could, feasibly, be implemented on the Setun ternary computer that the Soviets built in 1958). Whatever the compiler does, the documentation of the compiler must specify what range of values is supported by the int type. A 32 bit int typically supports values in a range something like [-2147483648, 2147483647]. Such an int type is allowed, since -2147483648 is less than -32767 and 2147483647 is greater than 32767.

    The standard also requires that a compiler be accompanied by a header file called <limits.h>. Within that header file, INT_MIN represents the smallest value that an int can support, and INT_MAX the largest.

    There is nothing in all that which requires an int to be 2 bytes. In fact, an int that is 4 bytes (the typical 32-bit int, assuming a char is composed of 8 bits) is permitted by the standard - the requirement is stated in terms of the range of values an int can support, not in terms of bits and bytes.
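
    If you want your code to be portable, ask about the range rather than the byte count. A minimal sketch (the limits.h macros are required to be usable in #if directives):
    Code:
    #include <stdio.h>
    #include <limits.h>
    int main(void)
    {
        /* Portable code asks about the range, not about "2 bytes" or "4 bytes". */
    #if INT_MAX >= 65536
        printf("this int can hold 65536 directly\n");
    #else
        printf("this int cannot hold 65536; use long instead\n");
    #endif
        return 0;
    }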

  14. #14
    Registered User
    Join Date
    May 2012
    Location
    Arizona, USA
    Posts
    948
    Quote Originally Posted by grumpy View Post
    Now, when specifying the int type, the C standard says nothing about the number of bits used to represent it. What it does is specify minimum ranges that an int must be able to represent. Specifically, the minimum value of an int must not be greater than -32767 and the maximum value must not be less than 32767. Nothing is said about using bits to represent an int, let alone about the number of bits that must be used to represent an int.

    The standard calls the ranges of int "implementation defined". A compiler is allowed to support a 16 bit int, a 32 bit int, or a 19 bit int. It is also allowed to represent an int using some means that doesn't involve bits at all (for example, a C compiler could, feasibly, be implemented on the Setun ternary computer that the Soviets built in 1958).
    Just to nitpick here, the C99 standard (as well as C11 and probably C89 too) specifies that objects are stored in binary formats:
    Quote Originally Posted by 6.2.6 Representation of types
    #3
    Values stored in objects of type unsigned char shall be represented using a pure binary notation.
    #4
    Values stored in objects of any other object type consist of n × CHAR_BIT bits, where n is the size of an object of that type, in bytes. The value may be copied into an object of type unsigned char [n] (e.g., by memcpy); the resulting set of bytes is called the object representation of the value. Two values (other than NaNs) with the same object representation compare equal, but values that compare equal may have different object representations.
    You're correct about everything else regarding the size and range of int. I think a lot of people (myself included) like to "simplify" the standards in our minds, for example, by saying that an int is at least 16 bits wide. The standards don't spell it out in those words, but it is true. No smaller type can hold the required range. Similarly for char (8 bits), short (16 bits), and long (32 bits).

    Let me also add that in some systems, sizeof(int) is 1. Since sizeof(char) is also 1 by definition, this means that char and int are both at least 16 bits wide. short must also be the same size due to size relationship constraints (sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)).
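
    To make the "object representation" idea concrete, here is a small sketch that copies an int into an array of unsigned char (the byte values and their order are implementation defined):
    Code:
    #include <stdio.h>
    #include <string.h>
    int main(void)
    {
        int value = 258;                  /* 0x0102 */
        unsigned char bytes[sizeof(int)];
        size_t i;
        /* Copy the object representation of 'value' into the byte array. */
        memcpy(bytes, &value, sizeof value);
        for (i = 0; i < sizeof value; i++)
            printf("byte %u: 0x%02X\n", (unsigned)i, (unsigned)bytes[i]);
        return 0;
    }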

  15. #15
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    Quote Originally Posted by christop View Post
    Just to nitpick here, the C99 standard (as well as C11 and probably C89 too) specify that objects are stored in binary formats:
    That is not actually the meaning of the text you quoted.

    It is true with respect to an abstract machine that the C standard specifies. That abstract machine means that C code can act as if any data is made up of bits. It does not require that the actual implementation of the abstract machine make use of bits. It does mean that C code (unless it breaks out and does something in the realm of undefined behaviour) cannot detect the difference.

    That is why, for example, multiplication by two can be specified as equivalent in effect to a left shift operation in C. It does not mean that multiplication by two and a left shift operation are equivalent as far as the underlying hardware is concerned. In practice, it often is .... but it is not required to be. The C compiler handles the bookkeeping to preserve that illusion for the C code.
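
    For example, a sketch with an unsigned value (shifting signed values is where the darker corners are):
    Code:
    #include <stdio.h>
    int main(void)
    {
        unsigned int x = 21;
        /* Equivalent in effect as far as the C code can tell, whatever
           instructions the compiler actually emits for each one. */
        printf("x * 2  = %u\n", x * 2);
        printf("x << 1 = %u\n", x << 1);
        return 0;
    }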

    Quote Originally Posted by christop View Post
    I think a lot of people (myself included) like to "simplify" the standards in our minds, for example, by saying that an int is at least 16 bits wide.
    Sure. It is a common teaching technique to give a simple explanation for many things. That does not exclude the possibility of a deeper underlying truth. Many people will not step into circumstances where those simple truths break down. But that does not mean the darker corners (of C in this case) do not exist.

    Quote Originally Posted by christop View Post
    Let me also add that in some systems, sizeof(int) is 1.
    That is true. On those systems, a char type is able to hold a wider range of values than the standard requires. The standard requires that a char can support at least the range [-127,127]. An int that supports the range [-32767,32767] is able to represent any value in the range [-127,127], plus a lot of other values, so it can be used as a char type (by the compiler). If you write code that relies on a char being able to store a value of 4000, then you cannot port your code without modification to a compiler that only supports a char in the range [-127,127].

    Quote Originally Posted by christop View Post
    Since sizeof(char) is also 1 by definition, this means that char and int are both at least 16 bits wide.
    You need to qualify that statement .... it is only true on those particular machines you mention. It is not true on all machines.

    Quote Originally Posted by christop View Post
    short must also be the same size due to size relationship constraints (sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)).
    Just bear in mind that "size relationship" is not required. That size relationship is often (possibly even usually) a consequence of the requirement that a short variable can represent all values that a char can, an int variable can represent all values that a short can, and so on.
