# Expected datatype range


1. ## Expected datatype range

I wanted to deduce the number of bytes reserved by the integer datatype.
I got help from this forum before about this particular problem, but I'm still stuck on one part of it.

Code:
```
#include <stdio.h>
#include <conio.h>
#include <string.h>
#include <math.h>

int main(void)
{
    int a[2], r;
    char t;

    clrscr();
    t = (char*)&a[1] - (char*)&a[0];
    r = pow(2, 8 * t);
    printf("%d  %d", t, r);
    getch();
    return 0;
}
```
It's expected to print:

Code:
`2 65536`
But it's printing:

Code:
`2 0`

2. Originally Posted by progmateur
It's expected to print:

Code:
`2 65536`
Can you explain this?
If int is 32 bits, then the expected output is "4 -2147483648".

The int type reserves 2 bytes!

Ok, then int can have values from -32768 to 32767 (16 bits, signed). Your result is not in this range, so it shows up as '0'.

5. Originally Posted by sana.iitkgp
Ok, then int can have values from -32768 to 32767 (16 bits, signed). Your result is not in this range, so it shows up as '0'.
Can you please help me with this!?

6. > r=pow(2,8*t);
Write out the powers of two in binary
1
10
100
1000
10000
and so on

If you've got an integer with n bits, then two to the power of n is a 1 followed by exactly n zeros — one bit too many to fit in n bits.

7. You said int is 2 Bytes i.e 16 bits. So you can represent 65536 values with int both positive and negative. That range is -32768 to 32767. If any number which is not in that range you cannot represent with 'int'. You can try like
Code:
```
int j = 65536;
printf("j = %d", j);
```
This doesn't print 65536, because an 'int' cannot hold that value.

8. Originally Posted by progmateur
int type reserves 2 byte!
Correction: int is 2 bytes wide on your compiler/system (which appears to be an ancient DOS compiler).

Anyway, the expected range of an int is INT_MIN to INT_MAX, which are at most -2^(n-1) and 2^(n-1)-1, where n is the width in bits. In your seriously outdated compiler, int is 16 bits wide, so INT_MAX can be no larger than 2^(16-1)-1 = 32767. Clearly you cannot fit the value 65536 in an int whose maximum value is less than half that number!

The solution is to store the value (which is not the range but is actually the number of distinct values an int can have) in a larger data type.

9. The size of int is implementation defined. It might be 2 with your compiler, but that is not guaranteed. Similarly, the number of bits in a char (which you have assumed to be 8) is also implementation defined.

The C standard specifies the sizeof operator, which computes the size of any type (relative to sizeof(char), which is defined to be 1).

Even if your ducks align (sizeof(int) is 2, and the number of bits in a char is 8), the int type is signed. That typically means one bit of the representation determines the sign. It also means that the maximum value an int can support is not equal to 2 raised to the number of bits in its representation.

Lastly, pow() works with floating point (double) values. There is rounding involved in converting pow(2, 8*t) into an int.

Since you're playing with implementation-defined values, you might want to look up the meaning of CHAR_BIT and INT_MAX. These are macros (or constants) that all implementations are required to define in <limits.h>, and they represent the number of bits in a char and the maximum value of an int, respectively.

I got more confused after the replies...
But isn't the size of a datatype defined by the C compiler itself?

For the C language, it stores 2 bytes, isn't it??!
Originally Posted by christop
Correction: int is 2 bytes wide on your compiler/system (which appears to be an ancient DOS compiler).

Anyway, the expected range of an int is INT_MIN to INT_MAX, which are at most -2^(n-1) and 2^(n-1)-1, where n is the width in bits. In your seriously outdated compiler, int is 16 bits wide, so INT_MAX can be no larger than 2^(16-1)-1 = 32767. Clearly you cannot fit the value 65536 in an int whose maximum value is less than half that number!

The solution is to store the value (which is not the range but is actually the number of distinct values an int can have) in a larger data type.

It varies, just as Grumpy has stated. On some systems (like this PC, for instance), ints are 4 bytes, not 2.

The thing to keep in mind is that the C standard is used by a lot of systems -- everything from huge supercomputers, to PCs, to tiny embedded microcontrollers. So it covers a WIDE range.

13. Originally Posted by progmateur
But isn't the size of a datatype defined by the C compiler itself?
Originally Posted by progmateur
For the C language, it stores 2 bytes, isn't it??!
Ah, if life were that simple, there would be only one C compiler, it would be optimised for MS-DOS, and C programs would be woefully inefficient when run on modern operating systems.

Firstly, you need to understand the relationship between the C standard, the C language, and a C compiler.

The C standard is the authoritative definition of the C language. A C compiler interprets (or parses) source code as part of the process of translating C source code into an executable form. The C standard is the basis on which correctness of a C compiler is assessed. If a compiler does something that the standard forbids then the compiler is incorrect (or, formally, not compliant with the standard).

This means that a C compiler does not define anything related to C.

Now, when specifying the int type, the C standard says nothing about the number of bits used to represent it. What it does is specify minimum ranges that an int must be able to represent. Specifically, the minimum value of an int must not be greater than -32767, and the maximum value must not be less than 32767. Nothing is said about using bits to represent an int, let alone about the number of bits that must be used to represent one.

The standard calls the ranges of int "implementation defined". A compiler is allowed to support a 16-bit int, a 32-bit int, or a 19-bit int. It is also allowed to represent an int using some means that doesn't involve bits at all (for example, a C compiler could feasibly be implemented on the Setun ternary computer that the Soviets built in the late 1950s). Whatever the compiler does, its documentation must specify what range of values is supported by the int type. A 32-bit int typically supports values in a range like [-2147483648, 2147483647]. A compiler that supports such an int type is allowed, since -2147483648 is less than -32767 and 2147483647 is greater than 32767.

The standard also requires that a compiler be accompanied by a header file called <limits.h>. Within that header file, INT_MIN represents the smallest value that an int can support, and INT_MAX the largest.

There is nothing in all that which requires an int to be 2 bytes. In fact, an int that is 4 bytes (the typical 32-bit int, assuming a char is composed of 8 bits) is permitted by the standard - the requirement is stated in terms of the range of values an int can support, not in terms of bits and bytes.

14. Originally Posted by grumpy
Now, when specifying the int type, the C standard says nothing about the number of bits used to represent it. What it does is specify minimum ranges that an int must be able to represent. Specifically, the minimum value of an int must not be greater than -32767, and the maximum value must not be less than 32767. Nothing is said about using bits to represent an int, let alone about the number of bits that must be used to represent one.

The standard calls the ranges of int "implementation defined". A compiler is allowed to support a 16-bit int, a 32-bit int, or a 19-bit int. It is also allowed to represent an int using some means that doesn't involve bits at all (for example, a C compiler could feasibly be implemented on the Setun ternary computer that the Soviets built in the late 1950s).
Just to nitpick here, the C99 standard (as well as C11, and probably C89 too) specifies that objects are stored in binary formats:
Originally Posted by 6.2.6 Representation of types
#3
Values stored in objects of type unsigned char shall be represented using a pure binary notation.
#4
Values stored in objects of any other object type consist of n × CHAR_BIT bits, where n is the size of an object of that type, in bytes. The value may be copied into an object of type unsigned char [n] (e.g., by memcpy); the resulting set of bytes is called the object representation of the value. Two values (other than NaNs) with the same object representation compare equal, but values that compare equal may have different object representations.
You're correct about everything else regarding the size and range of int. I think a lot of people (myself included) like to "simplify" the standards in our minds, for example, by saying that an int is at least 16 bits wide. The standards don't spell it out in those words, but it is true. No smaller type can hold the required range. Similarly for char (8 bits), short (16 bits), and long (32 bits).

Let me also add that on some systems, sizeof(int) is 1. Since sizeof(char) is also 1 by definition, this means that char and int are both at least 16 bits wide on such systems. short must also be the same size due to size relationship constraints (sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)).

15. Originally Posted by christop
Just to nitpick here, the C99 standard (as well as C11 and probably C89 too) specify that objects are stored in binary formats:
That is not actually the meaning of the text you quoted.

It is true with respect to an abstract machine that the C standard specifies. That abstract machine means that C code can act as if any data is made up of bits. It does not require that the actual implementation of the abstract machine make use of bits. It does mean that C code (unless it breaks out and does something in the realm of undefined behaviour) cannot detect the difference.

That is why, for example, multiplication by two can be specified as equivalent in effect to a left shift operation in C. It does not mean that multiplication by two and a left shift are equivalent as far as the underlying hardware is concerned. In practice, they often are .... but they are not required to be. The C compiler handles the bookkeeping to preserve that illusion for the C code.

Originally Posted by christop
I think a lot of people (myself included) like to "simplify" the standards in our minds, for example, by saying that an int is at least 16 bits wide.
Sure. It is a common teaching technique to give a simple explanation for many things. That does not exclude the possibility of a deeper underlying truth. Many people will not step into circumstances where those simple truths break down. But that does not mean the darker corners (of C in this case) do not exist.

Originally Posted by christop
Let me also add that in some systems, sizeof(int) is 1.
That is true. On those systems, the char type is able to hold a wider range of values than the standard requires. The standard requires that a char can support at least the range [-127, 127]. An int that supports the range [-32767, 32767] can represent any value in [-127, 127], plus a lot of other values, so it can be used as the char type (by the compiler). If you write code that relies on a char being able to store a value like 4000, then you cannot port that code without modification to a compiler that only supports a char in the range [-127, 127].

Originally Posted by christop
Since sizeof(char) is also 1 by definition, this means that char and int are both at least 16 bits wide.
You need to qualify that statement .... it is only true on those particular machines you mention. It is not true on all machines.

Originally Posted by christop
short must also be the same size due to size relationship constraints (sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)).
Just bear in mind that the "size relationship" is not required. That size relationship is often (possibly even usually) a consequence of the fact that a short variable can represent all values that a char can, an int variable can represent all values that a short can, and so on.
