Hello there. I was reading this book (C Primer Plus) and came across this:
Code:
The ranking of types, from highest to lowest, is long double, double, float, unsigned long long, long long, unsigned long, long, unsigned int, and int. One possible exception is when long and int are the same size, in which case unsigned int outranks long. The short and char types don't appear in this list because they would have been already promoted to int or perhaps unsigned int.
I also remember that in one of the threads, user "cas" mentioned this:
Code:
255 / -1 as an unsigned char:
1111 1111
Converted to int:
0000 0000 0000 0000 0000 0000 1111 1111
The above is only -1 if the least significant 8 bits are used. As an int (signed or unsigned) it's clearly 255.
4294967295 / -1 as an unsigned int:
1111 1111 1111 1111 1111 1111 1111 1111
Converted to int:
1111 1111 1111 1111 1111 1111 1111 1111
This is ambiguous as an int: unsigned, it's 4294967295. Signed, it's -1.
Therefore, given this code:
Code:
signed char x = -5;
int y = 5;
printf("%d", x + y);
x should be converted to an int.
5 in binary, as a char, is 0000 0101
=> x = -5 is therefore 1111 1011 (2's complement)
After converting to an int, x will be represented by
0000 0000 0000 0000 0000 0000 1111 1011
+ 0000 0000 0000 0000 0000 0000 0000 0101 (adding 5)
0000 0000 0000 0000 0000 0001 0000 0000
which is 256.
But how come it prints 0 (which is the answer I should logically get), and not 256?