I was going through the following statement in a C book:
"interpreting -1 as unsigned int leads to a big positive number"
But -1 is already unsigned … so what does the above statement mean?
Unsigned integral types can only represent values between 0 and their maximum (which is always positive).
-1 is NOT unsigned. It is a value of type int (signed). When that value is converted from int to an unsigned integer type, modulo arithmetic is used (the value is reduced modulo one more than the unsigned type's maximum), so converting -1 yields the maximum value that unsigned integer type can represent. That is the way the standard specifies unsigned integral types work (because, on most real hardware, that is how machine instructions work with unsigned values).
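A minimal demo of that conversion (the concrete number printed assumes a 32-bit unsigned int, where UINT_MAX is 4294967295):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        int i = -1;
        unsigned int u = i;       /* converted modulo UINT_MAX + 1 */
        printf("%u\n", u);        /* 4294967295 with a 32-bit unsigned int */
        printf("%u\n", UINT_MAX); /* same value */
        return 0;
    }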
On most (if not all) systems, signed integer types are stored using two's complement representation. At least with gcc (my compiler), when you copy a signed integer to an unsigned integer of the same size, the bit pattern is copied unchanged.
For example, if a signed short value, -3, is copied to an unsigned short, the unsigned short's value is 65533.
The bit patterns (little endian on my system) of the signed and unsigned values after the assignment are the same: 11111101 11111111.
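Something like this shows the bytes directly (a small sketch; it assumes a 16-bit short on a little-endian machine, as on my system):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        short s = -3;
        unsigned short u = s;        /* u becomes 65533 */
        unsigned char bytes[sizeof u];
        memcpy(bytes, &u, sizeof u); /* inspect the object representation */
        printf("u = %u, bytes:", (unsigned)u);
        for (size_t n = 0; n < sizeof u; n++)
            printf(" %02x", (unsigned)bytes[n]); /* fd ff on little-endian */
        printf("\n");
        return 0;
    }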
Bottom line: don't mix negative signed values with unsigned values and variables.
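Here is the kind of surprise that mixing them causes; in the comparison below, the int is converted to unsigned int, becoming UINT_MAX:

    #include <stdio.h>

    int main(void)
    {
        int i = -1;
        unsigned int u = 1;
        /* i converts to unsigned int for the comparison,
           so the test is UINT_MAX < 1, which is false. */
        if (i < u)
            printf("-1 < 1u\n");
        else
            printf("-1 >= 1u  (surprise!)\n");
        return 0;
    }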
There are some older systems that used ones' complement.
Yes, two's complement is the most common at present. However, the C standard deliberately avoids requiring compilers to use it. If, for some reason, hardware evolution drives toward some other representation, it would be imprudent for a compiler to use something different from what the hardware provides.
Assuming an unsigned short with maximum value 65535, that is consistent with the modulo arithmetic I described, whether or not the representation is two's complement.
Yes, but there is no requirement that the bit patterns be the same. The requirement in the C standard is stated in terms of the relationship between values, not bit patterns.
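In other words, the guaranteed outcome can be checked purely with values (a small sketch; assumes int is wider than short, so USHRT_MAX + 1 doesn't overflow):

    #include <stdio.h>
    #include <limits.h>
    #include <assert.h>

    int main(void)
    {
        unsigned short u = -3; /* converted modulo USHRT_MAX + 1 */
        /* The standard pins down the resulting VALUE, not the bits: */
        assert(u == (unsigned short)(USHRT_MAX + 1 - 3));
        printf("%u\n", (unsigned)u); /* 65533 when USHRT_MAX is 65535 */
        return 0;
    }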
The more general version of that rule is to understand your code and how it behaves.
The "don't mix them" rule is unfortunately the rule for a significant majority of programmers - who can't (more or less understandable) or don't bother (a sign of laziness) to understand what is involved when converting a negative value to a unsigned representation.