Internal representation of signed/unsigned chars and ints
If I run the following program:
Code:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    unsigned char c = -1;
    unsigned i = -1;

    printf("c: %d\n", c);
    printf("i: %d\n", i);
    return 0;
}
I get as output:
c: 255
i: -1
I am assuming that on my machine (Pentium 4) a two's complement representation is used.
An int is 4 bytes long, so I presume the constant -1 is represented internally as a signed integer 11111111 11111111 11111111 11111111.
I also presume that when the unsigned char c is set equal to -1, the eight bits of c are set equal to the 8 lowest order bits of the above signed integer constant, i.e. 11111111.
This would be consistent with the value of 255 output for c.
So what is going on with i? I would have expected the same logic to apply and a value of 4294967295 to be output. Even if the above assumptions are incorrect (in particular, I am not certain whether -1 is stored as a signed int, or whether a signed int is converted to an unsigned char simply by discarding the three highest-order bytes), I can't see why printf should output a value of -1 for a variable that has been defined as unsigned.
Any suggestions would be most appreciated.
Incidentally, if I try the same thing in C++
Code:
#include <iostream>
using namespace std;

int main()
{
    unsigned i = -1;
    cout << "i: " << i << endl;
    return 0;
}
I get the output:
i: 4294967295
which is 0xFFFFFFFF, i.e. four bytes with all bits set to 1. That at least is consistent with a 4-byte unsigned int, but why C++ prints 4294967295 where printf prints -1 for the same assignment, I cannot imagine.