hi there:
I am just learning more about data types and thought I would write a short program to see the bit patterns of a signed variable, in this case a char.
I know that the first bit of a signed data type is called the sign bit, so I expected to see something like:
decimal 1 -> 00000001
decimal -1 -> 10000001
the leftmost bit being the sign bit.
However, with the code I wrote I got:
decimal value -> -1 :: bit pattern -> 11111111
decimal value -> 1 :: bit pattern -> 00000001
and it seems that -128 comes right after +127:
decimal value -> 127 :: bit pattern -> 01111111
decimal value -> -128 :: bit pattern -> 10000000
My question is: if this is the correct bit representation of a signed char, how are negative numbers converted into bit patterns? Of course, it could be something wrong in the code I wrote; my code is below:
#include <stdio.h>

int main(void)
{
char string[9];
int i, n;
unsigned char MASK;

/* loop with an int so incrementing past 127 does not overflow the char,
   and so the value 127 itself is included */
for (n = -128; n <= 127; n++)
{
char test = (char)n;
MASK = 1 << 7; /* start at the most significant bit */
for (i = 0; i < 8; i++)
{
sprintf(string + i, "%d", (test & MASK) ? 1 : 0);
MASK >>= 1;
}
string[8] = '\0'; /* terminate the string */
printf("decimal value -> %d :: bit pattern -> %s\n", test, string);
}
return 0;
}
many thanks
CHUN