# Thread: char type and bits

1. ## char type and bits

Hello everybody!
I have a question about chars in C. I know that a char is an integer represented with 8 bits (1 byte), so we can represent numbers from 0 up to 255.
Indeed, if we write a function printbits(char c) and call printbits(255), we get 11111111.
In the same way, 87 = 01010111 and 88 = 01011000. But if we then write this code:
Code:
```
char a = 87, b = 88;
char c = a + b;
if (c > 100) printf("OK");
```
OK never gets printed. The reason is that 87+88 evaluates to 10101111, which is automatically interpreted as -81, a negative number. This means the result is read in two's complement.
Actually, if we say that chars are integers, this makes sense. But what I cannot understand is why 255 is evaluated correctly then. Is it cast to something like an unsigned?
Thanks for enlightening me!

2. Because 255 is ALSO -1, which is 11111111 in an 8-bit integer. If you do:
Code:
```
char c = 255;
int x = c;
printf("%d\n", x);
```
then I expect you'll see that it comes out as -1.

--
Mats

3. That depends entirely on how you defined printbits. If printbits is defined as taking a char value, then 255 is converted to a char value (and therefore becomes -1, which has the bit pattern 11111111, which is what you wanted anyway).

4. >I know that chars are integer represented with 8 bits (1 byte).
A byte isn't required to be 8 bits, but yes, "char" and "byte" are synonymous in C.

>But what i cannot understand is why the 255 is evaluated correctly then..?
Probably because printbits is written in such a way that the signedness of the type is irrelevant. You always end up with the bit pattern, which doesn't change.

5. Change it to this and it will work:
Code:
```
char a = 87, b = 88;
unsigned char c = a + b;
if (c > 100) printf("OK");
```

6. Originally Posted by Dino
Change it to this and it will work:
Code:
```
char a = 87, b = 88;
unsigned char c = a + b;
if (c > 100) printf("OK");
```
I read that on some systems char is unsigned by default, while on others it's signed...
So it seems that on Linux (my system) it's signed by default...

7. Strictly speaking, a+b itself never overflows: the operands are promoted to int, so the addition yields 175. The problem is converting that int back into a signed char, which can't represent 175.

8. Originally Posted by smoking81
i read that in some systems char is unsigned by default, while in other it's signed..
so it seems that in linux (my system) it's signed by default...
It actually depends more on the compiler than on the OS. There is nothing easier or harder about dealing with signed versus unsigned char from an OS or processor point of view, given current technology.

9. Originally Posted by smoking81
i read that in some systems char is unsigned by default, while in other it's signed..
I've heard that too, but does anyone know which platforms are unsigned by default?
Wouldn't that break a lot of code, since most people don't bother explicitly writing signed char, they just write char...

10. Originally Posted by cpjust
I've heard that too, but does anyone know which platforms are unsigned by default?
Wouldn't that break a lot of code, since most people don't bother explicitly writing signed char, they just write char...
I second this question: it would be very useful to know! Bye

11. Originally Posted by cpjust
I've heard that too, but does anyone know which platforms are unsigned by default?
Wouldn't that break a lot of code, since most people don't bother explicitly writing signed char, they just write char...
Not really. Most of the time when you use char, you intend to put an ASCII character into it. printf functions will always correctly decode these as unsigned chars when printing out, for example. The trouble only comes when you use char for anything else (i.e. as byte storage). Just as you would with shorts and ints, you should then explicitly make it unsigned if you don't want it to hold negative numbers.

QuantumPete

12. Originally Posted by QuantumPete
Not really, most of the time when you use char, you intend on putting an ASCII character into it. printf functions will always correctly decode these as unsigned chars when printing out for example. The trouble only comes when you use char for anything else (i.e. as a byte storage). Just like you would do with shorts and ints, you should then explicitly set it to unsigned if you don't want it to hold negative numbers.

QuantumPete
But then you should also explicitly set it to signed if you DO want it to hold negative numbers, right?

13. Dino says declare what you want.

14. Originally Posted by QuantumPete
Not really, most of the time when you use char, you intend on putting an ASCII character into it. printf functions will always correctly decode these as unsigned chars when printing out for example.
Actually, printf receives these as ints (char arguments are promoted when passed to a variadic function). But only the low 7 or 8 bits are relevant to printing a character symbol, and those are the same even if the argument is sign-extended.

15. Originally Posted by King Mir
Actually, printf decodes these as ints. But only the last 7 or 8 bits are relevant to printing a character symbol, and these are the same even if the argument is sign extended.
I should have said "treat these" instead of "decodes these". Sorry about the confusion!

QuantumPete