If I have char c = '9'
and I say int num = c - '0';
I can't understand how it goes from char to integer type.
??
How do you mean? A char is a small integer. The value of small integers can be extended to large integers by filling the right-most part of the number with the value of the sign-bit.
As to what the posted code does: the digit characters '0'-'9' are contiguous (the C standard actually guarantees this for every character set), so the integer value of a digit is its character code minus the character code of '0'.
--
Mats
Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.
So can I say
char c = 36;
??
And will that 36 correspond to some ASCII character?
Can you elaborate on these words?
"filling the right-most part of the number with the value of the sign-bit."
Last edited by transgalactic2; 03-27-2009 at 06:11 AM.
Yes, it would be '$' if it's ASCII. I have no idea what it would be if it was EBCDIC or some even less common encoding of characters.
Actually, I meant the left-most bits (not having a good day today).
So let's assume that "int" is a 16-bit number (because I'm lazy and don't want to write down 16 more 1's or 0's) and that char is 8 bits. We'll also assume that char is a signed type, which is common but not ALWAYS the case.
We have the value 65 = 0x41 = 'A' = 01000001:
Convert this to integer by filling with the sign-bit (0 in this case) would make it:
0000000001000001 = 65 decimal.
If, on the other hand, we have the bit pattern for 240 (which as a signed 8-bit char is -16) = 0xF0 = 11110000, extending it would result in this:
1111111111110000 = -16 decimal.
I hope that makes some sense.