I can't seem to find any tutorial on the web that explains how to convert a decimal number to a 16-bit unsigned binary integer, and vice versa, done by hand.
Do you know what "binary" means -- as in, powers of two? If so, then you're done.
http://en.wikipedia.org/wiki/Binary_numeral_system
It may not explain precisely how to do this for a 16-bit number, but the principle that you apply is the same regardless of number of bits.
--
Mats
Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.
I think I understand the question, so let me re-ask it in a better-informed way:
Well blurx, that is not a hard thing to do at all, actually. Here is some sample code, since you formulated your question in such a cordial way.
Example:
Code:
void int16tobstr(unsigned short x, char s[17])
{
    int bit;

    /* Walk from bit 15 down to bit 0 so the string reads
       most significant bit first, as binary is usually written. */
    for (bit = 15; bit >= 0; --bit)
        *s++ = '0' + !!(x & (1u << bit));
    *s = '\0';
}
I realized that it was just more powers of 2; I had gotten confused by 16 digits compared to 8.
16 digits means you were dealing with a 16-bit number, and 32 digits means a 32-bit number. So when someone says a 64-bit number, that means it has 64 binary digits.
A bit is just a BInary digiT, anyhow. So now some information you had in your brain has come full circle.