# Thread: Decimal to 16 bit unsigned?

1. ## Decimal to 16 bit unsigned?

I can't seem to find any tutorial on the web that explains how to convert a decimal number to a 16-bit unsigned binary integer, and vice versa, by hand.

2. Do you know what "binary" means -- as in, powers of two? If so, then you're done.

3. http://en.wikipedia.org/wiki/Binary_numeral_system

It may not explain precisely how to do this for a 16-bit number, but the principle that you apply is the same regardless of number of bits.

--
Mats

4. I think I understand the question, so let me re-ask it in a more informed way:

Originally Posted by blurx
Greetings C Programming.com patrons,

I am wondering if someone knows how to convert a 16-bit scalar into a binary string by hand. Any links or input would be greatly appreciated.

-blurx
Well blurx, that is not a hard thing to do at all, actually. Here is some sample code, since you formulated your question in such a cordial way:

Example:
Code:
```void int16tobstr(unsigned short x, char s[17])
{
    int bit;

    /* Walk from bit 15 down to bit 0 so the most significant
       digit comes first, as binary is normally written. */
    for (bit = 15; bit >= 0; --bit)
        *s++ = '0' + !!(x & (1u << bit));
    *s = '\0';
}```

5. I realized that it's just more powers of 2; I had gotten confused by 16 digits compared to 8.

6. 16 digits means you were dealing with a 16-bit number, and 32 digits means a 32-bit number. So when someone says "a 64-bit number", that means its binary representation has 64 digits.

Binary digIT is what a bit is anyhow. So now some information in your brain has come full circle.