# Dec to Binary code

• 01-11-2009
4dice
Dec to Binary code
Hello

This is my first post, and I have searched the forums to try to find a suitable answer but didn't find anything that would help.

I have written some code that is meant to convert decimal to binary, 16 bits' worth.

Unfortunately I cannot get it working; the code does compile but doesn't seem to be outputting the correct data.

Code:

```
void dec_bin(unsigned int bigdata)      //function passed decimal number less than 65k
{
        while(num>0)
        {
                rem[j]=num%2;           //modulus of 2 to see if there is a remainder
                num=num/2;              //divide number by 2 each time
                j++;                    //increment number of positions
                length++;               //count length of data
        }

        for(j=length-1; j>=0; j--)
        {
                mask=0b1000000000000000;
                if((&rem[j])!=0) {dec=dec^mask;}    //XOR each time with mask if modulus = 1
                mask=mask>>1;                       //right shift mask each time
                seperate_data(bigdata);             //send random 16bit data to be split into 2 bytes, this function has been tested and works OK
        }
}
```
• 01-11-2009
tabstop
Since it outputs nothing that I can see, I guess the output would be "wrong" in that sense. None of your variables appear to exist anywhere that I can see either.

So take a deep breath, explain what you want to get, and why this isn't it.
• 01-11-2009
4dice
More details
Sorry, the first post was lacking a bit of detail. I don't want to post the entire code, as it is for a project that is roughly 2000 lines.

That function is passed a number in decimal less than 65000 and, as you can see, sends the data to another function that separates the word into 2 bytes; it is then sent one byte at a time to a function that sends it through serial.

The separating function as well as the serial-sending function are tested and work fine.

The following are defined at the start of the code:

Code:

```
int dec;
unsigned int num;
#define numberofbits 16
static bank3 int rem[numberofbits];
static bank2 unsigned int j;
static bank2 unsigned int length;
```

Sorry if this is unclear, but I am trying my best to state everything as clearly as possible.

If there is anything else I can state to help, let me know.

Thank you
• 01-11-2009
tabstop
So the thing you haven't quite gotten to is (1) why you want binary in the first place (it's neither necessary nor relevant if all you want to do is split a 16-bit integer into two 8-bit integers) and (2) why you think rem does not, in fact, contain the binary representation of your number (because it does, although backwards).
• 01-12-2009
4dice
Dec to Binary
Hello

Great response time.

I am placing the decimal numbers into binary so I can send them in bytes through a serial connection, to be read by the corresponding program.

Thank you for pointing out that it did in fact store the value, just reversed; when outputting it I didn't think I was getting anything useful.

I appreciate your help and in future will try to give all information in enough detail.

Thank you
• 01-12-2009
tabstop
Quote:

Originally Posted by 4dice
Hello

Great response time.

I am placing the decimal numbers into binary so I can send them in bytes through a serial connection, to be read by the corresponding program.

Well, okay, but what do you think the number is stored as internally? Roman numerals? It's already binary when you start.
• 01-12-2009
4dice
Dec to Binary
G'day

I did figure it was stored as binary somewhere, as that's all computers really store things as, but I don't know how to get the binary value out.

In the past all I have done is send 8-bit binary through RS232 and RS485.

Could you inform me of how I would get the binary value out another (I'm assuming easier) way?

I have a range of 0-20000 and would like it to be accurate to at least 0.5; can you suggest a better way of transmitting the data?

Thank you
• 01-12-2009
tabstop
What do you mean by accurate to at least 0.5? I thought you had an integer? Anyway, if you have a two-byte integer and a machine where short is two bytes, then this
Code:

```
void split(unsigned short number, unsigned char *high_byte, unsigned char *low_byte)
{
    *high_byte = number / 256;
    *low_byte = number % 256;
}
```
splits the number into the high byte and the low byte (passed by pointer so that you can change their values inside the function). And then you can send those two bytes to wherever you want to send them.

Edit: Actually, rereading, I guess you have the splitting part done. As you said, you've got the context and I don't, so I'm really just guessing. If for some reason you need the array of 16 1's and 0's (thus turning your two-byte integer into 64 bytes of data) then I guess you've got it, although turning two bytes into 64 bytes doesn't seem like it's an improvement.