Built in Conversion from Integer to Binary String?

This is a discussion on Built in Conversion from Integer to Binary String? within the C Programming forums, part of the General Programming Boards category; Hello all, Newish to C and just wanted to ask a variation on a common question on this board and ...

1. Built in Conversion from Integer to Binary String?

Hello all,

Newish to C and just wanted to ask a variation on a common question on this board and Google. I have a variable that I'm reading that runs anywhere between 0-128, with each bit of that binary representation meaning a different thing (1, 2, 4, ... 64, 128 each mean a different thing when returned). The problem is that now we're allowing multiple values in there, which breaks us out of those 8 responses.

So I'm looking to basically convert the value to a binary number and just send each bit to a different variable, to be able to capture multiple bits being on.

I've not done this in C, but converting to binary has always been common in the languages I've used for this same task. Is there a built-in function within C that could do this conversion also?

FYI - I searched the boards and Google quite a bit but was only able to find academic exercises on how to do this from scratch. This isn't for a class or anything academic. Honestly, for my purposes, I'm not a programmer and it's not really important "how" it works at all. Just to patch some old code that was written long ago lol.

2. It sounds like you want to test the bits of that integer rather than convert it into a binary string representation. If so, this would commonly be done using the bitwise and operator.

3. Not likely.

Here's a fast way to count the number of bits in an integer. It uses a 4-bit-wide lookup table and iterates through each nibble, adding the number of bits set in it to the total number of bits set.
The OP is looking for a way to test individual bits. The link you provided shows how to count how many bits are set, but not determine specifically which bits are set.

I have to do this on embedded devices from time to time, and my preferred method is simple, well-named macros using bitwise operators.

4. A common technique is to use a set of defines, which help make the code self-documenting.

e.g.

Code:
```#define THIS 1
#define THAT 2
#define OTHER 128```
Then a series of if statements can check whether the bits are set:

Code:
```if (number & THIS)
{
    // code for this bit being set
}

if (number & THAT)
{
    // code for that bit being set
}

if (number & OTHER)
{
    // code for other bit being set
}```

5. Hi Gemera,

Thx for the response. What you're proposing is, I think, kind of the way I did it to check for the 8 different values, but I didn't use defines (remember, not a programmer, never used 'em).

Basically my code was along the lines of
Code:
`If value = 1 { code }, if value = 2 { code }, ... if value = 128 { code }`
If I do the define way you're showing above does that mean I'm basically going to have to have 128 different defines, one for each value and basically the IF statement is going to become
Code:
`(If value = every possible case of values that could include the value I want {code}?)`
I'm probably reading it wrong cause that would be a lot of conditions to check (and type).

Appreciate the help

6. Originally Posted by dleach
If I do the define way you're showing above does that mean I'm basically going to have to have 128 different defines
If you do indeed have 128 different conditions to test, then yes, you cannot avoid testing for them no matter what.

Originally Posted by dleach
the IF statement is going to become
Not necessarily. I suggest that you read up on bitwise operations and manipulating bits, starting from the link I gave you.

7. Here is a simple program using preprocessor directives to create simple macros. For a byte of eight bits, you would only need eight macros to test each possible bit.

Code:
```#include <stdio.h>

#define TEST_BIT_0(byte)  ((byte) & (0x01))  // 0x01 = 0000 0001
#define TEST_BIT_1(byte)  ((byte) & (0x02))  // 0x02 = 0000 0010
#define TEST_BIT_2(byte)  ((byte) & (0x04))  // 0x04 = 0000 0100
#define TEST_BIT_3(byte)  ((byte) & (0x08))  // 0x08 = 0000 1000
#define TEST_BIT_4(byte)  ((byte) & (0x10))  // 0x10 = 0001 0000
#define TEST_BIT_5(byte)  ((byte) & (0x20))  // 0x20 = 0010 0000
#define TEST_BIT_6(byte)  ((byte) & (0x40))  // 0x40 = 0100 0000
#define TEST_BIT_7(byte)  ((byte) & (0x80))  // 0x80 = 1000 0000

int main(void)
{
    char testByte = 0x5E;  // 0101 1110

    if (TEST_BIT_0(testByte))  printf("Bit 0 is set\n");
    if (TEST_BIT_1(testByte))  printf("Bit 1 is set\n");
    if (TEST_BIT_2(testByte))  printf("Bit 2 is set\n");
    if (TEST_BIT_3(testByte))  printf("Bit 3 is set\n");
    if (TEST_BIT_4(testByte))  printf("Bit 4 is set\n");
    if (TEST_BIT_5(testByte))  printf("Bit 5 is set\n");
    if (TEST_BIT_6(testByte))  printf("Bit 6 is set\n");
    if (TEST_BIT_7(testByte))  printf("Bit 7 is set\n");

    if (TEST_BIT_2(testByte) && TEST_BIT_3(testByte))
        printf("\nBits 2 and 3 are set\n");

    return 0;
}

/*
Output:

Bit 1 is set
Bit 2 is set
Bit 3 is set
Bit 4 is set
Bit 6 is set

Bits 2 and 3 are set
*/```
This program takes a single byte "testByte" with the value 0x5E (I'm using hex because it's easier to visualize bit patterns in base 16). It tests each bit and, if it is set, prints a message indicating so.

It also shows how you can test multiple bits at once using the logical AND operator.

To understand logically what is happening:

Code:
```TEST_BIT_0:
0x5E : 0101 1110    // byte being tested
0x01 : 0000 0001 &  // bit position to test
       ---------
       0000 0000    // returns FALSE (zero), so bit zero is not set

TEST_BIT_1:
0x5E : 0101 1110    // byte being tested
0x02 : 0000 0010 &  // bit position to test
       ---------
       0000 0010    // returns TRUE (non-zero), so bit one is set

// ...etc```

8. laserlight -
When I did this same task before, bitwise operators are how I did it, but it wasn't in C, so I'm having trouble replicating it. The basic idea I used was: start with a U16 value -> convert it to binary (using a built-in function) -> start at one end of the string of 1's and 0's and shift by 1. If it was a 1, I set the variable true for whatever that bit represented. Then shift another place, do the same test, and write whether that bit was high or not to another variable, and repeat until the end.

The problem is, I don't know how to do that second step of turning the U16 into that string of 1's and 0's in C. Before, it was built in...

Is this the same procedure you're recommending? If so, is there a built in command I can just call to do this conversion? I could take care of the rest on my own. If not, I may have to keep working on the other commenters method of using the Defines and typing out all those different test cases. :-/

9. Matt -

Really appreciate that code; I think I might be able to get it to work. Essentially it's just testing each bit, so instead of printf, I'd just run my other code.
Until I hear back from laserlight I'm going to move forward with testing your strategy. The only question I have is: my value isn't hex, so would I just change it like this?
Also, it looks like in your examples it won't matter if multiple bits are high in the 8 bits, i.e. having bit 6 on won't affect our test at bit 3, etc.

Code:
```#define TEST_BIT_0(byte)  ((byte) & (0x01))  // 0x01 = 0000 0001
#define TEST_BIT_1(byte)  ((byte) & (0x02))  // 0x02 = 0000 0010
.
.

#define TEST_BIT_6(byte)  ((byte) & (0x40))  // 0x40 = 0100 0000
#define TEST_BIT_7(byte)  ((byte) & (0x80))  // 0x80 = 1000 0000
```

to:
Code:
```#define TEST_BIT_0(byte)  ((byte) & (1))  // 1 = 0000 0001
#define TEST_BIT_1(byte)  ((byte) & (2))  // 2 = 0000 0010
#define TEST_BIT_2(byte)  ((byte) & (4))  // 4 = 0000 0100
#define TEST_BIT_3(byte)  ((byte) & (8))  // 8 = 0000 1000
#define TEST_BIT_4(byte)  ((byte) & (16))  // 16 = 0001 0000
#define TEST_BIT_5(byte)  ((byte) & (32))  // 32 = 0010 0000
#define TEST_BIT_6(byte)  ((byte) & (64))  // 64 = 0100 0000
#define TEST_BIT_7(byte)  ((byte) & (128))  // 128 = 1000 0000```

10. Originally Posted by dleach
When I did this same task before, bitwise operators are how I did it, but it wasn't in C, so I'm having trouble replicating it. The basic idea I used was: start with a U16 value -> convert it to binary (using a built-in function) -> start at one end of the string of 1's and 0's and shift by 1. If it was a 1, I set the variable true for whatever that bit represented. Then shift another place, do the same test, and write whether that bit was high or not to another variable, and repeat until the end.

The problem is, I don't know how to do that second step of turning the U16 into that string of 1's and 0's in C. Before, it was built in...
I don't understand your first step: why do you need to "convert to binary" in the first place, in that other programming language?

Like, suppose you have:
Code:
`unsigned int x = 5;`
If you want to test for a particular bit, you might write:
Code:
`unsigned int result = x & (1 << 2);`
The (1 << 2) left shifts these bits:
Code:
`... 0001`
to get:
Code:
`... 0100`
then you compare with the bits of x:
Code:
`x: ... 0101`
to get:
Code:
`... 0100`
which, being non-zero, indicates that the bit that you were testing for was set.

Instead of doing the shift, you might define some macros with the pre-computed values like what Matticus suggested, but it boils down to the same thing. In fact, notice that Matticus has an example similar to mine at the end of post #8.

Originally Posted by dleach
If so, is there a built in command I can just call to do this conversion? I could take care of the rest on my own.
There is no conversion: an unsigned int is already in binary. Even if you were using a quantum computer, you could treat the value as if it were in binary by using the bitwise operators.

When you say "string of 1's and 0's", did you really mean a string of chars, or do you mean "string" in the general sense of a sequence?

EDIT:
Originally Posted by dleach
Until I hear back from laserlight I'm going to move forward with testing your strategy
Matticus and I have the same strategy. However, his method is rather verbose. Instead of:
Code:
```#define TEST_BIT_0(byte)  ((byte) & (0x01))  // 0x01 = 0000 0001
#define TEST_BIT_1(byte)  ((byte) & (0x02))  // 0x02 = 0000 0010
#define TEST_BIT_2(byte)  ((byte) & (0x04))  // 0x04 = 0000 0100
#define TEST_BIT_3(byte)  ((byte) & (0x08))  // 0x08 = 0000 1000
#define TEST_BIT_4(byte)  ((byte) & (0x10))  // 0x10 = 0001 0000
#define TEST_BIT_5(byte)  ((byte) & (0x20))  // 0x20 = 0010 0000
#define TEST_BIT_6(byte)  ((byte) & (0x40))  // 0x40 = 0100 0000
#define TEST_BIT_7(byte)  ((byte) & (0x80))  // 0x80 = 1000 0000```
You could just write:
Code:
`#define TEST_BIT(value, bit_offset)  ((value) & (1 << (bit_offset)))`
In which case you could turn this:
Code:
```if(TEST_BIT_0(testByte))  printf("Bit 0 is set\n");
if(TEST_BIT_1(testByte))  printf("Bit 1 is set\n");
if(TEST_BIT_2(testByte))  printf("Bit 2 is set\n");
if(TEST_BIT_3(testByte))  printf("Bit 3 is set\n");
if(TEST_BIT_4(testByte))  printf("Bit 4 is set\n");
if(TEST_BIT_5(testByte))  printf("Bit 5 is set\n");
if(TEST_BIT_6(testByte))  printf("Bit 6 is set\n");
if(TEST_BIT_7(testByte))  printf("Bit 7 is set\n");```
into a loop:
Code:
```for (i = 0; i < 8; ++i)
{
if(TEST_BIT(testByte, i))
{
printf("Bit %d is set\n", i);
}
}```
Originally Posted by dleach
my value isn't hex
That is true, but your value is not binary or decimal either. Your value is (presumably) an integer. It can be represented in binary. It can be represented in decimal. It can also be represented in hexadecimal. In other words, 2 == 0x02 and 2 == (1 << 1). They are different ways to refer to the same value.

11. That is just two different ways to do exactly the same thing. However, using hexadecimal values makes it more obvious for the reader.

12. Hi Laserlight,

Great response, thanks. I understand how those defines are working and basically you and matticus were going about it the same way. His was just a little more broken out. (His response helped me to understand yours though by being like that lol)

The previous language I did this in was LabVIEW (yes, debatable to call it a language), but you couldn't do bitwise operations directly on an integer. It sounds like with C, if I had a variable equal to, say, 15, I could just do bit operations on it directly. In LabVIEW you couldn't; you had to change its representation to binary first, and then perform operations. Nice to know in C you don't have to do that.

Anyhow, I've adapted your take on this to my code and I'm pretty sure it's all done. I'm just seeing 2 errors with the for loop on this line:

Code:
`for (int i = 0; i < 8; ++i)`
line 114:error #29: expected an expression
line 114:error #20: identifier "i" is undefined

13. You are probably compiling with respect to C89/C90, hence you must declare the variable before the loop, at the start of the block.

14. Originally Posted by laserlight
You are probably compiling with respect to C89/C90, hence you must declare the variable before the loop, at the start of the block.
You rock. That fixed it.
Everything seemed to compile error free. I'll download it to the chip on Monday and see if we're in business.

Thanks for the help!
