Originally Posted by
dleach
When I did this same task before, I used bitwise operators, but it wasn't in C, so I'm having trouble replicating it. The basic idea I used was: start with a U16 value -> convert it to binary (using a built-in function) -> start at one end of the string of 1's and 0's and shift by 1. If it was a 1, I set the variable true for what that bit represented. Then shift again by another place, do the same test, and write to another variable whether that bit was high or not, and repeat this until the end.
The problem is, I don't know how to do that second step of turning the U16 into that string of 1's and 0's in C. Before, it was built in...
I don't understand your first step: why do you need to "convert to binary" in the first place, in that other programming language?
Like, suppose you have:
Code:
unsigned int x = 5;
If you want to test for a particular bit, you might write:
Code:
unsigned int result = x & (1 << 2);
The (1 << 2) left shifts these bits:
Code:
0000 0001
to get:
Code:
0000 0100
then the bitwise AND compares these with the bits of x:
Code:
0000 0101
to get:
Code:
0000 0100
which, being non-zero, indicates that the bit that you were testing for was set.
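Putting those snippets together, here is a minimal complete program you could compile and run (the output noted in the comment is what I would expect):
Code:
#include <stdio.h>

int main(void)
{
    unsigned int x = 5;                 /* 0000 0101 in binary */
    unsigned int result = x & (1 << 2); /* test bit 2 */

    if (result)
    {
        printf("bit 2 is set\n"); /* prints, since 5 has bit 2 set */
    }
    return 0;
}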
Instead of doing the shift, you might define some macros with the pre-computed values, as Matticus suggested, but it boils down to the same thing. In fact, notice that Matticus has an example similar to mine at the end of post #8.
Originally Posted by
dleach
If so, is there a built-in command I can just call to do this conversion? I could take care of the rest on my own.
There is no conversion. An unsigned int is already in binary. Even if you were using a quantum computer, you could treat it as if it were in binary by using the bitwise operators.
When you say "string of 1's and 0's", did you really mean a string of chars, or do you mean "string" in the general sense of a sequence?
EDIT:
Originally Posted by
dleach
Until I hear back from laserlight I'm going to move forward with testing your strategy
Matticus and I have the same strategy. However, his method is rather verbose. Instead of:
Code:
#define TEST_BIT_0(byte) ((byte) & (0x01)) // 0x01 = 0000 0001
#define TEST_BIT_1(byte) ((byte) & (0x02)) // 0x02 = 0000 0010
#define TEST_BIT_2(byte) ((byte) & (0x04)) // 0x04 = 0000 0100
#define TEST_BIT_3(byte) ((byte) & (0x08)) // 0x08 = 0000 1000
#define TEST_BIT_4(byte) ((byte) & (0x10)) // 0x10 = 0001 0000
#define TEST_BIT_5(byte) ((byte) & (0x20)) // 0x20 = 0010 0000
#define TEST_BIT_6(byte) ((byte) & (0x40)) // 0x40 = 0100 0000
#define TEST_BIT_7(byte) ((byte) & (0x80)) // 0x80 = 1000 0000
You could just write:
Code:
#define TEST_BIT(value, bit_offset) ((value) & (1 << (bit_offset)))
In which case you could turn this:
Code:
if(TEST_BIT_0(testByte)) printf("Bit 0 is set\n");
if(TEST_BIT_1(testByte)) printf("Bit 1 is set\n");
if(TEST_BIT_2(testByte)) printf("Bit 2 is set\n");
if(TEST_BIT_3(testByte)) printf("Bit 3 is set\n");
if(TEST_BIT_4(testByte)) printf("Bit 4 is set\n");
if(TEST_BIT_5(testByte)) printf("Bit 5 is set\n");
if(TEST_BIT_6(testByte)) printf("Bit 6 is set\n");
if(TEST_BIT_7(testByte)) printf("Bit 7 is set\n");
into a loop:
Code:
for (int i = 0; i < 8; ++i)
{
    if (TEST_BIT(testByte, i))
    {
        printf("Bit %d is set\n", i);
    }
}
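For reference, here is the whole thing as a compilable program; the value I assign to testByte is just an example I picked:
Code:
#include <stdio.h>

#define TEST_BIT(value, bit_offset) ((value) & (1 << (bit_offset)))

int main(void)
{
    unsigned char testByte = 0x2A; /* 0010 1010: bits 1, 3 and 5 are set */

    for (int i = 0; i < 8; ++i)
    {
        if (TEST_BIT(testByte, i))
        {
            printf("Bit %d is set\n", i);
        }
    }
    return 0;
}
which would print "Bit 1 is set", "Bit 3 is set" and "Bit 5 is set".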
Originally Posted by
dleach
my value isn't hex
That is true, but your value is not binary or decimal either. Your value is (presumably) an integer. It can be represented in binary. It can be represented in decimal. It can also be represented in hexadecimal. In other words, 2 == 0x02 and 2 == (1 << 1). They are just different ways to refer to the same value.
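You can convince yourself of this by printing the same value written three different ways; all three conversions print the same thing:
Code:
#include <stdio.h>

int main(void)
{
    /* the same value written in decimal, hex, and as a shift */
    printf("%d %d %d\n", 2, 0x02, 1 << 1); /* prints: 2 2 2 */
    return 0;
}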