I'd like to take a char, convert it to its bit-pattern, and be able to modify the bits, is this possible in C++?
Well, yes, except that there's no conversion to do: a char already is its bit pattern. A char is just a small integer, and you can do bit manipulations on it the same as on an int.
I guess I should rephrase: how do I perform those bitwise operations?
Well, there are six bitwise operators:
<< shift left
>> shift right
| bitwise or
& bitwise and
~ bitwise not
^ bitwise xor
The first two just shift the number of bits (so x << 1 shifts the bits in x one spot to the left, and fills in with 0). ~ reverses bits; the other three do the boolean operator bit-by-bit.
Okay, cool. So I'd like to get bits 5 and 6 of a char and modify them (because those bits control which grouping of 32 chars I'm in). Bear with me, this is all part of a program I'm writing.
How do we go about that?
If I say:

Code:
char a_char;
cin >> a_char;
// get the 5th and 6th bits:
bool bit5 = a_char << 5;

Perhaps you can tell where I'm getting confused about how to use the bitwise operations to perform this.
Bit shifting is going to move the bits, not tell you what's where. You should look up "bitwise operation" in Wikipedia.
You might want something like this:

Code:
#define BIT_FIVE 0x10
#define BIT_SIX 0x20

bool bit5 = a_char & BIT_FIVE;

(As a side note, you really need to write a letter to the publisher of your C++ book and tell them how bad their index is, since it doesn't have bit operations in it.)
Edit to add: of course 0x10 is the fifth bit from the right; the fifth bit from the left would be 0x08 in a 1-byte char, where "right" means "least significant" and "left" means "most significant".
Let me explain what tabstop's code would do.
It basically uses a mask to get only the bits concerned and save them in another variable.
assume a_char = (in binary) 01110001
bit positions:              76543210
Please note that (according to my books) bits start counting from 0.
So you want the 5th bit, which would be bit 4 here, right?
What you do is you create a mask, that will omit all other bits and only keep that one bit, whether it is 0 or 1.
This mask would be 00010000
Now, if you compare these two using & this is what would happen:
a_char: 01110001
mask:   00010000
        --------
result: 00010000
As you can see, since all the bits are set to zero except bit #4 then only that bit will be saved and the others all set to 0.
Now, if you want to save bits #4,5 then you'll just have to modify the mask. I'll leave that for you to solve (it's really easy).
The bitwise AND operator only "keeps" the bits that are 1 in the mask. It doesn't matter what state the bits other than the fifth are in, because they'll all be tossed away. And if the 5th bit is 0, you get the result 0, because that bit is extracted regardless; if it's 1, the result is nonzero because the 1 came through.
It seems many C++ books omit this information; it's as if they think this is just old, useless C-style programming that C++ programmers shouldn't concern themselves with.
Anyways, back to the problem at hand.
So the "mask" is really a bitwise trick of sorts? I need to use it as an intermediate step to get the bits I want, right?
So for bits 5 and 6 (writing the values in binary):
a_char  = 01101010
my_mask = 00110000
result  = 00100000
and then use the result as needed (keeping in mind that a 0 at position 5 or 6 in the result means that bit was 0 to begin with, and a 1 means it was 1 to begin with)
I think I get the idea now.
okay thank you all, I've still got more to learn about this but at least I've got a starting point now.
Don't get mad though, I don't understand the syntax you guys are using with:
0x30
0x10
etc.
How do those represent a specific bit within a byte?
They are numbers in hexadecimal representation. Makes it easier to convert to binary, e.g.,
0x30
0011 0000
since 3_16 = 11_2 and 0_16 = 0_2, and of course 0x30 = 3 * 16 + 0 = 48.
thanks, so does the 0x refer to the sign of the number, or is it simply denoting that it's in hex?
should i assume two's complement then? i've heard that's processor specific though...
Quote:
thanks, so does the 0x refer to the sign of the number, or is it simply denoting that it's in hex?
Simply notation.
Quote:
should i assume two's complement then? i've heard that's processor specific though...
It will be one's complement, two's complement, or sign-and-magnitude, but exactly which is implementation defined.