>Can you explain it for me? I am a dummy in C.
CHAR_BIT is a macro from <limits.h> representing the number of bits in the char type, so if your system uses octets for bytes, CHAR_BIT will be 8. Assuming that, the macro becomes:
Code:
!!(a[i / 8] >> (8 - (i % 8) - 1) & 1)
i / 8 takes a value, presumably in the range [0,32), and maps it to the array index we want. A quick test program will probably highlight the effect better than I can explain it:
Code:
#include <stdio.h>

int main(void)
{
    int i;

    for (i = 0; i < 32; i++)
        printf("%02d -> %d\n", i, i / 8);

    return 0;
}
i % 8 is the bit index within a single byte, counting from the least significant bit. Much like i / 8, where the value of i is forced into a range of indices for the char array ([0,4) in the case of your four-byte array), i % 8 forces i into a range of indices for the bits within a byte:
Code:
#include <stdio.h>

int main(void)
{
    int i;

    for (i = 0; i < 32; i++)
        printf("%02d -> %d\n", i, i % 8);

    return 0;
}
However, if we want that bit index to work from the most significant bit rather than the least significant bit, it needs to be reversed. That's where subtracting the index from the upper end of a range comes in, and because indices are zero-based, the result is decremented:
Code:
#include <stdio.h>

int main(void)
{
    int i;

    for (i = 0; i < 32; i++)
        printf("%02d -> %d -> %d\n", i, 8 - (i % 8), 8 - (i % 8) - 1);

    return 0;
}
So for any index in [0,32), we want to convert that index to both an appropriate array index and an appropriate bit index. Something like this, if you think of the bit array as a two-dimensional array:
Code:
#include <stdio.h>

int main(void)
{
    int i;

    for (i = 0; i < 32; i++)
        printf("a[%d][%d]\n", i / 8, 8 - (i % 8) - 1);

    return 0;
}
Once you have those values, the actual bit can be acquired through bitwise operations. Removing all of the index expressions turns this into !!(byte >> index & 1). The right shift (>>) moves the bit we want to the least significant position. The AND (&) extracts the least significant bit. Using 0xAA with an index of 3 as an example:
Code:
10101010 >> 3 = 00010101
00010101 & 1 = 00000001
Finally, the double NOT forces the extracted bit into a strict 0 or 1. Technically, the double NOT isn't necessary here because the least significant bit using this method will always be 0 or 1, but another common method is ANDing against a mask built using the index:
Code:
00000001 << 3 = 00001000
10101010 & 00001000 = 00001000
Code:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned char x = 0xAA;
    int i;

    for (i = 0; i < CHAR_BIT; i++)
        printf("%d\n", x & (1 << i));

    return 0;
}
Notice that the output doesn't produce 0s and 1s. Logical NOT can fix this by first asking "is this value zero?", which produces 0 for set bits and 1 for unset bits. However, that's the opposite of what we want, so a second NOT flips the result such that set bits produce 1 and unset bits produce 0:
Code:
00000001 << 3 = 00001000
10101010 & 00001000 = 00001000
!00001000 = 0
!0 = 1
Code:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned char x = 0xAA;
    int i;

    for (i = 0; i < CHAR_BIT; i++)
        printf("%d\n", !!(x & (1 << i)));

    return 0;
}
>I wonder how to change the value of a bit at a assigned position in above code.
Bitwise AND with an inverted mask clears a bit, and bitwise OR with a mask sets a bit. You'll typically see macros that do all of this for you; here are the three most common single-bit operations (test, set, and clear):
Code:
#include <stdio.h>
#include <limits.h>

#define BIT_AT(x,i)    (!!((x) & (1 << (i))))
#define BIT_CLEAR(x,i) ((x) & ~(1 << (i)))
#define BIT_SET(x,i)   ((x) | (1 << (i)))

void print_bits(unsigned char x)
{
    int i;

    for (i = 0; i < CHAR_BIT; i++)
        printf("%d", x >> (CHAR_BIT - (i % CHAR_BIT) - 1) & 1);

    puts("");
}

int main(void)
{
    unsigned char x = 0;

    puts("Testing BIT_SET");
    print_bits(x);
    x = BIT_SET(x, 3);
    print_bits(x);
    x = BIT_SET(x, 0);
    print_bits(x);
    x = BIT_SET(x, 7);
    print_bits(x);

    puts("\nTesting BIT_AT");
    printf("%s at index 3\n", BIT_AT(x, 3) ? "SET" : "UNSET");
    printf("%s at index 0\n", BIT_AT(x, 0) ? "SET" : "UNSET");
    printf("%s at index 1\n", BIT_AT(x, 1) ? "SET" : "UNSET");
    printf("%s at index 7\n", BIT_AT(x, 7) ? "SET" : "UNSET");
    printf("%s at index 6\n", BIT_AT(x, 6) ? "SET" : "UNSET");

    puts("\nTesting BIT_CLEAR");
    print_bits(x);
    x = BIT_CLEAR(x, 3);
    print_bits(x);
    x = BIT_CLEAR(x, 0);
    print_bits(x);
    x = BIT_CLEAR(x, 7);
    print_bits(x);

    return 0;
}
Note that these macros work from the least significant bit. The reason my initial macro works from the most significant bit is to match the indexing of the char array. Logically, the bit array from your example would look like the following under such a scheme:
Code:
0                                 32
00001111 00110011 01010101 00001111
It makes figuring out the indices easier when they all go in the same direction. Working from the most significant bit also makes sense in cases such as print_bits, where the bits need to be displayed from left to right starting with the most significant bit. Otherwise you get the bits printed in reverse, which can be confusing.
Now your homework is to test your understanding of all this crap by editing the BIT_SET macro and coming up with your own solution to your most recent question of how to set one of the bits in the char array. All of the tools and information are available in this post, so it shouldn't be difficult provided you've understood my rambling explanations.