I am working through one of the exercises from The C Programming Language book:
The C Programming Language Exercise 2-9
As I play around with this, I notice something strange:
Code:
#include <stdio.h>

#define BUF_SIZE 33

/* Write the 32-bit binary representation of a into buffer,
   filling it from the last position backwards. */
char *int2bin(int a, char *buffer, int buf_size) {
    buffer += (buf_size - 1);       /* start at the last character slot */
    int i;
    for (i = 31; i >= 0; i--) {
        *buffer-- = (a & 1) + '0';  /* store the lowest bit as '0' or '1' */
        a >>= 1;
    }
    return buffer;
}

int main(void) {
    char buffer[BUF_SIZE];
    buffer[BUF_SIZE - 1] = '\0';    /* terminate the string */

    /* xx here is even */
    int xx = 0x4;
    int2bin(xx, buffer, BUF_SIZE - 1);
    printf("a = %s\n", buffer);

    xx &= (xx - 1);                 /* the expression from the exercise */
    int2bin(xx, buffer, BUF_SIZE - 1);
    printf("a = %s\n", buffer);

    return 0;
}
This program prints:
a = 00000000000000000000000000000100
a = 00000000000000000000000000000000
I understand why this happens: 00000100 is the binary representation of 4, and 00000011 is the binary representation of 3. When we AND these two, we get 00000000.
Notice how the solution explains what the AND operation does:
If x is even, then the representation of (x-1) has the rightmost zeros of x becoming ones and the rightmost one becoming a zero. Anding the two clears the rightmost 1-bit in x and all the rightmost 1-bits from (x-1).
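To make sure I follow that reasoning, here is a small example I worked through by hand, using an arbitrary even value with more than one set bit (x = 12), where only the rightmost 1-bit should be cleared:

x         = 12 -> 00001100
x - 1     = 11 -> 00001011
x & (x-1) =  8 -> 00001000   (only the rightmost 1-bit of x is cleared)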
OK, but what is the purpose of all this? What are we trying to prove here? I don't understand what the point of this exercise is.