# Thread: using bitwise & to convert to octal

1. ## using bitwise & to convert to octal

Okay, so I have no idea how this would work. I'm supposed to convert binary to decimal, then decimal to octal. I got the binary-to-decimal part, but the decimal-to-octal part confuses me. The assignment says to "employ a bit mask" (an integer which, when ANDed with the original binary integer, gives you the corresponding octal bits). The bit mask is normally generated by taking several '1' bits and left-shifting them until they move from the least significant position up to the positions of the bits we want to extract from the input binary integer. The result then needs to be shifted right until the extracted bits arrive back at the least significant position.

Here's my failed attempt...
Code:
```c
void convertBase(int decimal) /* Function that converts decimal to base 8 */
{
    char mask1[MAXBITS + 1] = {1,1,1,0,0,0};
    char mask2[MAXBITS + 1] = {0,0,0,1,1,1};
    int bit = 0;
    int weight = 1;
    int firstDigit;
    int secondDigit;
    int result = 0; /* Where the decimal result will go */

    /* Convert mask1 from binary to decimal */
    for (bit = (MAXBITS - 1); bit >= 0; bit--, weight *= 2)

        printf("Octal Representation of Binary Number: %d%d\n", firstDigit, secondDigit);
}
```

2. Is your name Rube Goldberg by any chance?

Code:
```c
void convertBase(int decimal) /* Function that converts decimal to base 8 */
{
    const int mask1 = (7 << 3);  /* selects bits 5..3 (high octal digit) */
    const int mask2 = (7 << 0);  /* selects bits 2..0 (low octal digit)  */

    /* Extract each 3-bit group, shifting the high group back down. */
    int firstDigit = (decimal & mask1) >> 3;
    int secondDigit = (decimal & mask2);

    printf("Octal Representation of Binary Number: %d%d\n", firstDigit, secondDigit);
}
```
--
Mats