All operands are promoted to the highest type involved in an operation, and both arithmetic and bitwise operations are performed with a minimum type of int (the "integer promotions").
What does that mean? Well, Dave_Sinkula mentioned the problem, but apparently didn't explain it in enough detail. So let's activate the human debugger and step through:
const int HIGH_SPEED = (1<<7);
-> HIGH_SPEED = binary 00000000 00000000 00000000 10000000
const int DIRECT_CONNECT = (1<<8);
-> DIRECT_CONNECT = binary 00000000 00000000 00000001 00000000
char flags = 0;
-> flags = binary 00000000
flags |= HIGH_SPEED;
-> Operation performed on integers. flags is fetched and promoted to 4 0-bytes. HIGH_SPEED is or-ed in, resulting in this:
-> binary 00000000 00000000 00000000 10000000 = decimal 128
-> Truncation to char (which is signed here): 128 is outside the range (CHAR_MAX is 127), so the result is implementation-defined.
-> As MS happens to do it, the result is bit-pattern conversion, thus flags results in
-> binary 10000000 = decimal -128
flags |= DIRECT_CONNECT;
-> Again, flags is promoted. Here you can see the problem for the first time, although it has no effect yet.
-> Promotion from signed type to signed type always preserves value, thus the promoted value is -128 too, in 4 bytes:
-> binary 11111111 11111111 11111111 10000000
-> The or-ing has no effect, since sign extension already set bit 8; the result is the same. Truncation is well-defined this time, as -128 is within char's range:
-> flags = binary 10000000 = decimal -128
if((flags & HIGH_SPEED) != 0)
-> flags again needs to be promoted. The sign extension occurs again, but again it does no harm: bit 7 really is set, so the test gives the right answer.
if((flags & DIRECT_CONNECT) != 0)
-> flags again promoted.
-> ((int)flags) = binary 11111111 11111111 11111111 10000000 = decimal -128
-> DIRECT_CONNECT = binary 00000000 00000000 00000001 00000000
-> AND = binary 00000000 00000000 00000001 00000000 != 0
I hope the problem is clear now. There are two parts to the solution.
First, ALWAYS use UNSIGNED types for bit fields and bit constants.
Second, ALWAYS use the SAME type for bit fields and their associated constants.