I'm not too keen on bit fields myself - I prefer packing information into a standard (unsigned) data type and manipulating it with bitwise operators.
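Just to show the contrast, here's a rough sketch of the bit-field alternative (the struct and field names are made up for illustration) - with bit fields the compiler decides the layout, whereas the masking approach later in this post keeps it explicit:
Code:
// Sketch only - motor_flags and its members are illustrative names
struct motor_flags {
    unsigned int motor1 : 1;    // one bit per flag; layout is up to the compiler
    unsigned int motor2 : 1;
};

struct motor_flags f = {0};
f.motor1 = 1;                   // set through a named member instead of a mask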
Is there any difference between writing:
Code:
#define CONSTANT 0x01
// ... and ...
#define CONSTANT 1
You might not see much of a difference there, since the decimal and hex representations are identical for values from zero to nine. But what about:
Code:
#define CONSTANT 0x1D
// ... and ...
#define CONSTANT 29
It comes down to context. If you're using the values as actual numbers (such as defining the maximum size of an array), you'll want them in decimal so that you, the programmer, can easily see the quantity.
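For example (MAX_READINGS is just an illustrative name, not something from your code):
Code:
#define MAX_READINGS 29         // decimal: obviously "twenty-nine readings"
int readings[MAX_READINGS];     // the value is used as an ordinary quantity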
There are several reasons to define hex values instead. One is that hex is a shorthand for binary, which makes it much easier to write values used for bit masking.
Code:
#define ENABLE_MOTOR_1 0x08 // binary: 0000 1000
#define ENABLE_MOTOR_2 0x02 // binary: 0000 0010
// ...
unsigned char control = 0x00;
control |= ENABLE_MOTOR_1; // control = 0x08, motor 1 on
control |= ENABLE_MOTOR_2; // control = 0x0A, motor 1 and motor 2 on
control &= ~(ENABLE_MOTOR_1); // control = 0x02, motor 2 on
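The same masks make it just as easy to read the state back or toggle a bit - a quick sketch continuing the example above:
Code:
if (control & ENABLE_MOTOR_2)   // non-zero only if motor 2's bit is set
{
    // motor 2 is currently enabled
}
control ^= ENABLE_MOTOR_2;      // XOR toggles motor 2 (turning it off here, since it was on)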