Quote Originally Posted by Subsonics View Post
Makes you wonder why it was not defined as: (1<<7), (1<<6), (1<<5) etc. to begin with.
It's so that you can see the 0 or 1 and change it easily, whilst also seeing which bit in the register is being written to - see the example below writing to SPCR and note that they are not all (1<<n).
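
For instance, a typical SPI master setup written in this style - a minimal sketch for an AVR such as the ATmega328P, assuming the SPCR bit names (SPIE, SPE, DORD, MSTR, CPOL, CPHA, SPR1, SPR0) that <avr/io.h> provides:

Code:
#include <avr/io.h>

void spi_init_master(void)
{
    /* Every bit of SPCR is spelled out, 0 or 1, in datasheet order:
       interrupt off, SPI enabled, MSB first, master mode,
       SPI mode 0, clock rate Fosc/16 */
    SPCR = (0<<SPIE) | (1<<SPE) | (0<<DORD) | (1<<MSTR) |
           (0<<CPOL) | (0<<CPHA) | (0<<SPR1) | (1<<SPR0);
}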

I find that this is the best way, because I can easily see what is being set and what is being cleared, and refer directly to the data sheet. If the bits were already defined as (1<<n) and you wanted to clear one, you would have to omit it or come up with another name for an "off" definition - compare the sketch below.
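
Something like this is what you would be left with if the header only gave you pre-shifted masks (SPE_MASK and friends are hypothetical names here, not from any real header):

Code:
#define SPE_MASK  (1<<6)   /* hypothetical pre-shifted definitions */
#define MSTR_MASK (1<<4)
#define SPR0_MASK (1<<0)

/* Cleared bits are simply absent - you cannot see at a glance
   that DORD, CPOL, CPHA, SPR1 etc. are all being written to 0. */
SPCR = SPE_MASK | MSTR_MASK | SPR0_MASK;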

But as I said earlier, most people hate this notation - I like it because it is an ANSI standard way of dealing with each bit (unlike Microchip's use of anonymous bit fields within a union) and it is easy to see what is being set/cleared (unlike CCS, where a description of what you want done is defined and you don't actually know what is being set/cleared - this makes referring to the data sheet annoying).
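
For contrast, the Microchip (PIC) header style I mean looks roughly like this - an illustrative sketch, not copied from a real header:

Code:
typedef union {
    struct {                 /* anonymous struct of bit fields - a compiler
                                extension, not ANSI C89 */
        unsigned SSPEN : 1;
        unsigned CKP   : 1;
        /* ... one field per register bit ... */
    };
    unsigned char reg;       /* the whole register as a byte */
} SSPCON1bits_t;

/* individual bits are then written by name: */
SSPCON1bits.SSPEN = 1;

It reads nicely, but the mapping from field name to bit position is buried in the header rather than visible at the point of use.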