I was just thinking, why can't you use binary numbers like this: 0b11010101, when you can use hexadecimal numbers 0xF7A0??? Is there some reason for this?
This would make bitmasking easier...
(MyFlags & 0b00000101)
instead of
(MyFlags & 5)
MagosX.com
Give a man a fish and you feed him for a day.
Teach a man to fish and you feed him for a lifetime.
Well, you could always write an inline function to convert a binary string to a number, but for most people hex representation is adequate; do enough bitwise work and you eventually memorize the nibbles.
>This would make bitmasking easier...
Perhaps for small values, but it isn't practical when the values get larger than 8 bits. Would you rather write this:
mask &= 0b0100000000000000
or this:
mask &= 0x4000
Looks like no contest to me; it's confusing enough with just a 16-bit binary value. Can you be sure it's really 16 bits without counting? What if I needed a high bit in a 64-bit value? Would you want to maintain code like that? It's just too error-prone to have any real use.
-Prelude
My best code is written with the delete key.