Is it a good idea to decode bitmasks by declaring a struct with the corresponding bit-fields and then casting the address of the memory register to a pointer to that struct? Or are bit-fields (as somebody told me) slow?
Current Setup: Win 10 with Code::Blocks 17.12 (GNU GCC)
Bit-fields are non-portable; that is, you cannot use them to pick apart individual bits in some externally defined data (for example a file, or a hardware status register).
For external data, read an unsigned char / short / long as appropriate, then use &, |, >> and << to manipulate individual bits as required.
You can't use them for that? Then what's the point in them?
Whether using bit-fields is slower than manually coded bit-wise operations depends on two things, as I see it. First, if the compiler is smart it may produce the same sequence of bit-wise operations for a bit-field access as you would code manually, and it may even schedule them for speed (if scheduling the instruction stream yields better execution performance). Secondly, it might use instructions that are superior to those available through C. For example, the PowerPC processor has a few instructions (rlwinm "Rotate Left Word Immediate then AND with Mask", rlwnm "Rotate Left Word then AND with Mask", and rlwimi "Rotate Left Word Immediate then Mask Insert") that can fetch a bit-field in one instruction and a single cycle. BOOM! Those instructions are not directly available in C (you would have to use inline asm { }, etc.), but when a good compiler sees a bit-field access it can use such an instruction, and the access is then faster than anything you can express with C's bit-wise operators. I suggest you experiment with it and see the result. As Salem said, bit-fields are not portable, and especially watch out for little-endian and big-endian architectures!
If you are trying to make it portable, you may be successful with this type of handling:

Code:
#if defined BIG_ENDIAN
typedef struct T {
    unsigned int : 20;   /* pad bits */
    unsigned int i : 12;
} T;
#elif defined LITTLE_ENDIAN
typedef struct T {
    unsigned int i : 12;
} T;
#else
#error ENDIAN NOT DEFINED
#endif

But don't count on it! You must check it very carefully. If you want to release software for a few different target machines, though, you may be able to port the code successfully like that.
Last edited by fischerandom; 02-08-2006 at 03:47 AM.
From a portable code point of view, they have no use at all.
But for storing a lot of internal information in a compact form, they can be very useful indeed.
Originally of course, when there was only one C compiler, running on a machine with very little memory, bit-fields were a very efficient method of representing some external data structures.
But when ANSI got a hold of it, it looks like they relaxed the rules for bit-fields to favour efficient local representation rather than portability.
For example, it is implementation specific as to whether bits are assigned left-to-right or right-to-left.
That is
unsigned int foo:1;
may be in the LSB on one machine and the MSB on another.
If I wanted to represent one of 4 characters using only 2 bits, I have two alternatives:
#1:
To use a struct with 2 bitfields
#2:
To use 1 char to store 2 characters, using bit-manipulation
And well, #2 seems like the better way, in my opinion? A struct of only 2 bit-fields with only 1 bit each will get padded to at least 1 byte anyway, right?
This is the kind of operation I'm thinking about for #2
(not exactly the same values and stuff, of course).
Is this approach any more portable or thread-safe, though?
From Wikipedia:
Code:
#define PREFERENCE_LIKES_ICE_CREAM (1 << 0) /* 0x01 */
#define PREFERENCE_PLAYS_GOLF      (1 << 1) /* 0x02 */
#define PREFERENCE_WATCHES_TV      (1 << 2) /* 0x04 */
#define PREFERENCE_READS_BOOKS     (1 << 3) /* 0x08 */

unsigned char preference;

void set_preference(unsigned char flag) {
    preference |= flag;
}

void reset_preference(unsigned char flag) {
    preference &= ~flag;
}

unsigned char get_preference(unsigned char flag) {
    return (preference & flag) != 0;
}

Edit: Sorry for bringing this old thread back to life, but I figured it was better to use an old one than to start a new thread about the exact same subject.
Last edited by Drogin; 09-04-2009 at 06:40 PM.
You might as well use the first two bits of 4 chars initialized to 0; the added operations will not be worth it.
Well, you can set 8 flags in a byte. XOR (^=) toggles a bit from its present value, e.g.
Code:
flag ^= 16;

toggles the 5th bit.