Since the sizes of (for example) short and long integers aren't strictly defined, the correct way to "know" the size of any variable is to have a definition that matches the compiler, e.g. an include file called "types.h" (or something like that).
This file should then contain definitions like:
Code:
typedef unsigned char  uint8;
typedef unsigned short uint16;
#if __64BIT__
/* LP64 model: int is 32 bits, long is 64 bits */
typedef unsigned int   uint32;
typedef unsigned long  uint64;
#else
/* 32-bit model: long is 32 bits; __int64 is MSVC's 64-bit type
   (other compilers usually spell it "unsigned long long") */
typedef unsigned long  uint32;
typedef unsigned __int64 uint64;
#endif
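If you want to be sure the typedefs actually have the sizes you expect, you can check them at compile time. A minimal sketch using the old negative-array-size trick (the check_* names are just made up):
Code:
/* compilation fails with a "negative array size" error if a typedef is wrong */
typedef char check_uint16[sizeof(uint16) == 2 ? 1 : -1];
typedef char check_uint32[sizeof(uint32) == 4 ? 1 : -1];
typedef char check_uint64[sizeof(uint64) == 8 ? 1 : -1];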
Of course, you should probably also define your own "tilenotype", such as:
Code:
typedef uint16 tilenotype;
That way, you can quite easily change it to a different type later if you need to.
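For example (the map size and names here are made up for illustration):
Code:
#define MAP_WIDTH  64
#define MAP_HEIGHT 64

tilenotype map[MAP_HEIGHT][MAP_WIDTH];  /* 8 KB with a 16-bit tilenotype */
If you later find you need more than 65535 tile numbers, you only change the typedef to uint32, and the map (and every function that takes a tilenotype) follows along automatically.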
Bitfields are useful sometimes, but using "odd-sized" bitfields [not 8, 16, 32 or 64 bits wide] will increase the code size, because the compiler has to mask (and shift) the bits on every access.
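A small sketch of what I mean (the field names are made up):
Code:
struct tile {
    unsigned number : 12;   /* odd-sized - access needs masking */
    unsigned flags  :  4;
};

unsigned get_number(struct tile *t)
{
    /* the compiler emits a load plus an AND (and a shift for fields
       that don't start at bit 0) instead of a plain load */
    return t->number;
}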
Also watch out for alignment issues - if, in a struct, you put a 16-bit value next to a 32-bit value, it's most likely going to use up 4 bytes in the struct, even though only 2 are needed to store something.
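For example (sizes assume a typical compiler that aligns 32-bit values on 4-byte boundaries):
Code:
struct bad {
    uint16 a;   /* 2 bytes + 2 bytes of padding before b */
    uint32 b;   /* 4 bytes */
    uint16 c;   /* 2 bytes + 2 bytes of tail padding */
};              /* sizeof(struct bad) == 12 */

struct good {
    uint32 b;   /* largest member first */
    uint16 a;
    uint16 c;
};              /* sizeof(struct good) == 8 */
Sorting members from largest to smallest usually gets rid of most of the padding.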
Note that you may need to change your "types.h" if you use a different compiler.
Note also that "__64BIT__" isn't a standard thing - but there's usually a way to detect a 32- vs 64-bit compile, so you need to replace this with the appropriate define (or generate __64BIT__ from the relevant define provided by the compiler).
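A sketch of how to generate it (these are the predefined macros I'd check for - verify against your compiler's documentation):
Code:
/* _WIN64 = 64-bit MSVC; __LP64__ / __x86_64__ = 64-bit gcc and friends */
#if defined(_WIN64) || defined(__LP64__) || defined(__x86_64__)
#define __64BIT__ 1
#else
#define __64BIT__ 0
#endif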
--
Mats