# Bit Questions...

• 04-24-2003
Cheeze-It
Bit Questions...
I don't know what "least significant bits" and "most significant bits"
mean...

I was reading the docs; they say that if the least significant bit is set, the key was pressed after the previous call, and if the most significant bit is set, the key is down.

What are they talking about? What's the difference between
least and most significant bits?

Also, about LOWORD and HIWORD... How do these work? 'The
LOWORD macro retrieves the low-order word from the given
32-bit value.'

#define LOWORD(l) ((WORD) (l))

That's typecasting (l), right? But what specifically is happening?
What's a low-order word? Isn't a WORD just an unsigned int
typedef? And aren't unsigned ints already 32 bits? So what's
happening?
:confused:

#define HIWORD(l) ((WORD) (((DWORD) (l) >> 16) & 0xFFFF))

And that one looks really complicated. What exactly is happening
here?

:confused:

Thanks a lot.
• 04-24-2003
Cheeze-It
Here's another question...

How come MSDN says that a WORD is an unsigned int, but when
I do this:

Code:

```
#include <windows.h>
#include <iostream>
using namespace std;

int main() {
    cout << sizeof(WORD) << endl;
    cout << sizeof(unsigned int) << endl;
    return 0;
}
```
I get WORD = 2 bytes, and unsigned int = 4 bytes?

How can an unsigned int be 16-bits? Wouldn't that mean that
a WORD is a short int?

Oh my Argh!!!
• 04-25-2003
From MSDN...

>>>
WORD 16-bit unsigned integer.
<<<

... thus WORD is defined to be 16 bits, thus sizeof(WORD) returns 2.

Code:

```
MSBit                    LSBit
  !                        !
 128  64  32  16  8  4  2  1
```
The least significant bit in a byte is the one with the lowest value: change that bit and the value of the byte only changes a little; change the bit at the other end and the value changes a lot. The same applies to the 2 bytes in a WORD or the 4 bytes in an unsigned int: changing the least significant byte has the smallest effect on the value of the larger item.

LOWORD effectively truncates the passed DWORD, i.e. it returns the least significant 16 bits.

HIWORD treats the passed value as 32 bits and shifts the contents 16 bits to the right, so what was the most significant 16 bits now lies in the least significant bit positions. It then does a logical AND against 0xFFFF to clear out any rubbish shifted into the old most significant bit positions, and finally casts the result to a 16-bit value.

Don't worry too much about macros; the preprocessor is falling out of favour. Use them if you like, but don't count on writing many of your own.