## HIWORD macro

1. Hello all,

I am a bit puzzled by this macro definition:

Code:
`#define HIWORD(l) ((WORD) (((DWORD) (l) >> 16) & 0xFFFF))`
Couldn't the same effect be achieved with the following?

Code:
`#define HIWORD(l) ((WORD) (((l) >> 16)))`
If so, what is the advantage of the longer version (from WINDEF.H)?

Thanks for any help.

2. If l is considered signed, then the right shift may sign-extend, and you may have a whole lotta 1s at the front of the number that you don't want.

3. I still don't get it. Even if that happened, and you ended up with 16 1s in front of the number, aren't they going to be discarded when the result is cast to WORD?

It seems to me that all you are interested in is the values of the original sixteen highest-order bits. After the shift operation, those values will have been shifted into the sixteen lowest-order bits. Then all you need to do is dispose of the new sixteen highest-order bits (whatever they are), which can be done with the cast to WORD. I can't see why it matters whether they are all 0s, all 1s or whatever. So it still strikes me that the cast to DWORD and the & operation are redundant. I suppose I must be missing something.

4. It's clearer what is actually taking place, but the results are the same, at least for the entire range of signed long.

Code:
```#include <stdio.h>

#define WORD unsigned short
#define DWORD unsigned long

#define HIWORD(l)    ((WORD) (((DWORD) (l) >> 16) & 0xFFFF))
#define newHIWORD(l) ((WORD) (((l) >> 16)))

int main(void){
    /* Brute-force check: compare the two macros over (nearly)
       the whole range of signed long. */
    for (signed long x = -2147483600; x < 2147483600; x++) {
        if (HIWORD(x) != newHIWORD(x)) {
            printf("%ld\n", x);  /* report the first mismatch */
            return 0;
        }
    }

    return 0;
}```

5. I think the point you are missing is that it's often preferable to write code in a way that is easily shown to be correct. You needed a fairly dense paragraph to justify your theory that the cast is unnecessary. Whereas the correctness of the operation as-written is simply obvious on its face, without having to remember the semantics of signed shifts or twos-complement or anything else.

Even experts can get things totally wrong. A few months ago I tracked down a bug in a piece of code that would do the wrong thing when the input was negative. Turned out that the original author of the code (a spectacular engineer who outclasses me easily) had assumed that a cast to integer is equivalent to a floor() operation -- it's actually a truncation toward zero, which is not the same thing when the value is negative, and so the expression computed the wrong values when passed negative inputs.

At that point we had two options -- replace the cast with a floor(), which would hurt performance, or rewrite the expression in a more complicated way that was still correct for negative inputs. As this was not a piece of performance-sensitive code, guess which option we picked?

6. I am not trying to be obtuse here, but I can't actually see why casting makes the code more obviously correct.

In the normal case, l will be a DWORD (that is surely the whole point of the macro), so casting it to DWORD is clearly unnecessary.

If l is longer than a DWORD, then casting it to DWORD will just chop off the excess high-order bits, which is fine but IMO likewise clearly unnecessary, since the shift operator is not going to be assigning any unknown values to the left-hand bits in this case anyway.

If l is shorter than a DWORD, things get marginally trickier, but I don't see that the cast really aids clarity, since if l is e.g. a signed char, it's probably not going to be immediately obvious what effect the cast to DWORD is going to have.

Anyway, I guess ease of being shown to be correct is in the eye of the beholder or something, so this boils down to a question of personal preference (or maybe I have still missed the point?!).

7. Code:
`#define HIWORD(l) ((WORD) (((l) >> 16)))`
Produces implementation-defined behaviour if l is signed and negative, because the C standard leaves right-shifting a negative value up to the implementation. The longer version is guaranteed to produce consistent results, since the shift is performed on an unsigned quantity. The & 0xFFFF part is unnecessary, though, as far as I can tell.