I'm not sure what you mean exactly; it's pretty common with conditional directives, #ifdef __i386__ for example.
The reason a conditional like #ifdef __i386__ is needed is that the code that will be substituted assumes a particular architecture.
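For instance, something along these lines (just a sketch to illustrate the idea; is_negative and the exact bit test are made up here, not taken from glibc):

```c
#include <stdio.h>
#include <stdint.h>

/* Sketch only: an architecture-conditional of the kind being discussed.
 * On __i386__ a long is 32 bits, so the bit test below is safe there;
 * on other targets we fall back to portable code. */
static int is_negative(long x)
{
#ifdef __i386__
    /* assumes 32-bit long: read the sign bit directly */
    return (int)(((uint32_t)x) >> 31);
#else
    /* portable fallback for other architectures */
    return x < 0;
#endif
}

int main(void)
{
    printf("%d %d\n", is_negative(-5L), is_negative(5L));  /* prints: 1 0 */
    return 0;
}
```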
Yes, but if __i386__ is defined then it's not an assumption, is it?
Apparently you assume so. You assume incorrectly.
I still say, given that glibc has some HORRENDOUS hacks elsewhere in the library for performance reasons, the fact that they just use a plain if here speaks volumes. You need to benchmark your code against an if; I don't see how your code will be any faster. This is what's known as premature optimisation: you're employing bit-twiddling techniques that run afoul of other architectures, to the point that you need an #ifdef.
Seriously. Benchmark it. In my test, a naive version that just copies the glibc branch code but with long longs instead is twice as fast, even in a tight loop of 100,000,000 calls and with all compiler optimisations turned off (with optimisations on, it's still the same, because there's nothing to optimise here). That's also what someone else already showed you.
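Something along these lines is enough to check it yourself (a rough sketch; with_branch and with_bit_twiddling are placeholder implementations standing in for the two approaches, not the actual code under discussion):

```c
#include <stdio.h>
#include <time.h>

/* Placeholder versions of the two approaches: a plain branch vs. a
 * bit-twiddling trick for absolute value. */
static long long with_branch(long long x) { return x < 0 ? -x : x; }

static long long with_bit_twiddling(long long x)
{
    long long mask = x >> 63;       /* assumes 64-bit long long, arithmetic shift */
    return (x ^ mask) - mask;       /* branch-free abs */
}

int main(void)
{
    const long iters = 100000000L;
    volatile long long sink = 0;    /* stops the compiler discarding the loops */

    clock_t t0 = clock();
    for (long i = 0; i < iters; i++)
        sink += with_branch(i - iters / 2);
    clock_t t1 = clock();
    for (long i = 0; i < iters; i++)
        sink += with_bit_twiddling(i - iters / 2);
    clock_t t2 = clock();

    printf("branch:        %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("bit-twiddling: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```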
- Compiler warnings are like "Bridge Out Ahead" warnings. DON'T just ignore them.
- A compiler error is something SO stupid that the compiler genuinely can't carry on with its job. A compiler warning is the compiler saying "Well, that's bloody stupid but if you WANT to ignore me..." and carrying on.
- The best debugging tool in the world is a bunch of printf()s for everything important around the bits you think might be wrong (see the sketch below).
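For example (a throwaway sketch; the divide function is just made up to show the idea):

```c
#include <stdio.h>

/* Illustrative only: printf()s sprinkled around the suspect code so you
 * can see what the values really are at runtime. */
static int divide(int a, int b)
{
    printf("divide: a=%d b=%d\n", a, b);        /* trace the inputs */
    if (b == 0) {
        printf("divide: b is zero, bailing out\n");
        return 0;
    }
    int result = a / b;
    printf("divide: result=%d\n", result);      /* trace the output */
    return result;
}

int main(void)
{
    divide(10, 2);
    divide(1, 0);
    return 0;
}
```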