When C was first implemented, they used 0 to mean a pointer that didn't point at anything.

This worked because address 0 had no meaning on their systems, so a pointer containing 0 couldn't be confused with a pointer to anything real.

However, when they started spreading C to other machine types, they came across a class of computers that had flag bits in their pointers -- the pointers contained an address, but there were more bits in the pointer than there were bits in the address space, and the computer vendor chose to put protection bits and other strangeness in the leftover bits. (Like, imagine you had a number that could store anything up to 99,999, but there were only 1,000 valid addresses. The top digits weren't being used, until some clever idiot said, "hey! we can put apartment numbers up there" or something. So "#705 Main Street, apartment 99" becomes 99705.)

As a result, it was possible that a "null pointer" was actually something like 0x6000. Why? Because some idiot wanted to use all the bits, that's why. But the reality was, every OTHER programmer compared their pointers with 0. So what's a compiler writer to do? What's the standards committee to do?

They made a rule: if you're comparing a pointer against the constant "0", then you're checking for a null pointer. And if a "null pointer" on your system isn't actually all-zero bits, then the compiler substitutes that real value for you. But the source code is still gonna say "0" someplace.
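
To make that concrete, here's a minimal sketch in standard C (nothing system-specific is assumed): every one of these tests spells "null" with a 0 in the source, and the compiler turns that 0 into whatever bit pattern a null pointer really is on the target machine.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(sizeof *p);   /* malloc returns a null pointer on failure */

        /* These three tests all ask the same question: "is p a null pointer?"
           Even on a machine whose null pointer is some pattern like 0x6000,
           the compiler translates the constant 0 (and NULL, which is defined
           in terms of it) into that pattern for you. */
        if (p == 0)
            puts("p is null");
        if (p == NULL)
            puts("p is null");
        if (!p)
            puts("p is null");

        free(p);
        return 0;
    }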

If it helps, think of "0" as being both the "null pointer operator" and the "zero number operator", much like how "-" can be both the "negative number prefix" and the "subtraction operator". It all depends on the context in which it's used.
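
For example (a trivial sketch, just to show the same symbol doing two different jobs depending on context):

    void example(void)
    {
        int  n = 0;       /* here 0 is the ordinary integer zero */
        int *p = 0;       /* here 0 is a null pointer constant; the compiler stores
                             whatever bit pattern means "null" on this machine */

        int a = 5;
        int d = a - 2;    /* here "-" is the subtraction operator */
        int m = -a;       /* here "-" is unary minus, the "negative number prefix" */

        (void)n; (void)p; (void)d; (void)m;   /* quiet unused-variable warnings */
    }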

Finally, see here: Question 5.17