At the bit level, yes, they're the same, but numerically they mean very different things, and the data that gets lost is the sign. I've seen VC++ warn about signed/unsigned comparisons, so I don't see why it wouldn't warn about this conversion too. Is this in the C++ standard because of some legacy C crap, or did they actually think it was a good idea?