I haven't been able to figure out what's going on here. I'm using 64-bit integers, and whenever I assign constants to them I sometimes get completely unexpected results, but only when the constant's magnitude is roughly between 2.15 and 4.3 billion; smaller values and much larger ones work fine. I've tried typecasting the constant, but that didn't help. Is there a way to use constants in that range and have them treated as 64-bit integers? Here are some examples of what I'm referring to:

Code:
__int64 Example; // declare a 64-bit integer

...

Example = 13741152; // 13.7 million - works fine
Example = -5120000000; // negative 5.1 billion - works fine
Example = 68014747325468582; // 68 quadrillion - works fine
Example = -4096000000; // negative 4.1 billion - problem
Example = (__int64)-4096000000; // typecasting the constant - no good
Example = -2048000000 + -2048000000; // adding two smaller numbers - no good
The last three assignments all leave Example set to 198,967,296. Adding that to the 4,096,000,000 I'm actually trying to use gives exactly 4,294,967,296, which is 2^32, the number of possible values of a 32-bit integer. That suggests the compiler is evaluating the constant as a 32-bit value even though the variable I'm assigning to is 64-bit. Is there any way to set the 64-bit variable to the value I intend to use?
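
To make the wrap-around concrete, here's a minimal sketch of what I think is going on (this assumes the compiler is giving the unsuffixed 4096000000 literal a 32-bit unsigned type; the variable names are just for illustration):

Code:
#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t magnitude = 4096000000u;    // fits in 32 bits only as an unsigned value
    std::uint32_t wrapped   = 0u - magnitude; // unary minus wraps modulo 2^32 -> 198967296
    std::int64_t  widened   = wrapped;        // widening afterwards keeps the wrapped value

    std::cout << wrapped << "\n";                 // 198967296
    std::cout << widened << "\n";                 // still 198967296, not -4096000000
    std::cout << widened + 4096000000LL << "\n";  // 4294967296, i.e. 2^32
    return 0;
}

That should print 198967296 twice and then 4294967296, which matches the numbers above.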