# Thread: How come short + int = int but short + uint = long?

1. ## How come short + int = int but short + uint = long?

Code:
```
static void Main(string[] args)
{
    short a = 10;
    int x = 10;
    uint y = 10;

    Console.WriteLine((a + x).GetType()); // System.Int32
    Console.WriteLine((a + y).GetType()); // System.Int64 ?
    Console.ReadKey();
}
```
Can someone explain?

while we are at it, check this one out:

Code:
```
static void Main(string[] args)
{
    short a = 10;
    ushort b = 10;

    int x = 10;
    uint y = 10u;

    long j = 10;
    ulong k = 10;

    Console.WriteLine((a * b).GetType()); // System.Int32
    Console.WriteLine((x * y).GetType()); // System.Int64
    Console.WriteLine((j * k).GetType()); // won't compile
    Console.ReadKey();
}
```
I know that arithmetic operations mixing integers (signed or unsigned) with floating-point operands result in an expression of a floating-point type.

I also know that arithmetic mixing float or double with decimal is not permitted.

I also know that arithmetic operations on integers of different widths result in an expression of the wider integer type.
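Those three rules can be checked directly (a minimal sketch; the type names and error code in the comments follow standard C# behavior):

```csharp
using System;

class PromotionRules
{
    static void Main()
    {
        short s = 1;
        double d = 1.0;
        int i = 1;
        long l = 1L;

        // integer mixed with floating point => floating-point result
        Console.WriteLine((s + d).GetType()); // System.Double

        // narrower integer mixed with wider integer => wider integer type
        Console.WriteLine((i + l).GetType()); // System.Int64

        // float/double mixed with decimal does not compile:
        // decimal m = 1m;
        // var bad = d + m; // error CS0019
    }
}
```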

But arithmetic operations involving unsigned integers result in expressions of an unexpected type. Can someone explain why that is?

2. You probably want this: §7.2.6.2 "Binary numeric promotions" in the C# language specification.

Quzah.

3. The logic behind this is simple:

1. If one operand is signed and the other unsigned, both operands must be converted to a common signed type.

2. An unsigned operand has to be promoted to the next larger signed type, because otherwise half of its possible values would overflow.

For example, an unsigned 8-bit number can hold values from 0 to 255, while a signed 8-bit number can only represent values up to 127, because half of the signed type's range is below zero. So if an unsigned 8-bit value is converted to a signed type, that type needs at least 16 bits to guarantee every value can be represented. The exception is ulong: there is no larger signed integer type to promote it to, which is why mixing long and ulong won't compile.
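The same logic at int/uint width can be seen with values that neither type could hold on its own (a small demonstration; the specific values are chosen for illustration):

```csharp
using System;

class UnsignedPromotion
{
    static void Main()
    {
        int i = -1;                // below uint's range
        uint u = 3000000000;       // above int's range

        // Neither int nor uint can represent both operands,
        // so C# converts both to long before adding.
        var sum = i + u;
        Console.WriteLine(sum.GetType()); // System.Int64
        Console.WriteLine(sum);           // 2999999999

        // ulong has no larger signed type to promote to,
        // so this would be a compile-time error:
        // long l = 1; ulong ul = 1;
        // var bad = l + ul; // error CS0019
    }
}
```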
