Can someone explain?

Code:
static void Main(string[] args)
{
    short a = 10;
    int x = 10;
    uint y = 10;
    Console.WriteLine((a + x).GetType()); // System.Int32
    Console.WriteLine((a + y).GetType()); // System.Int64 ?
    Console.ReadKey();
}

While we are at it, check this one out:

I know that arithmetic operations mixing integers (signed & unsigned) with floating-point numbers result in an expression of floating-point type.

Code:
static void Main(string[] args)
{
    short a = 10;
    ushort b = 10;
    int x = 10;
    uint y = 10u;
    long j = 10;
    ulong k = 10;
    Console.WriteLine((a * b).GetType()); // System.Int32
    Console.WriteLine((x * y).GetType()); // System.Int64
    Console.WriteLine((j * k).GetType()); // won't compile
    Console.ReadKey();
}
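To double-check that floating-point rule, here is a small sketch I put together (the variable names are just mine for illustration); the comments show the types I observe:

```csharp
using System;

class FloatPromotion
{
    static void Main()
    {
        int x = 10;
        uint y = 10u;
        long j = 10;
        float f = 10f;
        double d = 10.0;

        // Mixing any integer type with a floating-point type
        // promotes the whole expression to the floating-point type.
        Console.WriteLine((x + f).GetType()); // System.Single
        Console.WriteLine((y + f).GetType()); // System.Single
        Console.WriteLine((j + d).GetType()); // System.Double

        // float mixed with double widens to double.
        Console.WriteLine((f + d).GetType()); // System.Double
    }
}
```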

I also know that arithmetic operations mixing single/double with decimal are not permitted (there is no implicit conversion between them).
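A quick sketch of that rule (again, the names are just mine): mixing double and decimal only compiles with an explicit cast, while decimal mixed with integer types is fine:

```csharp
using System;

class DecimalMixing
{
    static void Main()
    {
        double d = 1.5;
        decimal m = 2.5m;

        // var sum = d + m; // won't compile: operator '+' cannot be applied
        //                  // to operands of type 'double' and 'decimal'

        // An explicit cast on one side makes the intent clear and compiles:
        decimal sum = (decimal)d + m;
        Console.WriteLine(sum.GetType()); // System.Decimal

        // decimal mixed with an integer type works, because integer
        // types convert implicitly to decimal.
        int i = 3;
        Console.WriteLine((m + i).GetType()); // System.Decimal
    }
}
```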

I also know that arithmetic operations on integers of different widths result in an expression whose type is that of the wider integer (with anything narrower than int first promoted to int).

But arithmetic operations mixing signed and unsigned integers result in expressions of an unexpected type, as the examples above show. Can someone explain why that is the case?
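For reference, a few more mixed combinations I tried (a throwaway sketch, names are mine; the comments show what I observe):

```csharp
using System;

class MixedSignedness
{
    static void Main()
    {
        ushort us = 1;
        int i = 2;
        uint u = 3u;
        ulong ul = 4ul;

        Console.WriteLine((u + us).GetType()); // System.UInt32 (ushort widens to uint)
        Console.WriteLine((ul + u).GetType()); // System.UInt64 (uint widens to ulong)
        Console.WriteLine((u + i).GetType());  // System.Int64 -- why long?

        // var bad = ul + i; // won't compile, just like long * ulong above
    }
}
```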