Hi
How can the assembler determine whether a number is signed or unsigned?
We look at the first bit of a number to tell whether it is positive or negative:
+0 i.e. 0000 0000
-0 i.e. 1000 0000
But how can the compiler tell whether it is signed or unsigned?
The compiler treats every variable whose signedness isn't specified explicitly as signed (or unsigned, depending on your compiler). So for every integer where you don't specify it yourself, the compiler applies its default and interprets the top bit accordingly.
Whether that default is signed or unsigned depends on the compiler you use.
> -0 i.e. 1000 0000
There is no -0 in a two's-complement representation (the most common one in use).
Whether a number is treated as signed or unsigned depends on the instructions the compiler chooses to use, based on the types in the code you have written.
As many people know, dividing by 2 is (almost) the same as shifting right by one.
So for an unsigned int, the compiler might choose an SHR instruction (a logical shift, which shifts in a zero);
for a signed int, the compiler might choose an ASR instruction (an arithmetic shift, which shifts in copies of the sign bit).
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
Weird -- looks like you're on a one's complement system. Most modern systems are two's complement. If all you have is a bit pattern, there is no way to tell. At the compiler level, signed/unsigned is a property of a variable. But at the machine level, it is a property of an operation, not a value. For instance, you can multiply two values in either a signed sense or an unsigned sense.
So, the COMPILER knows because you declare it one way or the other. The machine has no clue, it is just told to perform specific operations which give the results indicated by the source code.
No, -0 in one's complement is 1111 1111. 1000 0000 is a simple sign/magnitude (sign-and-value) representation.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law