# Thread: I need a really good technical explanation of the absolute value

1. ## I need a really good technical explanation of the absolute value

How can something like

Code:
`#define abs(x) (((x) >= 0) ? (x) : -(x))`
cause a possible overflow,

But something like
Code:
`#define abs(x) (((x) < 0) ? -(x) : (x))`
is passable?

2. hmm... with two's complement arithmetic, they both seem susceptible to overflow when the most negative value for the given type is used.

3. Originally Posted by laserlight
hmm... with two's complement arithmetic, they both seem susceptible to overflow when the most negative value for the given type is used.
Thinking back on it, maybe it would have made more sense to design an arithmetic where INT_MIN == INT_MAX - 1, instead of the other way around.

Can you imagine any tradeoff, aside from eliminating the problem in this example?

4. Originally Posted by brewbuck
Thinking back on it, maybe it would have made more sense to design an arithmetic where INT_MIN == INT_MAX - 1, instead of the other way around.

Can you imagine any tradeoff, aside from eliminating the problem in this example?
I've thought that exact same thing myself many times.

In order to protect yourself against 0x80000000 you would need something like this, which turns it into zero instead:
Code:
`#define abs(x) (((x) >= 0) ? (x) : (-(x) >= 0 ? -(x) : 0))`
Or you could make the result unsigned, though this won't stop someone from assigning it to a signed int and stuffing that up:
Code:
`#define abs(x) ((unsigned)(((x) >= 0) ? (x) : -(x)))`

5. In fact, as laserlight said, for two's complement number systems both forms give "bad" results for exactly one particular value of integer.

Implementing either form will necessarily cause "integer overflow" for a value that is equal to the most negative representable integer. (Since the negative of that integer is not representable in the same number system.)

There is no mathematical reason that either form would give bad results for any other value of integer. If either form gives bad results for any other value, then that's a bug, and should be reported to your compiler vendor.

Interestingly, the C standard (ISO/IEC 9899) has the following comment in section 3.4.3 (which defines undefined behavior):

"EXAMPLE An example of undefined behavior is the behavior on integer overflow."

This is absolutely shocking, since millions of programs (probably more) depend on benign behavior from integer overflow. Furthermore, the language Standards documents don't require that implementations give a way to detect or prevent integer overflow in the first place.

However, in informational Annex H (talking about "Language-independent Arithmetic"), it specifically says that:

"C’s unsigned integer types are ‘‘modulo’’ in the LIA−1 sense in that overflows or out-of-bounds results silently wrap."

That's the behavior upon which all of those millions of programs depend: The legacy behavior of the "silent wrap," which means that upper bits are ignored and the lower bits are what they are.

The C++ standard, ISO/IEC 14882, has this to say about integer overflow (in Section 5, about Expressions):

"If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined, unless such an expression is a constant expression (5.19), in which case the program is ill-formed."

Creepy.

"[Note: most existing implementations of C++ ignore integer overflows...]"

Both Standards documents have a lot to say about floating point arithmetic overflows, but not much about integer overflows.

K&R2 simply says that, "The handling of overflow...is not defined by the language. Most existing implementations of C ignore overflow in evaluation of signed integral expressions and assignments, but this behavior is not guaranteed."

Even though the Standards say that the result of integer overflow is undefined behavior, my feeling is that programmers are free to continue the assumption that integer overflows will be handled by throwing away the upper bits from the result and keeping the lower bits.

(Of course, nothing in the Standards requires that integers be represented as two's complement binary quantities in the first place, but that's what we usually see these days, and that is what this discussion is about---I think.)

D.

6. Originally Posted by brewbuck
... INT_MIN == INT_MAX - 1...Can you imagine
Since INT_MIN is the most negative value, I think you probably mean
INT_MIN == -(INT_MAX - 1)

I'll answer a question with a couple of questions:

Is there any advantage of such a system over the binary two's complement representation of integers other than its being closed under the "absolute value" operation?

Are there any disadvantages in implementation of such a system?

Note that the one's complement system is symmetrical since INT_MIN==-INT_MAX, and computers have been built around one's complement integers. There was never a problem with abs(). We just don't see many of them these days. Have you ever wondered why?

Of course, a C compiler doesn't have to use the computer's native number representation for its integer data types, but, given the intent of the original C author (to write operating systems), it really wouldn't make much sense, I think, to have a compiler use anything other than what is most efficient for the target architecture.

D.

7. Originally Posted by Dave Evans
Since INT_MIN is the most negative value, I think you probably mean
INT_MIN == -(INT_MAX -1)
I knew that's what he meant.
Actually I've realised that this wouldn't be without its problems either.
Consider a = -abs(b);
Now you have one case where a positive can't be turned into a negative!