Originally Posted by
grumpy
I said no such thing. What you are not realising is that microprocessor design involves trade-offs between competing requirements. Mathematical fidelity of basic operations (eg integer division rounding to nearest, rather than up or down) is not a primary constraint in microprocessor design.
You are assuming there are no trade-offs in microprocessor design. In reality, repeatability of microprocessor behaviour is statistical: there is some likelihood that any operation will not behave correctly, because of effects (in modern processors) like voltage variations within a circuit and other random perturbations. The speed of an instruction tends to go down as the number of circuit elements (resistors, capacitors, transistors, etc) used to implement it goes up. The number of components needed to implement a "round up" or "round down" integer division circuit is much smaller than the number needed for a "round to nearest" circuit.
Because of trade-offs like these, even in modern microprocessor design, basic operations tend to be implemented in as simple a form as possible to maximise performance. Techniques are also used to reduce the likelihood of a circuit being subjected to some variation that causes it to produce erroneous results. Trade-offs of this sort discourage hardware designers from implementing operations (eg integer division) in a more costly manner, i.e. one demanding more physical circuit components.
It offers consistency. However, it also broke one of the mandatory design constraints that guided development of the 1989 C standard. A requirement of the 1989 C standard was that each basic operation could be implemented in the way that was most efficient on every possible hardware platform. That meant simple operations (eg integer division) had to be implementable in a few machine instructions, which in turn meant allowing use of the native divide instruction.
Some microprocessors in 1989 performed "round up" integer division when given negative operands; others performed "round down". Very few - none as far as I know - performed "round to nearest". The 1989 C standard had to make the result "implementation defined" in order to allow maximum efficiency on all possible hardware.