# Thread: Integer division standard question

1. ## Integer division standard question

Does the standard say "round down" or truncate for integer division?

Example under my GCC, 5/2 = 2
Edit: 11/4 also gives 2; so my GCC truncates. (Though with positive operands, truncation and rounding down agree, so these examples alone can't tell the two apart.)

Code:
```
#include <stdio.h>

int main(void)
{
    printf("result := %d\n", 5 / 2);
    return 0;
}
```
Code:
`result := 2`

2. Here's what C99 says:
Originally Posted by c99
6.5.5 Multiplicative operators
...

5 The result of the / operator is the quotient from the division of the first operand by the
second; the result of the % operator is the remainder. In both operations, if the value of
the second operand is zero, the behavior is undefined.

6 When integers are divided, the result of the / operator is the algebraic quotient with any
fractional part discarded. 78) If the quotient a/b is representable, the expression
(a/b)*b + a%b shall equal a.

Footnote:
78) This is often called ‘‘truncation toward zero’’.

3. Thanks, that is what I remembered would happen. But I am about to do some coding on a C-like compiler code base that I hope to convert into a standard C compiler in a few months or years. I have already been bitten once by my C-like compiler's non-standard behavior, and I wanted to avoid another large code fix in my new code; there will likely be enough fixes needed as it is.

Tim S.

4. From 6.5.5 of the C18 Standard:
The result of the / operator is the quotient from the division of the first operand by the second; the
result of the % operator is the remainder. In both operations, if the value of the second operand is
zero, the behavior is undefined.

5. Another way to put it is this: to the CPU, division works roughly the way you would divide ordinary numbers by hand, only in just 0s and 1s. When it reaches bit -1 it stops, because that position cannot be mapped to the physical representation, just as you cannot write any more digits if you use the edge of a sheet of paper as the decimal point. The leftovers stay where they started, in the original number you are modifying. That, by the way, is part of the reason I think division by 0 should always equal 0: you are supposed to be moving from one number to another, and if you move nothing, then nothing is added to the starting number, 0. Something like:

Code:
```
#include <limits.h>   /* CHAR_BIT */

int a = 10;
int b = 0;
int c = 0;   /* quotient: how many times b was removed from a */
int d = 0;   /* running total removed, i.e. c * b */
int digits = CHAR_BIT * (int)sizeof(int);   /* bitsof() isn't standard C */
while ( a >= b && digits-- > 0 )
{
    int rem = a - b;
    int mov = a - rem;   /* equals b on each pass */
    c += (mov > 0);      /* with b == 0 nothing moves, so c stays 0 */
    d += mov;
    a = rem;
}
```
A real-world example of this very method is math like 10 / 3: you don't write out 3.3... forever; you fill in whatever digits you have space for (or care about) and then stop. Or take dividing people into groups: when you divide people into groups of 0, you don't divide them at all. No movement means 0 groups, with everyone remaining in the original.

6. Originally Posted by awsdert
Another way to put it is this, to the cpu the division is done roughly the same as you would normal numbers, only it's using just 0 & 1, so when it reaches bit -1 it quits because that cannot be mapped to the physical representation
It's not so much how the CPU would do it (some CPUs might round down for division of a negative numerator) but how the C standards define it. The designers of C could have instead defined the integer division operation to round down rather than toward zero (the modulo operator would have been redefined to match).