I need to do the following calculation on a 32 bit embedded system:
C = 10000 * A / B
The values A and B are signed and lie in the range -10000000 .. 10000000.
I need the calculation to be as accurate as possible. Since the value of B is always greater than the value of A, storing the result in C shouldn't be a problem.
I have tried performing the division first and then calculating the remainder contribution separately and adding it afterwards, but for some values of A and B the accuracy just got worse.
The calculation has to be done rather often, so it has to be optimized for speed.