ok, I understand now.
I'll just have to be more careful from now on, step by step, to make sure that the numbers get (or don't get) truncated when they should (or shouldn't).
P.S. I attached the code again on my changes post on the previous page.
This is a discussion on Doing 2 things at the same time within the C Programming forums, part of the General Programming Boards category.
>> or maybe I could use a do while
That would probably be more readable since the looping condition is easily identified.
>> Is it because I don't have my compiler set to C99...?
Yep. You need "-std=c99".
>> My compiler doesn't give me errors about changing from u_long_64 to size_t.
Hmmm, I can't get MinGW to complain about it either. I originally got the warnings from VC-2005.
This is what MinGW does with "-pedantic -Wall -std=c99":
Not sure why there isn't a warning for "a = g". For reference, the warnings were due to the size_t parameters of malloc, realloc, and qsort.
Code:
u_long_64 g = (u_long_64)1 << 60;
size_t a = g;                   // no warning
size_t b = (u_long_64)1 << 60;  // warning
>> I'd suggest you keep the original. There is a difference. Your original code computes the equation in floating point, then truncates it. But the new version doesn't calculate using decimals at all, which may give misleading result...
With this expression, the results will be the same either way. So my suggestion was to use integer math over floating point math (as a premature optimization). But I'm glad I did it since it's exposing new information to the OP.
>> How exactly [integer division] works is platform, compiler and CPU dependent, though, so we can't say for sure if it's just truncated or not.
Not exactly. Under C89, integer division involving negative numbers is implementation defined. Under C99 all integer division is well-defined. Unsigned integer division is well-defined under both.
To summarize:
>> (int)(100.0 * x / semiprimes_count)
x is promoted to double for the multiplication.
semiprimes_count is promoted to double for the division.
double result is converted to int (via cast) which "discards" the fractional part. Here's where you can get into trouble - "If the value of the integral part cannot be represented by the integer type, the behavior is undefined" (6.3.1.4-1, ISO/IEC 9899:1999).
In this context, we know x < semiprimes_count and both are positive, so the result is always between 0 and 100.
Dropping the ".0" gives the same results in this context - the only difference is that the integer division "discards" the fractional part instead of the cast from double to int. You can also get into trouble with a cast from unsigned long long to int - but again, we're safe since we know the result will be between 0 and 100.
gg
Last edited by Codeplug; 06-19-2008 at 10:13 PM.
GCC doesn't give narrowing warnings.
>> With this expression, the results will be the same either way. So my suggestion was to use integer math over floating point math (as a premature optimization).

That's the problem. They are not the same.
Imagine calculating a percentage.
int percent = (15 / 200) * 100;
This gives the wrong result, because 15 / 200 = 0.075!
It's therefore truncated to 0 and multiplied by 100, and the result is 0.
However, if we do
int percent = (int)((15.0 / 200) * 100);
the result is 0.075 * 100 = 7.5, which is truncated to 7.
The actual calculation needs the extra precision, even if the end result is truncated.
But it won't be safe in the actual calculation if the calculation contains fractions. The end result will be less precise because it shaves off the decimals. If the calculation ends up with a number < 0, you will get incorrect results as well!

>> Dropping the ".0" gives the same results in this context - the only difference is that the integer division "discards" the fractional part instead of the cast from double to int.

They are not the same.
But it also seems weird that C99 would define how division works. That makes it far less flexible.
>> That's the problem. They are not the same.
>> int percent = (15 / 200) * 100;
What does that have to do with the OP's equation of (100 * x / y)? That *is* what we're talking about... at least I am. The OP has already learned the difference between that and (100 * (x / y)).
>> But it won't be safe in the actual calculation, if the calculations contains fractions.
It's perfectly safe, and defined since we have constraints on the values used in the equation. Precision is exactly the same - both versions (of the equations I'm referring to) have their fractional part chopped off, just at different points (as I described). Perhaps some code will speak louder than words...(and to ensure we're talking about the same thing)
Here we have the same constraints, x < y. You just have to ensure that (100 * x) does not exceed ULLONG_MAX. We're not generating that many semi-primes, so that's OK. If you ever do exceed ULLONG_MAX, then both equations are busted anyway - in which case you would need to perform the division first as a double.
Code:
unsigned long long x;
for (x = 1; x < (ULLONG_MAX / 100); ++x)
{
    unsigned long long y = 2 * x;
    int per1 = (int)(100 * x / y);
    int per2 = (int)(100.0 * x / y);
    if (per1 != per2)
    {
        // you'll never see this
        printf("Difference at %llu, p1 = %d, p2 = %d\n", x, per1, per2);
        break;
    }
}
>> But it also seems weird that C99 would define how division works.
Not to me. I would think standard-defined is better than implementation-defined.
More info: http://groups.google.com/group/comp....166ba5495aee84
gg
I suppose we can look at it in two ways. The OP is trying to calculate percentage.
If we chop off the decimals after the division (integer division), we get:
(15 / 200) * 100 =
(0.075) * 100 =
0 * 100 =
0
And if we simply chop off the decimals after the equation is complete (floating division), we get:
(15.0 / 200) * 100 =
(0.075) * 100 =
0.075 * 100 =
7.5 =
7
In this case, it does matter where we chop off the decimals.
But on the other hand, you could do:
(15 * 100) / 200 =
1500 / 200 =
7.5 =
7
In which case the result is the same, regardless of whether we use integer division or floating division.
The result is defined, not the implementation or how it's done.
If the exact procedure were defined, it would mean compatibility problems on some machines.
That's why C/C++ leaves so much undefined, isn't it?
That's what I said a couple of posts before, and that's why when I did
(100 * x) / y
it came out right. When I do it the way a normal human would:
100 * (x / y)
it comes out wrong, because the decimal part is truncated when it calculates (x / y); since x / y is always less than 1, it truncates to 0.
I don't know much about what C/C++ or the compilers define or how; I just keep "Please Excuse My Dear Aunt Sally" in my mind and put a truncation step in it.