Originally Posted by
Salem
http://cboard.cprogramming.com/showt...tf+performance
My guess is, adding the printf() is actually printing a variable which was otherwise unused, and the compiler had optimised it out.
By printing it (and hence using it), it forced the compiler to include a whole bunch of expensive code which was otherwise optimised out.
While the phenomenon you describe (adding I/O operations changing how effectively a compiler can optimise) can have some effect, that effect is rarely of the magnitude described in the original post.
The more likely reason is that I/O is slower than most other operations on a machine, and doing many I/O operations consumes both CPU time and actual (wall-clock) time.
It is a pretty fair bet that
Code:
#include <stdio.h>
int main(void)
{
    for (int i = 0; i < 10000; ++i)
    {
        for (int j = 0; j < 10000; ++j)
        {
            some_complex_calculations();
        }
    }
    print_results_of_calculations();
    return 0;
}
will run noticeably more quickly on most systems than:
Code:
#include <stdio.h>
int main(void)
{
    for (int i = 0; i < 10000; ++i)
    {
        for (int j = 0; j < 10000; ++j)
        {
            some_complex_calculations();
            fprintf(stdout, "%s\n", "Hello"); /* only addition */
        }
    }
    print_results_of_calculations();
    return 0;
}