Hello all
I could not find anything I could understand on this, so I have heard that -O3 option may reduce the numerical accuracy of doubles. Is this true?
This would be bad news for me, as I just ran tons of math calculations over the last few weeks using that option. So I hope you're right!
Compiler optimisation does not directly affect the behaviour of floating point operations.
It can, however, do things such as reorganising code, precomputing results, and even eliminating sections of code. Properties of numerical algorithms - such as numerical stability - are often sensitive to the order in which a long sequence of floating point operations is performed, so compiler optimisation can affect the results of numerical algorithms. If the compiler precomputes results (based on constants known at compile time), it may produce different results [with different round-off characteristics] than if the calculations were done at run time on floating point variables. More aggressive optimisation levels are more likely (though not certain) to have such effects.
Well, no, I would not argue that no optimisation is the solution - I just said that more aggressive optimisation is more likely to uncover problems.
If I was concerned about accuracy (or, more correctly, precision) of floating point, I would focus on quality of algorithm and quality of coding of that algorithm, not on compiler optimisation settings. No amount of compiler optimisation (or turning off compiler optimisation) can compensate for a poorly crafted algorithm. Conversely, really well-crafted algorithms are less likely to be disturbed by compiler optimisation settings.
O_o But you would not argue for no optimization if you were concerned about accuracy (64-bit doubles), as I am?
Most compilers have separate categories for optimizations for exactly this scenario.
You might, for example, enable optimizations related to "inlining" and "cse" while still disabling optimizations that relate to mathematics restructuring.
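With gcc, for example, that separation looks something like the following (these are gcc's own flag spellings; other compilers use different names, so check your compiler's manual):

```shell
# Keep the general -O2 optimisations (inlining, CSE, ...) but forbid
# value-changing floating point transformations:
gcc -O2 -fno-unsafe-math-optimizations -ffp-contract=off prog.c

# The opposite trade-off: allow reassociation, reciprocal
# approximations, etc., in exchange for speed:
gcc -O2 -ffast-math prog.c
```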
Soma
“Salem Was Wrong!” -- Pedant Necromancer
“Four isn't random!” -- Gibbering Mouther
Your first sentence is misleading, given your following explanation and the question the OP originally asked - or rather, the question he is really asking. The first thing I thought of when I read the first sentence was -ffast-math from GCC and things like -no-prec-div from the Intel compiler.
-O3 (assuming gcc) does not affect such FP ops AFAIK, but -Ofast definitely does.
If you are using gcc, you'd do well to look at this page: https://gcc.gnu.org/onlinedocs/gcc/O...e-Options.html and to also look up the -mfpmath option.
Yes, options like -ffast-math - which change the implementation of floating point so that it is no longer compliant with the IEEE or ISO specifications - will obviously change floating point behaviour.
My response was based on the definition of compiler optimisation in computer science - reorganising code to optimise performance by some measure without changing the output it produces. Yes, some compiler vendors group options that relate to changing floating point operations with optimisation settings.
On default settings, most compilers will avoid changing the order of floating point operations, in part because of rounding problems. So while grumpy is right that higher optimization may change the result of floating point operations, it probably won't make the result significantly less accurate; rather, it just means the result is not always bitwise equal. For example, inlining may cause a floating point computation to occur at a higher precision than otherwise, because the intermediate value is kept in a wider register.
Settings such as -ffast-math will allow more actual reordering of floating point operations, sacrificing accuracy - and sometimes correct handling of non-finite numbers - for speed.
It is too clear and so it is hard to see.
A dunce once searched for fire with a lighted lantern.
Had he known what fire was,
He could have cooked his rice much sooner.
There are plenty of ways that optimisation can affect the result of floating point math operations. E.g. optimising a division by a constant into a multiplication by the pre-calculated multiplicative inverse constant.
It just so happens that depending on the settings the compiler may not do such optimisations for you precisely because it can ever so slightly affect the result.
Visual Studio at least has options of "Precise", "Strict", and "Fast".
"Fast", for example, "Creates the fastest code in most cases by relaxing the rules for optimizing floating-point operations."
More info here: /fp (Specify Floating-Point Behavior)
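A typical MSVC invocation using those modes looks like this (illustrative command lines; /fp:precise is the default):

```shell
# Most conservative: preserve source order and IEEE semantics.
cl /O2 /fp:strict prog.cpp

# Fastest: allows value-changing floating point optimisations.
cl /O2 /fp:fast prog.cpp
```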
I can't speak for other compilers, but I would be surprised if a flag such as -O3 didn't at least relax some of the optimisation rules concerning floating point.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"