# Thread: Testing Floating Point Return Values

## 1. Testing Floating Point Return Values

I have several functions that return floating point values. I am currently exploring unit test suites and would like to write up some pretty extensive tests. My question is - what's the best way to test floating point values? A difficulty with floating point numbers is that if I perform the same calculation in a different order I may get a slightly different number due to shifting and roundoff.

Quite some time ago I used a testing framework for Visual Basic that had an assertion function that took two floating point values and an integer number of decimal places of precision to compare to. I am playing around with Check right now, and I don't think it has a similar function.

I suppose I can do something similar by going through the calculations on my own and then doing something like:
Code:
```c
float answer;
float retval;

answer = ...;  /* some long computation here */
retval = function_to_test(some, args); /* supposedly the same computation */

/* Test to some (arbitrary?) level of precision.
 * fail_unless() is provided by Check; it is just the test/comparison macro.
 * fabs() needs <math.h> (or use fabsf() for floats). */
fail_unless(fabs(answer - retval) < 0.0001, "failing message");
```
Is that an acceptable way to test floating point values? Is there a preferred/standard way of doing this?

Thanks,
Jason

2. It is actually unusual to do unit tests involving floating point, precisely because the precision, range, etc. of floating point types vary between machines (or even compiler settings), and because numerical algorithms often have convergence criteria that depend on machine characteristics.

Assuming you're working on only one machine, this means considerable analysis is necessary to identify a representative set of (input data, output value) pairs that can be tested and that, if passed, allow you to conclude the functions work correctly for all possible input data. That analysis needs to be performed on every target machine (or you need an analysis that allows extrapolating "works on machine X" to "works on machine Y").

Doing a complete analysis across all variables is often quite challenging and time-consuming. If you've done that analysis then, by default, you have identified the success and failure conditions for each test. There is no fixed test: the test type depends on the algorithm being exercised.

3. Aside from grumpy's as-usual good answer, I'd like to point out that dividing retval by answer, subtracting 1.0, and checking that the absolute value is within a certain range would also be a possibility: check that the proportional difference is not greater than, say, 1%. This works better when the allowed values range from tiny to huge. For example, with a correct answer in the range of 1000000 you could be off by 100 and still be only 0.01% off, while a small value (0.001) could be almost 10% off and still pass your current test.

By normalizing the result with the division by the correct answer, you would get a more manageable range to compare errors in.
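A minimal sketch of that relative check in C; the helper name is mine, not part of Check:

```c
#include <math.h>

/* Hypothetical helper (not provided by Check): passes when retval is
 * within rel_tol of answer as a proportion, e.g. rel_tol = 0.01 for 1%. */
static int close_enough_rel(double answer, double retval, double rel_tol)
{
    if (answer == 0.0)                   /* can't divide by a zero answer */
        return fabs(retval) <= rel_tol;
    return fabs(retval / answer - 1.0) <= rel_tol;
}
```

With that in place the assertion would read something like `fail_unless(close_enough_rel(answer, retval, 0.01), "more than 1% off");`.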

--
Mats

4. Thanks for the reply. I would rather not leave these functions untested, but I see what you are saying about the difficulty of coming up with tests. The foundation of what I am working with is functions involving probabilities, either read from a table or resulting from the evaluation of a distribution function. When a table is used, I have provided several different methods of interpolation between the table values. I feel this all should be tested, since it is the core of everything else I will be working with.

I am only working on x86-32 and x86-64 machines. Right now this is just a personal recreational project. I hadn't really given much thought to the compiler also playing a role, and I suppose that if I post my code up somewhere for anyone to download and compile, then I have no control over what compiler they may use.

I still haven't ruled out testing the values to a specified level of precision. My biggest concern with that approach is one of the themes of a numerical analysis book I've read, which goes something like:
"The largest potential loss of precision occurs when two numbers of about the same size are subtracted so that most of the leading digits cancel out."

This is exactly what is happening in:
Code:
`fail_unless(fabs(answer - retval) < 0.0001, "failing message");`
But these numbers are probabilities, between 0 and 1, so the precision I am losing may be something I don't care about. Thinking back to my high school days, the idea of significant figures comes to mind here. The probabilities in the tables I am reading may only go out a couple of places beyond the decimal point, so does it really matter if I am losing something on the scale of 0.1 × 10^-20? I suppose that is a question only I can answer, but that is my line of thinking.

5. The problem with precision loss is something like:
Code:
```c
double x = 2.0;
double y = 2.000000001;
double z = x - y;  /* almost all leading digits cancel */
```
But in your case, if all you care about is that you don't get more than about 1 in 10000 off from the actual number, with a limited range of 0...1, then I'm sure you are fine.

--
Mats

6. Find "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by Goldberg on the web. It is a long and in-depth discussion of all these kinds of problems.