Testing Floating Point Return Values


  1. #1
    Registered User
    Join Date
    May 2008
    Posts
    87

    Testing Floating Point Return Values

    I have several functions that return floating point values. I am currently exploring unit test suites and would like to write up some pretty extensive tests. My question is - what's the best way to test floating point values? A difficulty with floating point numbers is that if I perform the same calculation in a different order I may get a slightly different number due to shifting and roundoff.

    Quite some time ago I used a testing framework for Visual Basic that had an assertion function taking two floating-point values and an integer for the number of decimal places of precision to compare. I am playing around with Check right now, and I don't think it has a similar function.

    I suppose I can do something similar by going through the calculations on my own and then doing something like:
    Code:
    #include <math.h> /* for fabs() */

    float answer;
    float retval;

    answer = ...;  /* Some long computation here */
    retval = function_to_test(some, args); /* Supposedly, the same computation here */
    /* Test to some (arbitrary?) level of precision */
    /* fail_unless() is provided by Check...it is just the test/comparison function */
    fail_unless(fabs(answer - retval) < 0.0001, "failing message");
    Is that an acceptable way to test floating-point values? Is there a preferred or standard way of doing this?
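    Something like the places-based assertion from that VB framework could presumably be written by hand. Here is a rough sketch of what I have in mind (close_to_places is just a name I'm making up, not a Check function):
    Code:
    ```c
    #include <math.h>
    #include <stdio.h>

    /* Nonzero if a and b agree to `places` digits after the decimal
     * point, i.e. |a - b| < 0.5 * 10^-places. */
    static int close_to_places(double a, double b, int places)
    {
        return fabs(a - b) < 0.5 * pow(10.0, -places);
    }

    int main(void)
    {
        printf("%d\n", close_to_places(0.12345, 0.12349, 3)); /* agree to 3 places */
        printf("%d\n", close_to_places(0.12345, 0.12945, 3)); /* differ in the 3rd place */
        return 0;
    }
    ```
    The comparison could then be wrapped in fail_unless() just like the absolute-difference version above.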

    Thanks,
    Jason

  2. #2
    Registered User
    Join Date
    Jun 2005
    Posts
    6,245
    It is actually unusual to do unit tests involving floating point, precisely because the precision/range/etc of floating point types is variable between machines (or even compiler settings) and because numerical algorithms often have convergence criteria that depend on machine characteristics.

    Even assuming you're working on only one machine, considerable analysis is necessary to identify a representative set of (input data, output value) pairs that can be tested and that, if passed, allow you to conclude that the functions work correctly for all possible inputs. That analysis needs to be repeated on every target machine (or be an analysis that allows extrapolating the result "works on machine X" to "works on machine Y").

    Doing a complete analysis across all those variables is often quite challenging and time-consuming. If you've done that analysis then, as a by-product, you have identified the success and failure conditions for each test. There is no fixed test: the kind of test depends on the algorithm being exercised.

  3. #3
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Aside from grumpy's as-usual good answer, I'd like to point out another possibility: divide answer by retval, subtract 1.0, and check that the absolute value is within a certain range. In other words, check that the relative difference is no greater than, say, 1%. This works better when the allowed values range from tiny to huge: for a correct answer in the region of 1000000 you could be off by 100 and still be only 0.01% off, while a small value (0.001) could be almost 10% off and still pass your current absolute test.

    By normalizing the result through division by the correct answer, you get a more manageable range in which to compare errors.
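    In code, the check I'm describing would look something like this (rel_close is just an illustrative name, not part of Check):
    Code:
    ```c
    #include <math.h>
    #include <stdio.h>

    /* Nonzero if retval is within `tol` relative error of answer,
     * e.g. tol = 0.01 for "within 1%".  Assumes answer != 0. */
    static int rel_close(double answer, double retval, double tol)
    {
        return fabs(retval / answer - 1.0) < tol;
    }

    int main(void)
    {
        /* Large magnitude: off by 100 out of 1000000 is only 0.01%, passes. */
        printf("%d\n", rel_close(1000000.0, 1000100.0, 0.01));
        /* Small magnitude: 0.0001 absolute error on 0.001 is 10%, fails. */
        printf("%d\n", rel_close(0.001, 0.0011, 0.01));
        return 0;
    }
    ```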

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  4. #4
    Registered User
    Join Date
    May 2008
    Posts
    87
    Thanks for the reply. I would rather not let these functions go untested, but I see what you are saying about the difficulty in coming up with tests. The foundation of what I am working with is functions involving probabilities - either read from a table or resulting from the evaluation of a distribution function. When a table is used, I have provided for several different methods of interpolation between the table values. I feel that this all should be tested since it is the core of everything else I will be working with.

    I am only working on x86-32 and x86-64 machines. Right now this is just a personal recreational project. I hadn't really given much thought to the compiler also playing a role, and I suppose that if I post my code up somewhere for anyone to download and compile, then I have no control over what compiler they may use.

    I still haven't ruled out testing the values to a specified level of precision. My biggest concern with that approach is one of the themes of a numerical analysis book I've read that goes something like,
    "The largest potential loss of precision occurs when two numbers of about the same size are subtracted so that most of the leading digits cancel out."

    This is exactly what is happening in:
    Code:
    fail_unless(fabs(answer - retval) < 0.0001, "failing message");
    But these numbers are probabilities, between 0 and 1, so the precision I am losing may be something I don't care about. Thinking back to my high-school days, the idea of significant figures comes to mind here. The probabilities in the tables I am reading may only go out a couple of places beyond the decimal point, so does it really matter if I am losing something on the scale of 0.1 × 10^-20? I suppose that is really a question only I can answer, but that is my line of thinking.

  5. #5
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    The problem with precision loss is something like
    Code:
    double x = 2.0;
    double y = 2.000000001;
    double z = x - y; /* nearly all leading digits cancel; few significant digits survive */
    But in your case, if all you care about is that you don't get more than about 1 in 10000 off from the actual number, with a limited range of 0...1, then I'm sure you are fine.
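    If you want to be safe at both ends of the range, you could combine an absolute and a relative tolerance, something like this (approx_eq is just an illustrative name, a common pattern rather than anything from Check):
    Code:
    ```c
    #include <math.h>
    #include <stdio.h>

    /* Nonzero if the values agree to within abs_tol absolutely or
     * within rel_tol relative to answer.  The absolute test stops the
     * relative test from blowing up when answer is near zero. */
    static int approx_eq(double answer, double retval,
                         double abs_tol, double rel_tol)
    {
        double diff = fabs(answer - retval);
        return diff < abs_tol || diff < rel_tol * fabs(answer);
    }

    int main(void)
    {
        /* Probabilities with a 1-in-10000 tolerance either way. */
        printf("%d\n", approx_eq(0.5, 0.500009, 0.0001, 0.0001)); /* passes */
        printf("%d\n", approx_eq(0.5, 0.51, 0.0001, 0.0001));     /* fails  */
        return 0;
    }
    ```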

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  6. #6
    and the hat of wrongness Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    32,484
    Find "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by Goldberg on the web.
    It is a long and in-depth discussion of all these kinds of problems.
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.
    I support http://www.ukip.org/ as the first necessary step to a free Europe.
