Benchmarking and Optimisation Q's

This is a discussion on Benchmarking and Optimisation Q's within the C++ Programming forums, part of the General Programming Boards category.

  1. #1
    Registered User
    Join Date
    Oct 2006
    Posts
    26

    Benchmarking and Optimisation Q's

    1) I've always wondered how, at the most basic level, mathematical operations are handled efficiently. I've looked through math.h several times really carefully, trying to at least understand or make out what each bit was doing. In general I succeeded, but I didn't at all get how the code computed trigonometric functions. I'm wondering whether it is a machine routine or what. I'd be thrilled to find out how it works.

    2) Another thing I've always wanted to check is how much better one way of writing a program is over another. I can reason it out to a large extent, and make small calculations of the number of calls required and so on, but I can't get any quantitative comparison. Also, when programs are nearly the same, I don't know how to compare them. So is there a nice little package that could print out the execution time and memory usage? (I'm not writing very complex programs yet, so I generally compute the difference between the start and finish times using time.h, but I'd like a better way.)

    Thanks for all the help.

  2. #2
    Its hard... But im here swgh's Avatar
    Join Date
    Apr 2005
    Location
    England
    Posts
    1,475
    1) Best bet, I think, would be to contact the makers of the compiler in question; they may be able to help you further. Look on their website for email or contact details.

    2) The time a program takes to execute will vary with three things:

    1) Does the user have to input - this takes time

    2) Size of the program - even the smallest program can take a while to execute. You can record the time when the program starts and again when it finishes, then report back the seconds the execution took. Use the <ctime> header for this.

    3) OS. Depending on the operating system it is running on, portability is still a slight issue for C/C++ programs. If it is run on an older system, the processor may take extra time to complete the tasks.

    These are only vague answers; more people on the board may have a better response.
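    The <ctime> idea above can be sketched like this (the busy loop is just a hypothetical stand-in for whatever work you want to measure):

```cpp
#include <ctime>

// Measure the CPU seconds consumed by some work, using std::clock().
// The busy loop below is only a placeholder for the real program.
double measure_busy_loop() {
    std::clock_t start = std::clock();
    volatile long sum = 0;                 // volatile so the loop isn't optimised away
    for (long i = 0; i < 10000000; ++i) sum += i;
    std::clock_t end = std::clock();
    return static_cast<double>(end - start) / CLOCKS_PER_SEC;
}
```

    Note that clock() measures CPU time, not wall-clock time, so time spent waiting for user input is generally not counted.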

  3. #3
    pwns nooblars
    Join Date
    Oct 2005
    Location
    Portland, Or
    Posts
    1,094
    2. You are looking for profiling... google it and you will find many resources.

  4. #4
    Registered User
    Join Date
    Oct 2006
    Posts
    26
    Ok, I'm using the gcc compiler, but I've checked MSVC++ 8 too. I thought that something kind of basic like the math header would have been standardised across all compilers by now. But the question was absolutely general.
    Thanks for the quick replies

  5. #5
    Cat Lover
    Join Date
    May 2005
    Location
    Sydney, Australia
    Posts
    109
    I believe trig functions and similar are done using a Taylor Series to approximate to a given precision.
    http://en.wikipedia.org/wiki/Taylor_series

    Note: I said believe, I could easily be wrong, and implementations can vary.
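    To illustrate the Taylor-series idea (just a sketch of the textbook approach, not necessarily what any real libm does):

```cpp
#include <cmath>

// Approximate sin(x) with the first few terms of its Taylor series
// around 0:  x - x^3/3! + x^5/5! - x^7/7! + ...
// Each term is the previous one times -x^2 / ((2k)(2k+1)).
double taylor_sin(double x, int terms = 6) {
    double term = x;   // current term, starts at x
    double sum  = x;
    for (int k = 1; k < terms; ++k) {
        term *= -x * x / ((2.0 * k) * (2.0 * k + 1.0));
        sum  += term;
    }
    return sum;
}
```

    For |x| near 0 this converges very quickly; for large |x| you would reduce the argument into a small range first, which is where real implementations spend most of their effort.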

  6. #6
    Registered User
    Join Date
    Oct 2006
    Posts
    26
    @ Dweia, I did feel the same, but I don't think that would be very efficient. I've heard it can be done in assembly, but I know no details (which is why I posted here).

  7. #7
    and the hat of wrongness Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    32,335
    You're not going to get much information on efficiency from looking at math.h.

    Header files are interfaces, not implementations. Everyone has to provide the same interface. After that, the compiler / standard library writer has a free hand in deciding how to implement things, so long as the end result conforms to the interface.

    If you want to read the implementation, then you typically have two choices.
    1. Download the source code for glibc, if you're using GNU
    2. Pay Microsoft lots of $$$ to get all the debug / symbol tools for your visual studio.
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.
    I support http://www.ukip.org/ as the first necessary step to a free Europe.

  8. #8
    aoeuhtns
    Join Date
    Jul 2005
    Posts
    581
    Quote Originally Posted by Dweia
    I believe trig functions and similar are done using a Taylor Series to approximate to a given precision.
    http://en.wikipedia.org/wiki/Taylor_series

    Note: I said believe, I could easily be wrong, and implementations can vary.
    Yep. The IA-32 instruction set already has fsin, fcos, fptan, and fpatan instructions, plus fyl2x (y*log_2(x)), along with instructions that load constants such as log_2(e), pi, etc.
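    If you're curious, here's a GCC-specific inline-assembly sketch of invoking fsin directly (x86 with an x87 FPU only, and not necessarily what your libm actually does):

```cpp
#include <cmath>

// Compute sin(x) via the x87 fsin instruction using GCC extended asm.
// "=t" means the result is left on top of the x87 register stack;
// "0" places the input in that same register. Valid for |x| < 2^63.
double fsin_hw(double x) {
    double result;
    __asm__ ("fsin" : "=t"(result) : "0"(x));
    return result;
}
```

    On modern hardware a well-tuned polynomial in SSE registers often beats fsin, which is one reason library implementations don't always use it.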
    There are 10 types of people in this world, those who cringed when reading the beginning of this sentence and those who salivated to how superior they are for understanding something as simple as binary.

  9. #9
    Registered User
    Join Date
    Oct 2006
    Posts
    26
    Ok, I'm reviving this thread. I've gotten glibc 2.4 and searched it absolutely head to toe. I looked into math.h, which referred all its prototypes to the file mathcalls.h, which contains macros of the form:
    #define __MATHCALLX(function,suffix, args, attrib) \

    eg: __MATHCALL (cos,, (_Mdouble_ __x));

    Other than that, both files contain no other information. I've looked around the other files, but found nothing of any use. Most of them referred to the ieee754 definitions, which I couldn't find. I'd really appreciate some help on how to find this stuff.

    Is the code all in assembly? And is it integrated into the compiler?

    I've also seen __builtin several times, which gives me the above suspicion.
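    For what it's worth, the __builtin prefix is GCC's mechanism for functions the compiler itself recognises and may expand inline (or constant-fold) instead of calling into libm. A tiny sketch:

```cpp
// GCC (and Clang) recognise __builtin_cos; with a constant argument
// the call can be folded away entirely at compile time, so no libm
// code is ever executed for it.
double builtin_cos_example() {
    return __builtin_cos(0.0);
}
```

    So seeing __builtin in the headers doesn't mean the math is "in the compiler" in general, only that the compiler is allowed to special-case those calls when it can.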

  10. #10
    and the hat of wrongness Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    32,335
    Listen, code isn't in header files.

    It is in source files, which probably means either math.c or math.asm, somewhere in the glibc SOURCE code distribution.

  11. #11
    Registered User
    Join Date
    Oct 2006
    Posts
    26
    Thanks a lot Salem, I found what I was looking for there (assembly). That's what I was asking before, whether it was in assembly. I guess I didn't really understand the reply. Now I've got to understand the code.

  12. #12
    Registered User
    Join Date
    Oct 2006
    Posts
    26
    As I suspected, the approximation isn't very Taylor-like. For e^x:
    /* How this works:

    The input value, x, is written as

    x = n * ln(2) + t/512 + delta[t] + x;

    where:
    - n is an integer, 127 >= n >= -150;
    - t is an integer, 177 >= t >= -177
    - delta is based on a table entry, delta[t] < 2^-28
    - x is whatever is left, |x| < 2^-10

    Then e^x is approximated as

    e^x = 2^n ( e^(t/512 + delta[t])
    + ( e^(t/512 + delta[t])
    * ( p(x + delta[t] + n * ln(2)) - delta ) ) )

    where
    - p(x) is a polynomial approximating e(x)-1;
    - e^(t/512 + delta[t]) is obtained from a table.

    The table used is the same one as for the double precision version;
    since we have the table, we might as well use it.

    It turns out to be faster to do calculations in double precision than
    to perform an 'accurate table method' expf, because of the range reduction
    overhead (compare exp2f).
    */

    copied from the comments of the glibc ieee754 e_expf source
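    The delta-table machinery is beyond a quick sketch, but the core range-reduction idea (write x = n*ln(2) + r, approximate over the small remainder, then scale by 2^n) can be illustrated like this. The function name and the plain Taylor polynomial standing in for glibc's p(x) are my own simplifications:

```cpp
#include <cmath>

// Simplified range-reduction exp: no table, no delta correction.
double range_reduced_exp(double x) {
    const double ln2 = 0.69314718055994530942;
    // Step 1: write x = n*ln(2) + r with |r| <= ln(2)/2 ~ 0.347.
    int n = static_cast<int>(std::lround(x / ln2));
    double r = x - n * ln2;
    // Step 2: degree-5 Taylor polynomial for e^r - 1 (Horner form).
    double p = r * (1 + r * (1.0/2 + r * (1.0/6 + r * (1.0/24 + r / 120))));
    // Step 3: scale by 2^n via direct exponent manipulation.
    return std::ldexp(1.0 + p, n);
}
```

    Because r is small, a short polynomial already gives roughly single-precision accuracy; glibc's table of precomputed e^(t/512 + delta[t]) values shrinks the remainder even further, which is how it reaches full precision with a cheap polynomial.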
