This is a subject I know almost nothing about, and I was hoping to gain enlightenment from some of you who are far more versed in the ways of C.
I have read references to function-call overhead, and suggestions that using macros instead would avoid the performance hit. Is this true, or still true with modern compilers?
I am developing a scientific computing application that does not scale well with system size, by which I mean the molecular system (the number of atoms and electrons) that the calculations are being done on.
It's a stochastic method called Diffusion Monte Carlo, used to solve otherwise intractable multidimensional wave equations by weighted sampling.
The more particles, the longer it takes, so speed is of the essence (even on very fast computers or clusters).
Would it be a very bad thing to use function calls in the inner loop? Is there a way to optimise this in C, or must I either write out all the calculations inline or use macros to get around it?
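To make the question concrete, here is a toy sketch of the three alternatives I mean, using a made-up squared-distance calculation as a stand-in for the real inner-loop work (the names are just for illustration):

```c
/* Plain function: may incur call overhead if the compiler
   chooses not to inline it across the call site. */
double dist2_fn(double dx, double dy, double dz) {
    return dx * dx + dy * dy + dz * dz;
}

/* Macro: pure textual substitution, so no call overhead, but each
   argument is re-evaluated wherever it appears, so something like
   DIST2(x++, y, z) would misbehave, and there is no type checking. */
#define DIST2(dx, dy, dz) ((dx) * (dx) + (dy) * (dy) + (dz) * (dz))

/* C99 inline function: the usual middle ground; the compiler can
   expand it in place while keeping normal function semantics. */
static inline double dist2_inline(double dx, double dy, double dz) {
    return dx * dx + dy * dy + dz * dz;
}
```

My (possibly outdated) understanding is that at optimisation levels like -O2, a compiler will often inline small functions like these anyway, which is part of what I am hoping to hear about.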
How real is this performance hit with function calls?
Many thanks in advance for any responses.
By the way, I do of course intend to do some benchmarking to get quantitative information on the alternative techniques, so I am not just expecting answers on a plate. For all I know this might be a very hard question to answer in general terms (it might depend greatly on the type of function call, etc.). But I'd love to hear other people's experiences, successes, and failures related to this issue, and what they found worked best for them.
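For what it's worth, the kind of micro-benchmark I have in mind looks roughly like this (a sketch only; the squaring loop is a placeholder for the real per-particle work):

```c
#include <time.h>

#define SQUARE_MACRO(x) ((x) * (x))

static double square_fn(double x) { return x * x; }

/* Time n iterations of the plain-function version; returns seconds.
   The volatile sink keeps the compiler from optimising the loop away. */
static double bench_function(long n) {
    volatile double sink = 0.0;
    clock_t t0 = clock();
    for (long i = 0; i < n; i++)
        sink += square_fn((double)i);
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

/* The same loop, but using the macro instead of a function call. */
static double bench_macro(long n) {
    volatile double sink = 0.0;
    clock_t t0 = clock();
    for (long i = 0; i < n; i++)
        sink += SQUARE_MACRO((double)i);
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```

I assume the comparison only means anything if I run it both with and without optimisation (e.g. -O0 versus -O2), since inlining presumably erases the difference in the optimised case; that is exactly the sort of thing I am hoping people can confirm or correct.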