It's very much real. Here's an example benchmarking program:
Code:
itsme@dreams:~/C$ cat speeddiff.c
#include <stdio.h>
#include <sys/time.h>

#define macro_test(a, b) ((a)+(b))

int func_test(int a, int b)
{
    return a+b;
}

void show_time(char *str, struct timeval *tv)
{
    /* tv_sec and tv_usec aren't ints, so cast before printing */
    printf("%s: %ld.%06ld\n", str, (long)tv->tv_sec, (long)tv->tv_usec);
}

int main(void)
{
    struct timeval tv;
    unsigned int i;
    int val;

    gettimeofday(&tv, NULL);
    show_time("function time start", &tv);
    for (i = 0; i < 4000000; ++i)
        val = func_test(1, 1);
    gettimeofday(&tv, NULL);
    show_time("function time end", &tv);

    gettimeofday(&tv, NULL);
    show_time("macro time start", &tv);
    for (i = 0; i < 4000000; ++i)
        val = macro_test(1, 1);
    gettimeofday(&tv, NULL);
    show_time("macro time end", &tv);

    (void)val;  /* keep -Wall quiet about the unused result */
    return 0;
}
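The listing doesn't show the compile line, so this part is an assumption: the numbers below correspond to a plain unoptimized build (gcc here). With optimization turned on (-O2), the compiler is free to inline func_test or throw both loops away as dead code, which would erase the very difference you're trying to measure.
Code:
gcc -o speeddiff speeddiff.c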
And a couple of test runs...
Code:
itsme@dreams:~/C$ ./speeddiff
function time start: 1103103678.099678
function time end: 1103103678.258196
macro time start: 1103103678.258232
macro time end: 1103103678.302928
itsme@dreams:~/C$ ./speeddiff
function time start: 1103103679.053172
function time end: 1103103679.211655
macro time start: 1103103679.211692
macro time end: 1103103679.264009
So you can see the difference there: in the first run the function loop took about 0.159 seconds while the macro loop took about 0.045 seconds, and the second run is similar (about 0.158 vs 0.052), so the macro version comes out roughly 3-3.5x faster here.
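If you'd rather not subtract timestamps by hand, a small helper like this (my addition, not part of the original program) can be dropped into speeddiff.c; sample the start and end into two separate struct timevals and print the difference directly:
Code:
/* Hypothetical helper, not in the original program: microseconds
 * elapsed between two gettimeofday() samples. */
long elapsed_usec(const struct timeval *start, const struct timeval *end)
{
    return (end->tv_sec - start->tv_sec) * 1000000L
         + (end->tv_usec - start->tv_usec);
}
Then something like printf("function loop: %ld us\n", elapsed_usec(&t0, &t1)); gives you the elapsed time straight away.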
EDIT: Keep in mind that the more time your program spends inside the function body, the less the call overhead matters. The difference is most pronounced with a trivially simple function like the one I used here.
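To make that concrete, here's a sketch of a heavier variant (the busy loop and the names func_heavy/macro_heavy are my invention, not from the benchmark above). Each call now does 100 additions, so the per-call overhead is a small slice of the total work, and under the same unoptimized build the two timings should land much closer together:
Code:
#include <stdio.h>
#include <sys/time.h>

/* Same work as func_heavy, expanded inline; do/while(0) keeps the
 * macro safe to use as a single statement. */
#define macro_heavy(a, b, out) do {          \
        int _i, _s = 0;                      \
        for (_i = 0; _i < 100; ++_i)         \
            _s += (a) + (b) + _i;            \
        (out) = _s;                          \
    } while (0)

int func_heavy(int a, int b)
{
    int i, s = 0;
    for (i = 0; i < 100; ++i)   /* the "real" work per call */
        s += a + b + i;
    return s;
}

void show_time(char *str, struct timeval *tv)
{
    printf("%s: %ld.%06ld\n", str, (long)tv->tv_sec, (long)tv->tv_usec);
}

int main(void)
{
    struct timeval tv;
    unsigned int i;
    int val;

    gettimeofday(&tv, NULL);
    show_time("function time start", &tv);
    for (i = 0; i < 4000000; ++i)
        val = func_heavy(1, 1);
    gettimeofday(&tv, NULL);
    show_time("function time end", &tv);

    gettimeofday(&tv, NULL);
    show_time("macro time start", &tv);
    for (i = 0; i < 4000000; ++i)
        macro_heavy(1, 1, val);
    gettimeofday(&tv, NULL);
    show_time("macro time end", &tv);

    (void)val;  /* keep -Wall quiet about the unused result */
    return 0;
}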