There is a long history here....
Back in the old days optimizing compilers (as we know them now) didn't exist. The compiler was very much a "Garbage In, Garbage Out" thing. Memory was much smaller than today, and CPU cycles much more expensive. I still remember using Lattice C, which required 96KB of RAM and two floppy disks...
Design decisions were made. One of them was that really trivial functions would be implemented as macros. The reason was that the time taken to set up the call, enter the function, and return would be a significant proportion of the execution time, and that outweighed other concerns like code size.
An easy example to talk about is "int isdigit(int c)":
Code:
int isdigit(int c) {
return (c >= '0' && c <= '9');
}
To save resources, and speed up the program it could be implemented as a macro:
Code:
#define isdigit(c) (c >= '0' && c <= '9')
This may have been the logical choice at the time, but implemented this way it introduces bugs. For example:
Code:
i += isdigit(getchar());
would become:
Code:
i += (getchar() >= '0' && getchar() <= '9');
And that calls getchar() twice, consuming two characters from the input instead of one.