While writing some very calculation-heavy code today, I came to think of two things:
1: Performance of raw calculation vs. table lookup
Suppose I have a nested for loop:
Code:
for (i = 0; i < i_max; i++)
{
    for (j = 0; j < j_max; j++)
    {
        a = some_array[j] * 2;
        b = some_other_array[i];
        c[i] = a + b;
    }
}
Would it be more efficient to calculate all the a's first in some other loop and then simply read them from memory? That does require malloc'ing space for another array to hold the a's, but it seems wasteful to recompute the a's over and over. How much faster is it to read a value from memory than to do the calculation explicitly (assuming all numbers are doubles)? Of course there are several unknowns in this question (most notably i_max and j_max), but can anything be said in general? I'd just like a rule of thumb, since I try to write my programs as efficiently as possible.
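Since a depends only on j, the precompute-first idea can be sketched like this (a minimal sketch; the function name fill_c and the size_t parameters are my own additions, not from the original snippet): the j-dependent product is computed j_max times instead of i_max * j_max times.

```c
#include <stdlib.h>

/* Precompute the j-dependent values once, then reuse them in the
   nested loop. a_cache[j] = some_array[j] * 2 is computed j_max
   times instead of i_max * j_max times. */
void fill_c(const double *some_array, const double *some_other_array,
            double *c, size_t i_max, size_t j_max)
{
    double *a_cache = malloc(j_max * sizeof *a_cache);
    if (a_cache == NULL)
        return;

    for (size_t j = 0; j < j_max; j++)
        a_cache[j] = some_array[j] * 2;

    for (size_t i = 0; i < i_max; i++)
        for (size_t j = 0; j < j_max; j++)
            c[i] = a_cache[j] + some_other_array[i];

    free(a_cache);
}
```

Two caveats: a single double multiply is usually very cheap compared to a cache miss, so an optimizing compiler will often hoist an invariant expression like this on its own, and the table only wins if it stays in cache. Also note that, as in the original snippet, c[i] is overwritten on every inner iteration, so only the last j survives; presumably the real code accumulates or uses c[i][j].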
2. My second question is about the inner workings of free(). Suppose I have an array I no longer need; call it arr. The equivalence of arrays and pointers says that an array name decays to a pointer to its first element. So when I call free(arr), how does free() know where to stop? What prevents it from simply continuing to free memory until it segfaults? Somehow free() sees past the equivalence between arrays and pointers to their first element: it always knows where the allocation ends.
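As I understand it, free() doesn't look at the array at all: malloc() records the block's size in the allocator's own bookkeeping (in many implementations, a hidden header just before the address it returns), and free() consults that record. A minimal sketch, with hypothetical helper names of my own:

```c
#include <stdlib.h>

/* Allocate a zeroed buffer of n doubles. The allocator remembers
   the size of this block internally -- commonly in a hidden header
   just before the returned address -- which is how free() later
   knows how much to release. It never scans the array's contents. */
double *make_buffer(size_t n)
{
    return calloc(n, sizeof(double));
}

void release_buffer(double *p)
{
    /* p must be the exact pointer the allocator returned.
       free(p + 1) would be undefined behavior, because that
       address has no allocator bookkeeping attached to it. */
    free(p);
}
```

This also explains why free() only works on pointers obtained from malloc()/calloc()/realloc(): a stack array like double arr[10] decays to a pointer too, but it has no allocator record, so passing it to free() is undefined behavior.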
I hope I have made myself clear enough, if not just ask and I will try to elaborate.