I'm working on a physical simulation in C.
I need to allocate very large arrays, so I use a lot of my PC's memory. I'm not very familiar with these topics, so I have some questions about static memory allocation. (Sorry, but I can't manage to find the information I need online.)
(I'm not using dynamic allocation because it's not necessary for my purposes.)
I work on the university lab machines (roughly 0.5 GB of total memory when I run "top" in the shell) and on my laptop (roughly 3 GB).
My question is: if I try to allocate too much static memory for my process, will I get a segfault?
I'm asking because my simulation works fine with certain parameters, but when the arrays exceed some "critical dimensions", the program fails with a segfault.
What's more, when I run it under gdb, the segfault occurs at the declaration of the first array, regardless of that array's dimensions.
Another puzzle is that I can run the simulation with bigger arrays on the university machines than on my laptop.
That is, on the 0.5 GB PC I can push the array dimensions further than on the 3 GB machine...
Moreover, suppose I'm working a little below these "critical dimensions" on the 0.5 GB machine: if I run "top", I find the memory almost entirely occupied (as expected), whereas on the 3 GB PC more than 2 GB remain free.
I hope you can give me some good advice...