Particle mesh allocation segfaults for larger grid sizes
Hello all, I'm currently studying how particle mesh simulations work, and as far as I can tell they boil down to a 3D grid of points equally spaced from each other. I figure an easy way to allocate this is a 3D array filled with pointers to structs, one per grid point. If there's a better way, now is the time to tell me.
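(One alternative I've been wondering about is a single flat heap allocation with manual (i,j,k) indexing. This is just an untested sketch of the idea, and the index math is my own guess at the usual convention:)
Code:
#include <stdlib.h>

struct node {
    int type;
};

int main(void) {
    size_t gl = 128;
    /* one contiguous heap block of gl*gl*gl nodes */
    struct node *grid = calloc(gl * gl * gl, sizeof *grid);
    if (grid == NULL)
        return 1;
    /* element (i,j,k) lives at index (i*gl + j)*gl + k */
    grid[(5 * gl + 6) * gl + 7].type = 42;
    free(grid);
    return 0;
}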
If the 3D array is fine, then yay for me, and here's the code I actually wrote:
Code:
#include <stdlib.h>
#include <stdio.h>

typedef struct node {
    int type;
} *node_ref;

int main(int argc, char **argv) {
    int gl = 128;
    /* one pointer per grid point */
    node_ref omfg[gl][gl][gl];

    /* allocate a node at every grid point */
    for (int i = 0; i < gl; i++) {
        for (int j = 0; j < gl; j++) {
            for (int k = 0; k < gl; k++) {
                omfg[i][j][k] = malloc(sizeof(struct node));
                omfg[i][j][k]->type = i;
            }
        }
    }

    /* free every node again */
    for (int i = 0; i < gl; i++) {
        for (int j = 0; j < gl; j++) {
            for (int k = 0; k < gl; k++) {
                free(omfg[i][j][k]);
            }
        }
    }
    return 0;
}
So as you can see, it's not that complicated. The funny thing is that it works for values of gl smaller than 128, but with gl=128 the program segfaults. With gl=64 it runs fine, though valgrind gives me an odd warning:

Warning: client switching stacks?  SP change: 0x7ff0005a0 --> 0x7fee00590
         to suppress, use: --max-stackframe=2097168 or greater

I don't actually know what that means, but I have 8 GB of RAM and I doubt I'm eating up that much of it.
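For scale, here's the back-of-the-envelope snippet I used to sanity-check the memory use (the figures in the comments are my own estimates, assuming 8-byte pointers and a 4-byte struct node):
Code:
#include <stdio.h>

struct node {
    int type;
};

int main(void) {
    size_t gl = 128;
    size_t n = gl * gl * gl;  /* 2,097,152 grid points */
    /* the 3D array holds one pointer per grid point */
    printf("pointers: %zu bytes\n", n * sizeof(void *));      /* ~16 MB with 8-byte pointers */
    /* plus one malloc'd node per grid point */
    printf("nodes:    %zu bytes\n", n * sizeof(struct node)); /* ~8 MB with 4-byte nodes */
    return 0;
}
That comes out to roughly 24 MB in total, which is nowhere near 8 GB.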
So what's wrong here?