-
Large 2D Arrays Problem
Hello all,
So I have a program for which it is necessary to have many (7) large arrays (on the order of 8000x8000) of type long double. Obviously, this has proven difficult for a novice programmer such as myself. My pointers are global... I suppose it does not have to be this way, but it made development of the program significantly easier, and I use this function to allocate the desired memory.
Code:
void init(void)
{
    int i = 0;

    r  = malloc(sizeof(long double) * NUM_PARTICLES);
    rx = malloc(sizeof(long double) * NUM_PARTICLES);
    ry = malloc(sizeof(long double) * NUM_PARTICLES);
    rz = malloc(sizeof(long double) * NUM_PARTICLES);
    x  = malloc(sizeof(long double) * NUM_PARTICLES);
    y  = malloc(sizeof(long double) * NUM_PARTICLES);
    z  = malloc(sizeof(long double) * NUM_PARTICLES);
    for (i = 0; i < NUM_PARTICLES; ++i) {
        r[i]  = malloc(sizeof(long double) * NUM_PARTICLES);
        rx[i] = malloc(sizeof(long double) * NUM_PARTICLES);
        ry[i] = malloc(sizeof(long double) * NUM_PARTICLES);
        rz[i] = malloc(sizeof(long double) * NUM_PARTICLES);
    }
    rAA = malloc(sizeof(long double) * NUM_A);
    rAB = malloc(sizeof(long double) * NUM_A);
    rBB = malloc(sizeof(long double) * NUM_B);
    for (i = 0; i < NUM_A; ++i) {
        rAA[i] = malloc(sizeof(long double) * NUM_A);
    }
    for (i = 0; i < NUM_B; ++i) {
        rAB[i] = malloc(sizeof(long double) * NUM_B);
        rBB[i] = malloc(sizeof(long double) * NUM_B);
    }
}
The memory is freed with:
Code:
void delete(void)
{
    int i = 0;

    for (i = 0; i < NUM_PARTICLES; ++i) {
        free(r[i]);
        free(rx[i]);
        free(ry[i]);
        free(rz[i]);
    }
    free(r); free(rx); free(ry); free(rz);
    free(x); free(y); free(z);
    for (i = 0; i < NUM_A; ++i) {
        free(rAA[i]);
    }
    for (i = 0; i < NUM_B; ++i) {
        free(rAB[i]);
        free(rBB[i]);
    }
    free(rAA); free(rAB); free(rBB);
}
Ideally I would then run through the rest of my program, which does some calculations on particle positions generated by a computer simulation and outputs the results to files. That all works fine, as I tested it on very small systems. When I attempted the large system, however, I was forced to switch to malloc for the large data sets, and I have become confused. My program runs for a few seconds, prints the following errors, and I have to suspend it from there.
Code:
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
a.out(3088) malloc: *** mmap(size=143360) failed (error code=12)
[the same three lines repeat many more times]
I have no clue what's going on. Any help would be hugely appreciated.
Cheers,
Paddon
-
> So I have a program for which it is necessary to have many (7) large arrays (order of 8000x8000) of type long double
Long double is 10 bytes IIRC (and sizeof often reports 12 or 16 because of alignment padding).
10 * 8000 * 8000 * 7 = 4,480,000,000
That's about 4.5GB, which is well past ALL the memory you could ever have on a 32-bit system.
> r = malloc(sizeof(long double)*NUM_PARTICLES);
Also, since this allocates an array of row pointers, it should use the sizeof a pointer, not a long double:
r = malloc(sizeof(long double*)*NUM_PARTICLES);
(Likewise for rx, ry, rz, rAA, rAB, and rBB.)
-
So...
Judging by that, I may need to rethink how I am performing these calculations.
-
You may need to move to 64-bit computing here.
Unless you can trim the precision requirements, you may have to use files - perhaps memory-mapped files. The speed difference is vast, though.
Perhaps you can do the work on one array at a time?
-
I think I can accomplish the same thing by performing the calculations on one array at a time and then dumping its results into a histogram.
Which means I would only ever be working with a single array of N-1 values (N ~ 8000).
Thanks for all your help! I would have spent hours looking for errors in my code had no one enlightened me to the fact that I was trying to use such a ridiculous amount of resources.