Thread: Static Memory allocation

  1. #1
    Registered User
    Join Date
    Dec 2008
    Posts
    10

    Static Memory allocation

    Hello everybody.
I'm working on a physical simulation in C.
    I need to allocate very large arrays, so I use a lot of memory on my PC... I'm not very familiar with this kind of topic, so I have some questions about static memory allocation. (Sorry, but I couldn't find the info I need on the net.)
    (I'm not using dynamic allocation because it's not necessary for my purposes.)

I work on the university lab machines (more or less 0.5 GB of total memory when I type "top" in the shell) and on my laptop (more or less 3 GB...).
    My question is: if I try to allocate too much static memory for my process, will I get a segfault?

I'm asking because my simulation works well with certain parameters, but when the arrays exceed some "critical dimensions" the program stops working and I get a segfault.
    Besides, when I run it under gdb, the segfault occurs on the declaration of the first array, regardless of that array's dimensions.

Another strange thing is that I can run simulations with bigger arrays on the university machines than on my laptop.
    I mean, when I work on the 0.5 GB PC, I can push the dimensions of the arrays further than when I work on the 3 GB machine...
    Moreover, suppose I'm working a little below these "critical dimensions" on the 0.5 GB machine and I type "top" in the shell: I find that the memory is almost all occupied (as expected), while on the 3 GB PC I have much more than 2 GB of free memory.

I hope you can give me some good advice...
    Bye
    Claudio

  2. #2
    ... kermit's Avatar
    Join Date
    Jan 2003
    Posts
    1,534
I would not say that you get a segfault just for using a lot of memory; more likely you have a bug in your code. Why not post a snippet so we can have a look?

  3. #3
    and the Hat of Guessing tabstop's Avatar
    Join Date
    Nov 2007
    Posts
    14,336
The catch with that kind of "static" allocation is that arrays declared inside a function actually live on the stack, and the stack can't grow to use that 0.5GB (or 3GB) of RAM: it is usually limited to somewhere between 1MB and 8MB (depending on your settings). If you want more, you'll have to allocate it yourself (dynamically).
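
    A minimal sketch of what that looks like (the size and the variable name here are made up, not taken from the OP's program):

    Code:
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = 1000000;                 /* one million doubles = ~8 MB, far beyond a typical stack limit */
        double *a = malloc(n * sizeof *a);  /* this memory comes from the heap, not the stack */
        if (a == NULL) {                    /* malloc tells you when the request cannot be satisfied */
            fprintf(stderr, "out of memory\n");
            return 1;
        }
        a[0] = 1.0;                         /* indexed exactly like an ordinary array */
        free(a);                            /* hand it back when done */
        return 0;
    }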

  4. #4
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
It does sound like you are trying to allocate too much on the stack.
    This would indeed crash the program, especially with overly large local arrays; it's a somewhat common mistake among newbies.
    Still, as kermit says, it is best to show an example so we can tell you more instead of merely guessing.

  5. #5
    Registered User
    Join Date
    Oct 2008
    Location
    TX
    Posts
    2,059
What platforms are you working on? Your PC sounds like a Unix workstation, given the references to top and the shell, and I assume your laptop is a Windows machine. The difference in memory utilization between the PC and the laptop may very well be due to the different memory management schemes used by Unix and Windows.

I'm not sure how a process is treated under Windows, but on Unix it is divided into three logical sections: text, data, and stack. If your array is allocated outside of any function, it falls into the data segment, whose size is controlled on most Unixes by the maxdsize (max data size) kernel parameter; if it is declared inside a function, it goes onto the stack and is controlled by the maxssize (max stack size) kernel parameter. If you want to test this, you can bump the relevant kernel parameter and see whether your array can now be allocated beyond its "critical dimensions" without segfaulting.
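
    To illustrate the difference, a sketch with made-up names and sizes (not the OP's arrays):

    Code:
    /* file scope: static storage duration, so this ~8 MB comes from the data/BSS segment */
    double big_table[1000][1000];

    int main(void)
    {
        /* block scope without "static": automatic storage, carved out of the stack,
           so this ~8 MB is very likely to blow a default stack limit */
        double local_table[1000][1000];

        big_table[0][0]   = 1.0;
        local_table[0][0] = 2.0;    /* if the program even gets this far */
        return 0;
    }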

    Is the 0.5 Gb on your pc virtual or physical memory and do you have any swap space allocated on it?

  6. #6
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
Platform should not matter. If it does, then the code will be platform-specific and not portable, which is bad.
    The solution to the problem, if the size is indeed the problem, is to use dynamic memory. End of story.

  7. #7
    Registered User
    Join Date
    Oct 2008
    Location
    TX
    Posts
    2,059
    Quote Originally Posted by Elysia View Post
    Platform should not matter. If it does, then it will be platform specific and not portable which is bad.
    The solution to the problem, if indeed the size is the problem, is to use dynamic memory. End of story.
    Size will be a problem even with dynamic memory as you can run out of space in the heap.

  8. #8
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
Yes, naturally. But I was referring to hitting a limit with the static allocation.
    Dynamic memory is the way to get access to all of the memory that is actually available.
    Of course, if the problem is that the program needs more memory than is available... then I am afraid there is no solution but to get more RAM or consume less.
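
    For what it's worth, a sketch of the "fail cleanly" part (the size and names are invented; calloc zero-fills the block, which matches the OP's ={} initializers):

    Code:
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t count = 3000000;                  /* placeholder size */
        double *u = calloc(count, sizeof *u);    /* zero-initialized, like = {} */
        if (u == NULL) {                         /* this is how you find out you asked for too much */
            fprintf(stderr, "not enough memory for %zu doubles\n", count);
            return EXIT_FAILURE;
        }
        /* ... run the simulation ... */
        free(u);
        return 0;
    }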

  9. #9
    Registered User
    Join Date
    Dec 2008
    Posts
    10
When I run the simulation with "smaller" arrays it works, and I get the results I expect from it. The segfault appears when I increase the dimensions of the variables. That's why I don't think there's a bug.
    I have more than 1000 lines of code spread over several files, which is why I didn't post it. Here are the declarations of the arrays:

Code:
    int main(){
    
      double U[Nsiti][Ndim][N][N][2]={};
      double Tarray0[Lato/d0][Lato*Lato][2][Rmax-Rmin][N*N][N*N][2]={};
      double PP[Lato*Lato][2][Rmax-Rmin]={};
    Here's a typical dimension setting

    Code:
    double U[13824][3][3][3][2]={};
    double Tarray0[12][576][2][1][9][9][2]={};
    double PP[576][2][1]={};

And then I have some other arrays which are created locally in some subroutines...

I work on Unix. Here's what I get on my laptop with "top" in the shell; when I say 0.5 GB, I'm referring to the Mem line.

    Mem: 3106372k total, 716412k used, 2389960k free, 18072k buffers
    Swap: 2080376k total, 0k used, 2080376k free, 305072k cached

  10. #10
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Only the first array consumes 5832 KB of memory, and seeing as the stack is limited to 1 MB...
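
    (For anyone checking the arithmetic, a quick throwaway program -- it assumes an 8-byte double, which is what essentially every current machine uses:)

    Code:
    #include <stdio.h>

    int main(void)
    {
        /* dimensions taken from the post above */
        printf("%lu bytes\n", (unsigned long)(sizeof(double) * 13824 * 3 * 3 * 3 * 2));  /* prints 5971968, i.e. 5832 KB */
        return 0;
    }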
    Last edited by Elysia; 12-20-2008 at 01:46 PM.

  11. #11
    ... kermit's Avatar
    Join Date
    Jan 2003
    Posts
    1,534
    Quote Originally Posted by Elysia View Post
    and seeing as the stack is limited to 1 MB...
    I am not quite following you here. Where do you get the 1MB from? (Not trying to nitpick, but rather trying to learn )

  12. #12
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by kermit View Post
    I am not quite following you here. Where do you get the 1MB from? (Not trying to nitpick, but rather trying to learn )
    From nowhere... The stack isn't guaranteed to be any particular size. For that matter, I'm not sure the C standard forces there to be any stack at all, so long as the language semantics work (i.e. the "stack" could be implemented as a linked list of activation records)

  13. #13
    ... kermit's Avatar
    Join Date
    Jan 2003
    Posts
    1,534
I was just wondering if I had missed something pertinent in one of the posts. As noted, the stack size varies, but whatever the case, it is not suitable for really large static arrays.

I suppose it does not matter much to the OP -- it looks like he is going to have to go ahead and use dynamic memory allocation -- but if he wanted to get an idea of how much stack memory (in kilobytes) he is allowed per process, he could run

    Code:
     ulimit -s
on his machine. This setting differs from machine to machine, and the command itself may also vary (i.e., it may not work as described everywhere). If it does exist, one can change the stack size with the same command. If the stack size is indeed the trouble for the OP (as seems quite probable), then there is a very good chance that ulimit -s will yield different values on each machine he runs the program on.
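
    The same number can also be read from inside the program on a POSIX system -- a sketch, not something the OP strictly needs:

    Code:
    #include <stdio.h>
    #include <sys/resource.h>    /* POSIX getrlimit() */

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) == 0)
            printf("soft stack limit: %lu bytes\n", (unsigned long)rl.rlim_cur);
        return 0;
    }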
    Last edited by kermit; 12-19-2008 at 03:57 PM.

  14. #14
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by kermit View Post
I was just wondering if I had missed something pertinent in one of the posts. As noted, the stack size varies, but whatever the case, it is not suitable for really large static arrays.
    An array on the stack is by definition not "static."

    I suppose it does not matter much to the OP -- it looks like he is going to have to go ahead and use dynamic memory allocation
    I agree -- what's so hard about calling malloc()?
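
    For instance, something along these lines keeps the same indexing as the OP's U array while taking the storage from the heap (a sketch -- the enum just stands in for whatever constants the OP really uses):

    Code:
    #include <stdlib.h>

    enum { Nsiti = 13824, Ndim = 3, N = 3 };   /* values borrowed from the OP's posted dimensions */

    int main(void)
    {
        /* one heap block, still indexed as U[i][j][k][l][m] */
        double (*U)[Ndim][N][N][2] = calloc(Nsiti, sizeof *U);
        if (U == NULL)
            return 1;                          /* allocation failed -- report and bail out */

        U[0][0][0][0][0] = 1.0;
        free(U);
        return 0;
    }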

  15. #15
    Registered User
    Join Date
    Oct 2008
    Location
    TX
    Posts
    2,059
    Quote Originally Posted by p3rry View Post
When I run the simulation with "smaller" arrays it works, and I get the results I expect from it. The segfault appears when I increase the dimensions of the variables. That's why I don't think there's a bug.
    I have more than 1000 lines of code spread over several files, which is why I didn't post it. Here are the declarations of the arrays:
That doesn't stem from a bug in your code but from the fact that the bigger the array, the more memory is consumed.
    Quote Originally Posted by p3rry View Post
    Code:
    int main(){
    
      double U[Nsiti][Ndim][N][N][2]={};
      double Tarray0[Lato/d0][Lato*Lato][2][Rmax-Rmin][N*N][N*N][2]={};
      double PP[Lato*Lato][2][Rmax-Rmin]={};
    As your arrays are defined inside of main() all your storage is coming from the stack segment.
    Quote Originally Posted by p3rry View Post
    Here's a typical dimension setting

double U[13824][3][3][3][2]={};
    double Tarray0[12][576][2][1][9][9][2]={};
    double PP[576][2][1]={};

And then I have some other arrays which are created locally in some subroutines...
Considering that a double is 8 bytes on most machines, the storage requirement of those arrays is "13824*3*3*3*2 + 12*576*2*1*9*9*2 + 576*2*1" doubles times 8 bytes, which comes to roughly 23 MB, plus the size of the arrays defined in the other subroutines.
    Quote Originally Posted by p3rry View Post
I work on Unix. Here's what I get on my laptop with "top" in the shell; when I say 0.5 GB, I'm referring to the Mem line.

    Mem: 3106372k total, 716412k used, 2389960k free, 18072k buffers
    Swap: 2080376k total, 0k used, 2080376k free, 305072k cached
Physical memory on your machine is about 3 GB and swap is about 2 GB, so total virtual memory is about 5 GB, not 0.5 GB; I'm not sure where the 0.5 GB comes from.
