Memory Fragmentation with Dynamic FIFO Queue

This is a discussion on Memory Fragmentation with Dynamic FIFO Queue within the Linux Programming forums, part of the Platform Specific Boards category.

  1. #1
    Registered User
    Join Date
    Jun 2008
    Posts
    93

    Memory Fragmentation with Dynamic FIFO Queue

    I have recently written the code for a dynamic FIFO queue whose nodes can have variable size. I have thoroughly tested the code and the init/add/remove operations work flawlessly. I have even run it against valgrind to verify that there are no memory leaks when allocating and freeing nodes. I am currently running the program on Debian 5.0 (kernel 2.6.26) with the default OS settings; I have not performed any memory tweaks to tune for performance. All of the allocations for each node in the queue are handled using the default malloc() and free().

    The problem occurs when I add a large number of small items to the queue. I have been watching my memory usage via the System Monitor in Debian, and after adding 4000 items of 1518 bytes each to the queue my memory usage jumps just as expected. However, when I move through my queue and remove each item, the memory usage does not appear to go down. When I run this test again without restarting my program, the system does not allocate any more memory than was allocated in the first run; it appears to reuse memory that was allocated previously.

    I ran a second test: instead of adding a large number of small nodes to the queue, I added just 30 items of 10MB each and then emptied my queue again. This time the memory went up and came back down as expected. I have been researching this problem for a few days now and I believe I have discovered the cause, but I have been unable to decide on a reasonable solution.

    Basically, what I have determined is that when I call free() to free my nodes, the memory is not being returned to the operating system. I found a good explanation at the link below.

    When Linux Runs Out of Memory - O'Reilly Media

    It states:

    "The decision on whether to use brk() or mmap() requires one simple check. If the request is equal or larger than M_MMAP_THRESHOLD, the allocator uses mmap(). If it is smaller, the allocator calls brk(). By default, M_MMAP_THRESHOLD is 128KB, but you may freely change it by using mallopt().

    In the OOM context, how ptmalloc frees memory blocks is interesting. Blocks allocated via mmap() get freed via an unmap() call, and thus become completely released. Freeing blocks allocated via brk() means marking them as free, but they remain under the allocator's control."

    Because the small blocks I am allocating are smaller than the mmap threshold, they are being allocated via brk(), and brk()-allocated memory is not released back to the OS right away when freed. I assume this is done mostly for performance: the allocator reuses the freed memory for future requests.
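    If the goal is simply to make free() really return memory to the OS, the article's own mechanism suggests one knob to try: lowering M_MMAP_THRESHOLD with mallopt() so that even small requests go through mmap(). This trades speed and per-allocation overhead for immediate release; a minimal sketch (glibc-specific, and note glibc also caps the number of mmap'd blocks via M_MMAP_MAX):

    ```c
    #include <malloc.h>   /* mallopt, M_MMAP_THRESHOLD -- glibc-specific */
    #include <stdlib.h>

    int main(void)
    {
        /* Ask ptmalloc to satisfy any request of 1 byte or more via mmap().
           Every such block is then unmapped and returned to the kernel
           on free(), at a real cost in allocation speed. */
        mallopt(M_MMAP_THRESHOLD, 1);

        /* A 1518-byte node now comes from mmap() instead of the brk heap. */
        void *node = malloc(1518);
        free(node);   /* munmap()ed -- resident memory actually drops */

        return 0;
    }
    ```

    Whether this is acceptable depends on how hot the allocation path is; with thousands of small nodes, the per-node page overhead and syscall cost can be significant.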

    This is a big problem for me because the software that uses my queue also reports on the amount of free memory the system currently has. Anomalies such as the one described above cause the memory usage updates sent from my program to give a completely inaccurate representation of the amount of memory currently in use. In addition, as I continue to add and remove more and more items, I notice a fairly large and rapid performance degradation in my software.

    I don't know whether what I described above is actually "memory fragmentation". I have always thought that memory fragmentation occurred when blocks of allocated memory were not being used at all by the software. I am using all of the blocks I allocate; they just don't appear to be freed in the manner I would like. Furthermore, my queue is only a small portion of my software that performs these kinds of small allocations. Other modules will be making very similar allocations, and I expect even larger memory-reporting inaccuracies and performance hits from them.
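    One way to see this retention from inside the program, rather than through the System Monitor, is glibc's mallinfo() (superseded by mallinfo2() on modern glibc, but mallinfo() is what a 2008-era Debian has): uordblks is the number of bytes actually in use, while fordblks is bytes that have been free()d but are still held by the allocator. A sketch of the 4000-node experiment:

    ```c
    #include <malloc.h>   /* mallinfo -- glibc-specific, int-sized fields */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        enum { N = 4000, SZ = 1518 };
        static void *nodes[N];

        for (int i = 0; i < N; i++)
            nodes[i] = malloc(SZ);
        struct mallinfo before = mallinfo();

        for (int i = 0; i < N; i++)
            free(nodes[i]);
        struct mallinfo after = mallinfo();

        /* uordblks should drop to near zero; fordblks shows how much the
           allocator kept for reuse instead of returning it to the OS. */
        printf("in use:   %d -> %d bytes\n", before.uordblks, after.uordblks);
        printf("retained: %d -> %d bytes\n", before.fordblks, after.fordblks);

        return 0;
    }
    ```

    From the allocator's point of view the memory *is* free; it is only the OS-level numbers (RSS, System Monitor) that do not move.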

    I have thought of various solutions:

    1. Use a different memory allocator
    2. Allocate large chunks of memory ahead of time and manage the memory myself. (This is very difficult because my queue could become very large.)
    3. Adjust malloc's tuning parameters so that it does not use brk(). (I have had no success with this.)
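    A fourth option worth trying before any of the above: glibc's malloc_trim(), which explicitly asks the allocator to hand free pages at the top of the brk heap back to the kernel. It only helps when the free space is actually contiguous at the top of the heap, so a queue with long-lived nodes interleaved among the freed ones can still pin pages, but it is a one-line experiment. A hedged sketch:

    ```c
    #include <malloc.h>   /* malloc_trim -- glibc-specific */
    #include <stdlib.h>

    #define NODES     4000
    #define NODE_SIZE 1518

    int main(void)
    {
        static void *q[NODES];

        for (int i = 0; i < NODES; i++)
            q[i] = malloc(NODE_SIZE);
        for (int i = 0; i < NODES; i++)
            free(q[i]);

        /* Return as much free heap to the OS as possible, keeping zero
           bytes of padding.  Returns 1 if any memory was released. */
        malloc_trim(0);

        return 0;
    }
    ```

    The related mallopt() parameter M_TRIM_THRESHOLD (default 128KB) controls when free() does this trimming automatically.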

    Any help or suggestions would be greatly appreciated.

  2. #2
    Captain Crash brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,270
    So you don't like the default implementation of malloc(). Big deal. Implement your own allocator and move on.
    Code:
    //try
    //{
    	if (a) do { f( b); } while(1);
    	else   do { f(!b); } while(1);
    //}

  3. #3
    Registered User
    Join Date
    Jun 2008
    Posts
    93
    That was less than helpful.

  4. #4
    Captain Crash brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,270
    You are working with a general-purpose operating system with many layers of complexity. If you want precise control over the allocation and deallocation of pages, you need to use mmap() and build your own allocator on top of it.
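    The mechanism brewbuck is describing can be sketched in a few lines: obtain pages directly from the kernel with mmap() and give them back with munmap(), bypassing malloc entirely. One mapping per 1518-byte node wastes most of a 4KB page, so a real design would carve many nodes out of each mapping; this minimal version (names made up here) just shows the primitive:

    ```c
    #include <sys/mman.h>
    #include <stddef.h>

    /* Allocate storage for one node directly from the kernel.  Each call
       consumes at least one whole page, so this is a building block for a
       pooled/slab allocator, not a drop-in malloc() replacement. */
    static void *node_alloc(size_t size)
    {
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return (p == MAP_FAILED) ? NULL : p;
    }

    static void node_free(void *p, size_t size)
    {
        munmap(p, size);   /* pages go back to the OS immediately */
    }

    int main(void)
    {
        void *n = node_alloc(1518);
        if (n)
            node_free(n, 1518);
        return 0;
    }
    ```

    With this approach, freed memory is visible to the rest of the system right away, because munmap() unconditionally releases the pages.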

    You might be able to fix these "anomalies" for your own software, but every other program on the system is also affected. Trying to produce an accurate number by adding up a bunch of inaccurate numbers is a losing proposition.

    The very concept of how much memory is "free" isn't well-defined in a virtual memory environment. You're trying to measure something that can't be measured.

  5. #5
    Registered User
    Join Date
    Jun 2008
    Posts
    93
    This is pretty much the same conclusion I have come to after all the research I have done. However, it blows my mind that a user of the Linux operating system has no way of knowing when the system is actually low on memory. The statistics returned by the System Monitor are completely inaccurate. You would think that at some point someone would try to remedy this whole situation.

  6. #6
    {Jaxom,Imriel,Liam}'s Dad Kennedy's Avatar
    Join Date
    Aug 2006
    Location
    Alabama
    Posts
    1,065
    Nope, Linux users just know how to interpret that data.

  7. #7
    Registered User
    Join Date
    Jun 2008
    Posts
    93
    So, assuming you are a "Linux user", how would you interpret the exact (or as close as possible) amount of memory your Linux system is currently using? Your response seems to negate what was posted just above: first it was said that making this calculation is a losing battle, and now you say that Linux users know how to do it.

  8. #8
    Registered User
    Join Date
    Jun 2008
    Posts
    93
    Didn't realize the last post was from a different person... sorry.

  9. #9
    Registered User
    Join Date
    Jun 2008
    Posts
    93
    I am getting conflicting answers here... can it be done or not?

  10. #10
    Registered User
    Join Date
    Jun 2008
    Posts
    93
    Basically, what I need to know now is how to tell when my system is going to be critically low on memory, given that the calculations I am performing are inaccurate. What calculation should I be performing, since the System Monitor's figure for % memory free is not correct?

  11. #11
    {Jaxom,Imriel,Liam}'s Dad Kennedy's Avatar
    Join Date
    Aug 2006
    Location
    Alabama
    Posts
    1,065
    By looking at it with the naked eye -- using the human brain to gather the data and interpret it? YES. Using a computer program to figure it out -- probably not. There are way too many variables at play. The brain can look at all the data and go -- Hmm, I'm outta memory here. A program, however, can only see things as 1 or 0 -- SO, no, there is no way to do it in code.

  12. #12
    Registered User Codeplug's Avatar
    Join Date
    Mar 2003
    Posts
    4,699
    Just be consistent on whatever numbers you do report. See "ps", "top", and "vmstat" commands.

    As for getting malloc() to behave a certain way, I would consult the manual: The GNU C Library - GNU Project - Free Software Foundation (FSF) (also where mallopt() is documented).
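    Those tools all read the same source, /proc/meminfo, which is also the consistent number to report from a program. On a 2.6 kernel the practically "available" figure is roughly MemFree + Buffers + Cached, since the page cache is reclaimed on demand (much later kernels, 3.14+, expose a better MemAvailable field directly). A hedged sketch that parses it:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Return the value (in kB) of one field from /proc/meminfo,
       or -1 if the field is not found. */
    static long meminfo_kb(const char *field)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[128];
        long kb = -1;
        size_t len = strlen(field);

        if (!f)
            return -1;
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, field, len) == 0 && line[len] == ':') {
                sscanf(line + len + 1, "%ld", &kb);
                break;
            }
        }
        fclose(f);
        return kb;
    }

    int main(void)
    {
        /* MemFree alone understates availability: Buffers and Cached
           are given back to applications under memory pressure. */
        long avail = meminfo_kb("MemFree")
                   + meminfo_kb("Buffers")
                   + meminfo_kb("Cached");
        printf("roughly available: %ld kB\n", avail);
        return 0;
    }
    ```

    This is an approximation, not an exact number, but it is at least the same approximation "free" and "vmstat" report, so the values stay consistent.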

    >> Basically what I need to know now is how do i know when my system is going to be critically low on memory
    Why does your app "need" to know this? What does "critically low" actually mean to you and your app?

    gg

  13. #13
    Captain Crash brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,270
    Quote Originally Posted by Codeplug View Post
    Why does your app "need" to know this? What does"critically low" actually means to you and your app?
    I agree. Let the operating system manage it. Maybe you add up a bunch of numbers and come to the conclusion that you're "out of memory" so you bail. But how do you know the kernel doesn't have some trick up its sleeve that will make everything a-okay? You just gave up for nothing.

    Besides, the fact that you have tons of memory right now doesn't mean you'll still have tons of memory 100 milliseconds from now. Some other program could suddenly allocate a huge chunk. It's like trying to measure the location of an electron.

  14. #14
    Registered User
    Join Date
    Jun 2008
    Posts
    93
    The application I have written allocates a very large amount of memory, and it is entirely possible that it will eventually consume it all. If I can deallocate this memory, or stop allocating more when it reaches a certain threshold, I can prevent this situation from occurring. The problem is that I have no way of actually determining when the system is low, because Linux works so much magic behind the scenes.

  15. #15
    Registered User
    Join Date
    Jun 2008
    Posts
    93
    This software is the only major application on the machine, so it essentially has all of the memory available to it and does not have to worry about other programs allocating a lot of memory. Because of this, I may just begin managing blocks of free memory within my code, using a ring buffer or something similar. Then I would not have to deal with the enormous number of malloc()/free() pairs that are causing me problems. However, this approach involves more overhead on my CPUs, which I am not too fond of given the performance I am trying to squeeze out of the machine.
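    Since the node payload has a fixed upper size (1518 bytes in the tests above), the CPU overhead of self-managed memory can be close to zero: a free-list pool threads freed nodes through their own storage and hands them back out in O(1), avoiding both the malloc/free churn and the retention problem. A sketch under that fixed-size assumption, with all names invented here:

    ```c
    #include <stdlib.h>

    #define NODE_SIZE 1518

    /* A freed node is reused as its own free-list link. */
    typedef union node {
        union node *next;
        char        data[NODE_SIZE];
    } node_t;

    static node_t *free_list = NULL;

    static void *pool_get(void)
    {
        if (free_list) {                 /* reuse a previously freed node */
            node_t *n = free_list;
            free_list = n->next;
            return n;
        }
        return malloc(sizeof(node_t));   /* pool empty: grow it */
    }

    static void pool_put(void *p)
    {
        node_t *n = p;                   /* O(1): push onto the free list */
        n->next = free_list;
        free_list = n;
    }
    ```

    Allocator traffic is then capped at the queue's high-water mark. The pool still never shrinks on its own; if the memory must actually be returned to the OS, the pool's backing storage would need to come from mmap()'d slabs that can be munmap()'d when empty.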
