Memory Fragmentation with Dynamic FIFO Queue
I have recently written the code for a dynamic FIFO queue whose nodes can have variable sizes. I have thoroughly tested the code, and the init/add/remove operations work flawlessly. I have even run it against valgrind to verify that there are no memory leaks when allocating and freeing nodes. I am currently running the program on Debian 5.0 (kernel 2.6.26) with the default OS settings; I have not performed any memory tweaks to tune for performance. All of the allocations for each node in the queue are handled using the default malloc() and free().
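For reference, the queue is structured roughly like this (a simplified sketch with made-up names, not my exact code): one malloc() per node, with the variable-size payload stored inline after the header.

```c
#include <stdlib.h>
#include <string.h>

typedef struct node {
    struct node *next;
    size_t len;
    unsigned char data[];   /* flexible array member: payload follows the header */
} node_t;

typedef struct {
    node_t *head, *tail;
} queue_t;

static void queue_init(queue_t *q) { q->head = q->tail = NULL; }

static int queue_add(queue_t *q, const void *buf, size_t len) {
    node_t *n = malloc(sizeof(node_t) + len);   /* one allocation per item */
    if (!n) return -1;
    n->next = NULL;
    n->len = len;
    memcpy(n->data, buf, len);
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    return 0;
}

static size_t queue_remove(queue_t *q, void *buf, size_t cap) {
    node_t *n = q->head;
    if (!n) return 0;
    size_t len = n->len < cap ? n->len : cap;
    memcpy(buf, n->data, len);
    q->head = n->next;
    if (!q->head) q->tail = NULL;
    free(n);                 /* node freed -- but the heap may not shrink */
    return len;
}
```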
The problem occurs when I add a large number of small items to the queue. I have been watching my memory usage via the System Monitor in Debian, and after adding 4000 items of 1518 bytes each, my memory usage jumps just as expected. However, when I move through the queue and remove each item, the memory usage does not appear to go down. When I run this test again without restarting my program, the system does not allocate any more memory than was allocated in the first run; it appears to reuse memory that was allocated previously. I ran a second test in which, instead of adding a large number of small nodes, I added just 30 items of 10MB each and then emptied the queue again. This time the memory went up and came back down as expected. I have been researching this problem for a few days now and believe I have identified the cause, but I have been unable to decide on a reasonable solution.
Basically, what I have determined is that when I call free() to free my nodes, the memory is not being returned to the operating system. I found a good explanation at the link below.
When Linux Runs Out of Memory - O'Reilly Media
"The decision on whether to use brk() or mmap() requires one simple check. If the request is equal or larger than M_MMAP_THRESHOLD, the allocator uses mmap(). If it is smaller, the allocator calls brk(). By default, M_MMAP_THRESHOLD is 128KB, but you may freely change it by using mallopt().
In the OOM context, how ptmalloc frees memory blocks is interesting. Blocks allocated via mmap() get freed via an unmap() call, and thus become completely released. Freeing blocks allocated via brk() means marking them as free, but they remain under the allocator's control."
Because the small blocks I am allocating are below the mmap threshold, they are allocated via brk(), and brk()-allocated memory is not released back to the OS right away when freed. I assume this is done mostly for performance: the allocator reuses the freed memory for future requests.
This is a big problem for me because the software that uses my queue also reports on the amount of free memory the system currently has. Anomalies like the one described above cause the memory usage updates sent from my program to give a completely inaccurate representation of the amount of memory currently being used. In addition, I have noticed that as I continue to add/remove more and more items, I see a fairly large and rapid performance degradation in my software.

I don't know whether what I described above is actual "memory fragmentation". I have always thought that memory fragmentation occurred when blocks of allocated memory go unused by the software; here I am using all of the blocks I allocate, they just don't appear to be freed in the manner I would like. Furthermore, my queue is only a small portion of my software that performs these kinds of small allocations. I have other modules that will be making very similar allocations, and I expect even larger memory-reporting inaccuracies and performance hits from them.
I have thought of various solutions:
1. Use a different memory allocator
2. Allocate large chunks of memory ahead of time and manage the memory myself. (This is very difficult because my queue could become very large)
3. Adjust malloc's tuning parameters via mallopt() so that it does not use brk() (I have had no success with this)
Any help or suggestions would be greatly appreciated.