Thread: Memory Fragmentation with Dynamic FIFO Queue

  1. #16
Registered User Codeplug
    Join Date
    Mar 2003
    Posts
    4,981
    >> If I can ... I can prevent this situation from occurring.
    What "situation" are you trying to prevent?

    >> Then I would not have to deal with the numerous amounts of malloc() and free() pairs that are causing me problems.
    I'm not clear on what the "problems" are.

    Why not just check for a 0 return from malloc and react then? Zero from malloc is a good indicator that things are getting tight.
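    For example (just a sketch - alloc_item() is a made-up name here, and the "reaction" is whatever makes sense for your program):

        #include <stdio.h>
        #include <stdlib.h>

        /* Sketch: let a NULL return from malloc tell you memory is getting
           tight, and react before things fall over. */
        void *alloc_item(size_t size)
        {
            void *p = malloc(size);
            if (p == NULL) {
                /* React here: drop or flush old queue entries, report the
                   failure, retry with a smaller request, etc. */
                fprintf(stderr, "malloc(%zu) failed - memory is tight\n", size);
            }
            return p;
        }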

    The malloc/free implementation in glibc has been optimized by a lot of smart folks. Adding "variable-sized chunk management" code on top of large blocks returned by malloc doesn't make much sense in my mind.

    gg

  2. #17
    Registered User
    Join Date
    Jun 2008
    Posts
    93
    OK. I have run some tests, and what I found is that the 128KB threshold mentioned in the quote above is either inaccurate or outdated. The allocator only holds onto a chunk of memory when its total size is less than or equal to 32672 bytes. That is essentially 4 pages worth of memory with a 96 byte gap, which I assume is tied up in some kind of allocator overhead. I have tried this on two different platforms and versions of Linux, and it seems to hold true.

    Based on this, I think I will just write a wrapper around my malloc() and free() calls that keeps track of how much memory my program has allocated. In the wrapper I can decide, based on the size of the chunk being allocated, how it should count toward my program's total memory usage (i.e. whether the allocator will hold onto it or hand it straight back to the OS). I would rather not go to the trouble of managing my own allocator, due to current time constraints. It is not absolutely imperative that I know the exact memory usage, but I would like to be within roughly 1%-5% of it, and I think this approach will get me there. What are your thoughts on this approach?
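    To be concrete, here is a rough sketch of the wrapper I have in mind (tracked_malloc/tracked_free are just placeholder names, and the 32672 cutoff is the value I measured, not anything documented):

        #include <stdlib.h>

        #define KEEP_CUTOFF 32672u      /* measured cutoff - an assumption */

        static size_t large_live = 0;   /* outstanding chunks above the cutoff */
        static size_t small_live = 0;   /* outstanding chunks at or below it   */
        static size_t small_peak = 0;   /* high-water mark of the small chunks */

        void *tracked_malloc(size_t size)
        {
            void *p = malloc(size);
            if (p == NULL)
                return NULL;
            if (size > KEEP_CUTOFF) {
                large_live += size;
            } else {
                small_live += size;
                if (small_live > small_peak)
                    small_peak = small_live;
            }
            return p;
        }

        /* The caller has to remember the size it originally asked for. */
        void tracked_free(void *p, size_t size)
        {
            free(p);
            if (size > KEEP_CUTOFF)
                large_live -= size;     /* assumed handed straight back to the OS */
            else
                small_live -= size;     /* allocator keeps it cached; peak unchanged */
        }

        size_t estimated_footprint(void)
        {
            /* Rough figure: peak of the cached small chunks plus whatever
               large chunks are currently live.  Ignores per-chunk overhead. */
            return small_peak + large_live;
        }

    The idea is that freed small chunks stay cached in the allocator, so the small-allocation high-water mark stands in for the arena footprint, while chunks above the cutoff only count while they are live.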

    By the way, I ran into an interesting scenario while trying to pin down the 32672 threshold mentioned above. My test involved storing 50,000 items of 32673 bytes each into a queue, each item allocated with malloc() of course. What I noticed is that this scenario drives one of my CPU cores to 100% usage and takes roughly 5-10 times longer to allocate all of the items than the same test with 50,000 items of 32672 bytes each. I am assuming this is because each item just over the threshold is forced to spill over into one more page, with only a byte or two of data landing in that extra page, but I don't know why that would slow things down so much. I also tried the test at 32674 bytes and got the same results, so I guess there is a tremendous amount of overhead when the allocator has to put just a few bytes in the last page needed to satisfy the allocation. Any thoughts on this scenario?
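    For reference, the timing part of the test looked roughly like this (a simplified reconstruction - the real code pushes the items into my FIFO queue rather than a plain array):

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define ITEM_COUNT 50000
        #define ITEM_SIZE  32673        /* try 32672 vs 32673                    */
                                        /* note: 50,000 x ~32 KB is about 1.6 GB */

        int main(void)
        {
            static void *items[ITEM_COUNT];
            clock_t start = clock();

            for (int i = 0; i < ITEM_COUNT; i++) {
                items[i] = malloc(ITEM_SIZE);
                if (items[i] == NULL) {
                    fprintf(stderr, "malloc failed at item %d\n", i);
                    return 1;
                }
            }

            printf("allocated %d items of %d bytes in %.2f s of CPU time\n",
                   ITEM_COUNT, ITEM_SIZE, (double)(clock() - start) / CLOCKS_PER_SEC);

            for (int i = 0; i < ITEM_COUNT; i++)
                free(items[i]);
            return 0;
        }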

    Thanks for all of your replies.

  3. #18
    Registered User
    Join Date
    Nov 2008
    Posts
    75
    I'm not sure I understand the problem here, because I didn't read the whole thread, but from what I read in the first post, have you thought about using __libc_freeres?
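    Something like this right before exit, if I remember right (glibc-specific, there is no header for it, and I haven't tried it against your code - treat it as an untested sketch):

        /* __libc_freeres() is a glibc-internal routine (leak checkers such as
           valgrind use it) that releases memory glibc itself is holding on to.
           It is meant to be called once, just before the process exits. */
        extern void __libc_freeres(void);

        int main(void)
        {
            /* ... the rest of the program ... */
            __libc_freeres();   /* hand glibc's internal allocations back */
            return 0;
        }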

