Thread: the more new and delete operations, the slower they get?

  1. #1
    Registered User
    Join Date
    Feb 2009
    Posts
    40

    the more new and delete operations, the slower they get?

    Hi all. I was just wondering about something. If I have a program which calls new/delete pretty often, will it become slower over time just because the memory becomes "unordered"? If so, and I then avoid any new and delete operations for a while, will the memory automatically become ordered again by the operating system?


    Thanks in advance.

  2. #2
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by idleman View Post
    Hi all. I was just wondering about something. If I have a program which calls new/delete pretty often, will it become slower over time just because the memory becomes "unordered"? If so, and I then avoid any new and delete operations for a while, will the memory automatically become ordered again by the operating system?


    Thanks in advance.
    What you're describing is called "external fragmentation" and it is indeed a problem. In C++ at least, fragmentation will not be automatically corrected by either the OS or the language itself. Fragmentation is at its worst when there is a wide range of allocated sizes and the requests for allocation/deallocation overlap with each other over time. This can leave memory broken into many small blocks which are not large enough to satisfy commonly-requested allocation sizes. The result is that the overall memory footprint of the program grows even though it is not using that much memory in a useful sense.

    Fragmentation can be addressed by implementing custom memory allocators which take advantage of domain knowledge to optimize the allocation strategy. Another technique is periodic "heap compaction" where in-use objects are identified and moved in memory to reduce the external fragmentation. This isn't trivial, however, since all the pointers to an object which has been moved must be updated.

    One solution which is acceptable in many situations is to save the relevant state of the program to disk, then terminate and restart. This eliminates the external fragmentation which has built up over time.
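    To make the pool idea concrete, here is a minimal sketch of a fixed-size pool (the class name, block size, and layout are invented for illustration, not taken from any real allocator). Because every block is the same size, any freed block fits the next request, so external fragmentation cannot build up:
    Code:
    // Minimal fixed-size pool sketch (hypothetical names; illustration only).
    #include <cstddef>
    #include <vector>

    class FixedPool {
        union Node {
            Node* next;                                 // while free: link in the free list
            alignas(std::max_align_t) char storage[64]; // while in use: the block itself
        };
        std::vector<Node> slab_;  // one big allocation made up front
        Node* free_ = nullptr;    // intrusive list of free blocks
    public:
        explicit FixedPool(std::size_t count) : slab_(count) {
            for (Node& n : slab_) { n.next = free_; free_ = &n; }
        }
        void* allocate() {                  // O(1): pop the free list
            if (!free_) return nullptr;
            Node* n = free_;
            free_ = n->next;
            return n->storage;
        }
        void deallocate(void* p) {          // O(1): push back onto the free list
            Node* n = reinterpret_cast<Node*>(p);
            n->next = free_;
            free_ = n;
        }
    };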

  3. #3
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    There are also better and worse allocators. For example, the classical Linux allocator in glibc was pretty bad, but then they replaced it with a better one. The Win32 default allocator up to WinXP was also bad (Firefox gained a LOT from replacing it with jemalloc), but they implemented a new one in Vista that's considerably better.

    Memory fragmentation is usually only a problem for programs that run at least a few hours, though. If you have one that starts fragmenting memory long before that, you may want to look into your allocations - are they really all necessary? Maybe there's more stuff you can place on the stack.
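    For example (a contrived sketch, names invented):
    Code:
    void process() {
        int scratch[256];            // stack: released automatically, never touches the heap
        int* buffer = new int[256];  // heap: one more allocation for the allocator to manage
        scratch[0] = buffer[0] = 0;  // ... use both ...
        delete[] buffer;
    }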

  4. #4
    Registered User
    Join Date
    Feb 2009
    Posts
    40
    Thanks for your answers!

    I have read a little about it now, but I have two more questions:

    - Why doesn't the operating system sort the heap when it can? Is it impossible, would performance suffer too much, or is it even more complicated than that?

    - The stack cannot be fragmented just because its layout is determined at compile time, right?

    By the way, it looks like I really need a memory pool or something like it in my program, because my program must have the ability to run for at least a week at a time.

  5. #5
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by idleman View Post
    - Why doesn't the operating system sort the heap when it can? Is it impossible, would performance suffer too much, or is it even more complicated than that?
    How could the OS move objects in memory? That would break any pointers to those objects within the application program. The OS would have to be aware of each pointer to a heap-allocated object and adjust these pointers behind the program's back.

    Some garbage collectors actually do that, but having the OS do it itself is an unacceptable intrusion into application space.

    - The stack cannot be fragmented just because its layout is determined at compile time, right?
    The stack can't be fragmented because it's a stack. It always grows or shrinks at one end. Pieces never disappear from the middle of the stack.

    By the way, it looks like I really need a memory pool or something like it in my program, because my program must have the ability to run for at least a week at a time.
    Although fragmentation is a problem, good allocators do quite a lot to manage the problem. Before blaming fragmentation, first prove that there are no memory leaks in the program.
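    On Linux, a checker such as Valgrind (valgrind --leak-check=full ./prog) can help with that; in the code itself, RAII containers rule out the most common leaks by construction. A contrived sketch:
    Code:
    #include <vector>

    void leaky() {
        int* p = new int[99];    // never deleted: leaks 99 ints on every call
        p[0] = 0;
    }

    void leak_free() {
        std::vector<int> v(99);  // freed automatically when v goes out of scope
        v[0] = 0;
    }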

  6. #6
    Registered User
    Join Date
    Feb 2009
    Posts
    40
    I realize now what stupid questions I just asked ;D But you are right, there is no chance in the world the OS could do it.


    Thanks for your answer.

  7. #7
    Registered User
    Join Date
    Oct 2006
    Posts
    3,445
    Quote Originally Posted by idleman View Post
    ...my program must have the ability to run for at least a week at a time...
    A week at a time is nothing. I have a server application that has been running for months without a restart on one of my Linux servers, and even though I don't take any special care to deal with memory fragmentation, I have no trouble with it. It handles hundreds of client connections and well over a thousand requests per second.

  8. #8
    Registered User
    Join Date
    Feb 2009
    Posts
    40
    Hehe, I know a week is nothing, but it also depends very much on what the application does over that time.

  9. #9
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    One of the things that can lead to problems is a program that does something like this:
    Code:
    for (size_t i = 0; i < LARGE_NUMBER; i++)
    {
       /* ... other work ... */
       /* note (i + 1): grows by one element each pass (a size of 0 would free the block) */
       ptr = realloc(ptr, (i + 1) * sizeof(sometype));
       /* a small allocation interleaved with the growing one */
       ptr2 = malloc(small_number);
       /* ... use ptr and ptr2 ... */
    }
    Because every allocation requires a slightly larger size, the heap fills up with little pieces that are just a bit too small for the next allocation. At the same time, the ptr2 allocations may not be carved from the larger pieces freed by ptr's realloc [because the allocator doesn't want to cut up a nice big piece into smaller pieces and eventually end up with a load of fragments that aren't useful at all].

    Since the allocator has to walk through the entire free-list to find that there isn't enough space for the new allocation, it takes longer each time.

    [This is not necessarily EXACTLY how the heap allocator works, but most allocators use some ideas like these.]
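    For illustration, a hypothetical first-fit search over such a free list (not any real allocator's code) shows where the time goes:
    Code:
    #include <cstddef>

    struct FreeBlock { std::size_t size; FreeBlock* next; };

    // Every failed search walks the whole list, so the more fragments
    // the list holds, the longer each miss takes.
    FreeBlock* find_first_fit(FreeBlock* head, std::size_t wanted) {
        for (FreeBlock* b = head; b != nullptr; b = b->next)
            if (b->size >= wanted)
                return b;       // big enough; a real allocator might split it
        return nullptr;         // walked the entire list and found nothing
    }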

  10. #10
    Registered User
    Join Date
    Feb 2009
    Posts
    40
    I understand that, matsp, but thanks.

    But I have a new one: how does the delete operator know how large an allocated block is? Say we write new int[99] and then delete []. How does the delete operator know how much memory to deallocate?

    Thanks in advance

  11. #11
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    The implementation of new[] writes the number of objects to some special place (usually just before the pointer it returns) and the implementation of delete[] reads it.
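    Conceptually it is something like this hand-rolled sketch (the function names are invented, and the real mechanism lives inside the compiler, which may do it quite differently):
    Code:
    #include <cstdlib>
    #include <cstring>

    // Sketch of the "cookie" idea: store the element count just in front
    // of the pointer the program sees.
    void* array_new(std::size_t count, std::size_t elem_size) {
        char* raw = static_cast<char*>(std::malloc(sizeof(std::size_t) + count * elem_size));
        if (!raw) return nullptr;
        std::memcpy(raw, &count, sizeof count);  // stash the count up front
        return raw + sizeof(std::size_t);        // caller sees only the elements
    }

    void array_delete(void* p) {
        char* raw = static_cast<char*>(p) - sizeof(std::size_t);
        std::size_t count;
        std::memcpy(&count, raw, sizeof count);  // recover the count...
        (void)count;                             // ...run that many destructors here
        std::free(raw);                          // then release the whole block
    }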

  12. #12
    Registered User
    Join Date
    Feb 2009
    Posts
    40
    OK, that explains it. Thanks, CornedBee.

  13. #13
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Of course, this is implementation-specific, so you mustn't rely on it.
