Thread: your opinion (not a help question)

  1. #1
    Registered User
    Join Date
    Sep 2011
    Posts
    39

    your opinion (not a help question)

After switching my arrays to dynamic allocation, I think I've seen a speed gain: my code went from about 13 seconds down to 11. Am I imagining this (it could just be that I'm not listening to music while the program runs), or is there something to it? I only noticed it by chance; I wasn't trying to optimise my code.

  2. #2
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    One of the features of code that does dynamic memory allocation is non-deterministic behaviour. So performance can vary over time, with code that does repeated allocations and deallocations, and sometimes there are performance differences between code that uses static and dynamic allocation.

    I wouldn't bet the advantage is totally in favour of code that does dynamic memory allocation though. There are many interacting factors that affect program performance: memory allocation and usage is only one of those factors, particularly with the memory hierarchy of modern systems (multiple caches in processors, available machine registers, amount of virtual memory versus RAM, etc). So it is not possible to make blanket statements about dynamic memory allocation being better (or worse) for performance than, say, static allocation.
    Right 98% of the time, and don't care about the other 3%.

    If I seem grumpy or unhelpful in reply to you, or tell you you need to demonstrate more effort before you can expect help, it is likely you deserve it. Suck it up, Buttercup, and read this, this, and this before posting again.

  3. #3
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by ali.franco95 View Post
After switching my arrays to dynamic allocation, I think I've seen a speed gain: my code went from about 13 seconds down to 11. Am I imagining this (it could just be that I'm not listening to music while the program runs), or is there something to it? I only noticed it by chance; I wasn't trying to optimise my code.
    Here's something I learned recently here; it is specifically to do with the linux kernel, but it might apply in some way to all modern operating systems since it has to do with fundamental issues.

    11-13 seconds is a long time and implies the total size of these arrays is very large. What happens when you do this:

    Code:
    int *array_x = malloc(10000*sizeof(int));
    Is that the kernel assigns the program enough virtual address space to cover the array. This is not actual memory. There are a few reasons to do that:

1) So that the OS can juggle numerous large applications simultaneously which, taken together, might exhaust all of the real physical memory.

2) So that the OS can provide contiguous addresses (the C standard, eg, requires such for arrays), possibly backing them with small, non-contiguous physical blocks if no single free chunk that large actually exists.

If you now examine the uninitialized contents of array_x, it is all zeros. On Linux at least, these zeros are fake: a read of untouched heap memory can be satisfied without any real physical memory being committed.

    Physical memory (actual RAM) comes into play when you write something into the array. At that point, the kernel finds enough physical memory to cover the part written into. That's all. Ie, if the first thing you do is this:

    Code:
    array_x[1001] = 666;
    array_x[7123] = 3000000;
    The kernel will come up with 2 page sized blocks of RAM. A page is the smallest unit of memory the kernel will deal with (4096 bytes is common). At this point, presuming sizeof(int) == 4, array_x represents 40000 bytes of virtual address space, but only 8192 bytes of real memory.

Every time a new page is needed to cover a write, there is some processor overhead. I believe this is why malloc is traditionally considered an expensive call: in the traditional scheme, all of this backing would happen at once (probably more efficient time-wise, but very inefficient RAM-wise). Instead, it happens a page at a time, as allocated space is written into for the first time.

Part of the cost here is that if the write does not cover the entire page, the kernel actually zeroes out the rest of it.* This is why untouched malloc'd memory reads as zero on Linux (and, I believe, other modern OSes): pages freshly handed to a process are wiped first. It does not make calloc() meaningless, though; malloc() can recycle a block your own process freed earlier, and that memory is not re-zeroed, so only pages coming straight from the kernel carry the guarantee. One reason for the zeroing is security: if memory were left in the state its last user put it in, you could get at all kinds of information you maybe should not have access to simply by allocating large chunks of memory and reading the uninitialized data.

    So this might be your issue. There are ways to determine that and to alleviate the problem but they will be platform specific.

    * it could be that this happens when the previous user frees it.
    Last edited by MK27; 10-18-2011 at 07:38 AM.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  4. #4
    [](){}(); manasij7479's Avatar
    Join Date
    Feb 2011
    Location
    *nullptr
    Posts
    2,657
I remember a thread from some months ago in General Discussions (or Tech; I can't find it now, unfortunately) in which someone posted benchmarks on this sort of issue, showing some unusual results.
Ultimately Salem figured out that it was due to cache misses (not completely sure, though; if someone can dig it up, please do).

<It turned out to be in the C forum, and for a different reason; the difference was something I was ignorant of.>
    Last edited by manasij7479; 10-18-2011 at 07:55 AM.

  5. #5
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by manasij7479 View Post
I remember a thread from some months ago in General Discussions (or Tech; I can't find it now, unfortunately) in which someone posted benchmarks on this sort of issue, showing some unusual results.
Ultimately Salem figured out that it was due to cache misses (not completely sure, though; if someone can dig it up, please do).
    If you are thinking of the thread I'm thinking of, it was minor page faults (a minor page fault being what I just described):

    Using less malloc and free makes code _slower_
