MAX Continuous Dynamic Memory

This is a discussion on MAX Continuous Dynamic Memory within the C++ Programming forums, part of the General Programming Boards category.

  1. #1
    Registered User
    Join Date
    Oct 2008
    Posts
    2

    Question MAX Continuous Dynamic Memory

    Hi all,
    I really need some help with the following task.

    Can anyone tell me how to write a program that determines the maximum amount of continuous memory that you can allocate from the C runtime heap on your local system?

    So far I have this:

    Code:
    #include <stdlib.h>
    #include <stdio.h>
    #include <math.h>

    int main()
    {
        int i;
        long size = powl(2, 30);
        long result = -1;
        for (i = 29; i >= 0; i--) {
            size += powl(2, i);
            printf("try %ld ", size);
            char *p;
            if ((p = malloc(size)) != NULL) {
                result = size;
                printf("ok\n");
                *(p + size - 1) = 'a';
                free(p);
            }
            else {
                size -= powl(2, i);
                printf("not ok\n");
            }
        }
        printf("result %ld\n", result);
        return 0;
    }

    but I'm not sure if the memory allocated here is all continuous.
    Thank you in advance,
    Prespa

  2. #2
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    First of all, it's not necessarily easy to determine the maximum amount of memory you can allocate: once you have allocated a small amount, it may not be possible to merge that memory (or all of it) back into a larger block - that will depend on the implementation of the malloc/free functions, amongst other things.

    And of course, "what else is going on in the system" will certainly affect how much memory you can allocate.

    The memory you get from allocating with malloc is always contiguous[1], so the largest size you can allocate in one go is the largest contiguous chunk of memory. In 32-bit Linux, it is 3GB, and in 32-bit Windows, it's 2GB normally, but you can tell Windows to repartition it so that you get 3GB - however, that means that large memory graphics cards may not work in that Windows system.

    By the way, I would use 1 << n instead of pow(2, n).

    [1] In a virtual memory sense. The actual physical memory that this is allocated from could be scattered all over the place, but since everything that happens within your application (and 99% of the OS operations on the memory) only ever sees virtual memory, it won't make any difference to you. The only time physical location, and physically contiguous memory, is necessary is when dealing with hardware - and even then, modern hardware often has "scatter/gather" functionality, so you can give the hard-disk controller a list of memory addresses that it can read the data from.
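    To make the "1 << n instead of pow(2, n)" suggestion concrete, here is a minimal sketch (my own, not from the thread - the function name max_alloc is invented) that probes the largest single malloc() by testing one bit of the size at a time, from a high bit down:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Probe the largest single malloc() by testing one bit of the size
     * at a time, highest bit first - shifts instead of pow(2, n). */
    static size_t max_alloc(void)
    {
        size_t size = 0;
        int bit;
        for (bit = (int)(sizeof(size_t) * 8) - 1; bit >= 0; bit--) {
            size_t attempt = size | ((size_t)1 << bit); /* set one more bit */
            void *p = malloc(attempt);
            if (p != NULL) {
                free(p);
                size = attempt; /* this bit fits - keep it */
            }
        }
        return size;
    }

    int main(void)
    {
        printf("largest single malloc: %zu bytes\n", max_alloc());
        return 0;
    }
    ```

    Bear in mind that on systems that overcommit memory (Linux by default), malloc() can "succeed" for sizes the system cannot actually back with physical memory, so the number reported is about contiguous address space, not RAM.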

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  3. #3
    C++まいる!Cをこわせ! Elysia's Avatar
    Join Date
    Oct 2007
    Posts
    22,452
    And then the usual complaint:
    You need to indent your code. You cannot possibly expect us to read that unreadable code mess!
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  4. #4
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,893
    Of course, from your virtual address space you also have to subtract everything occupied by your program (exe size plus sum of DLL sizes plus alignment - and possibly more if some request a specific address), every thread's stack (1 MB per thread is standard), some more for global data (zero-initialized global data does not contribute to static program size), and who knows what else.
    Then you may get address space partitioning. I don't know how multiple heaps work under Windows, but you typically start out with a process heap, and the CRT creates its own CRT heap. This means an additional block of management data - where is it placed? It might just cut your available continuous address space in half.

    Don't forget, to get a continuous chunk of memory, you really need the continuous address space for it. Everything in there is your enemy.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  5. #5
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,295
    There is no fixed answer. It can be different between programs, and between different invocations of a program. It depends on what else an app allocates memory for, and in what order because memory fragmentation can make a big difference too.
    You can only know how much you can get at once, at the time you want it, in the program that wants it.
    My homepage
    Advice: Take only as directed - If symptoms persist, please see your debugger

    Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"

  6. #6
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    It is correct that the 2-3GB of memory AVAILABLE to an application is not all available to "malloc", because some is taken up by other things, such as the stack, the program itself (and DLL's and other components dragged in as part of the program), other allocations that happen before you "find the largest" malloc, etc.

    To illustrate iMalc's point:
    Code:
    #include <stdlib.h>

    int main()
    {
        char *ptr = malloc(1000 * 1024 * 1024);  /* ~1GB block */
        char *pt2 = malloc(50000);               /* small block above it */
        free(ptr);                               /* 1GB hole, but pt2 stays */
        char *pt3 = malloc(1500 * 1024 * 1024);  /* 1.5GB request */
        ...
    }
    In Windows (not configured for 3GB user space), the memory allocation for pt3 will almost certainly fail: there is a 50000-byte chunk sitting in the middle of memory, so even though a large portion of memory is free after the call to free, there isn't a single 1.5GB contiguous chunk available.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  7. #7
    and the hat of sweating
    Join Date
    Aug 2007
    Location
    Toronto, ON
    Posts
    3,545
    I guess one thing you can try is to malloc() 4GB of memory, check if it failed, and if so, keep trying to malloc() smaller and smaller chunks of memory until it doesn't fail. Then that's how much you can malloc() all at once. It should be fully portable since it doesn't call any OS APIs, but your program may take many hours to find the upper limit.
    "I am probably the laziest programmer on the planet, a fact with which anyone who has ever seen my code will agree." - esbo, 11/15/2008

    "the internet is a scary place to be thats why i dont use it much." - billet, 03/17/2010

  8. #8
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    And if you start with something like 128 bytes under 4GB, and then reduce it by 4KB at a time, you will get there pretty quickly. The reason I say that is that 128 bytes is almost certainly more than the overhead of the malloc() internal data structures, and 4KB is the size of a "block" of memory in most processor architectures' memory management, so the boundary will be a multiple of 4KB pages.
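    As a sketch of that approach (probe_down is a hypothetical helper name of my own, not from the thread): start 128 bytes under 4GB and step down one 4KB page at a time until a malloc() succeeds.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Step down from `start` one `step` at a time until malloc()
     * succeeds; return the first size that worked, or 0 if none did. */
    static size_t probe_down(size_t start, size_t step)
    {
        size_t size;
        for (size = start; size >= step; size -= step) {
            void *p = malloc(size);
            if (p != NULL) {
                free(p);
                return size;
            }
        }
        return 0;
    }

    int main(void)
    {
        /* 128 bytes under 4GB, stepping by the usual 4KB page size */
        size_t biggest = probe_down((size_t)4294967168u, 4096);
        printf("largest single malloc: %zu bytes\n", biggest);
        return 0;
    }
    ```

    The 4294967168 starting value is 2^32 - 128, written as a literal so it also fits a 32-bit size_t without overflowing a shift.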

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  9. #9
    Registered User
    Join Date
    Oct 2008
    Posts
    2

    Smile Thank You

    I would like to thank everyone for taking the time to
    answer my question.

  10. #10
    Registered User
    Join Date
    Oct 2008
    Posts
    5
    @Prespa

    Did you come to any conclusion for that? I am also looking for the same answer.

  11. #11
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by sandylovesgnr View Post
    @Prespa

    Did you come to any conclusion for that? I am also looking for the same answer.
    And what is missing from the answers given in this thread?

    You need to write a program to determine this for any given OS and system configuration combination, since all sorts of different factors will affect it.

    I believe I gave a pretty decent answer on how you would go about writing such an application.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  12. #12
    Registered User
    Join Date
    Oct 2008
    Posts
    5
    Your answer seems good. I had a question: malloc picks a continuous chunk of memory from the heap, but it takes only an unsigned int as a parameter, so I can't do a big malloc at once.

    Should I malloc one buffer, then another buffer, and try to link their addresses?

  13. #13
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    malloc takes a size_t type, which is generally "an unsigned type large enough to hold the largest amount of memory possible for the machine" [I'm sure the wording in the standard document is quite different, but I think my wording is sufficient for you to understand what size_t means]. So on a 32-bit machine, that would be an unsigned 32-bit value - which is fine, since that can hold 4GB, and the maximum memory you can have in a 32-bit process [assuming no funny tricks] is 4GB. In a 64-bit system, size_t is generally 64 bits.

    Of course, if you are using for example Turbo C, then size_t may be a 16-bit unsigned, as 64KB is the largest amount of memory you could allocate at any given time (unless you call huge_alloc() or whatever it was called). But I hope/guess that your compiler isn't "nearly old enough to start to learn to drive".
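    A quick way to check what size_t looks like on your own system is to print its width and maximum value:

    ```c
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* size_t caps how much a single malloc() call can even request */
        printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));
        printf("SIZE_MAX       = %zu\n", (size_t)SIZE_MAX);
        return 0;
    }
    ```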

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  14. #14
    Registered User
    Join Date
    Oct 2008
    Posts
    5
    Thanks for clarifying that again. It helped a lot.

  15. #15
    Registered User
    Join Date
    Oct 2008
    Posts
    5
    I am sorry to bug you again. Is heap memory allocated from physical space, which would depend on RAM? Or does it depend on physical + swap space, with per-process allocation done from physical + swap?
    I am getting some 2.3GB on Ubuntu and my RAM is 2GB, so I was just wondering.
