malloc/realloc ever returning 0 on modern systems?

This is a discussion on malloc/realloc ever returning 0 on modern systems within the C++ Programming forums, part of the General Programming Boards category.

  1. #1
    Registered User
    Join Date
    Jul 2007
    Posts
    88

    malloc/realloc ever returning 0 on modern systems?

    As far as I know, modern operating systems from Win95 to Vista use swap files when real memory is full. I tried on Windows XP to malloc a much bigger amount of memory than I have, and the system was still stable. It never returned NULL.

    I am currently reading a C book, and one chapter says it may happen that the operating system can't find a free block of that size, so you should then request less memory, for example half. My problem is that this error handling adds a lot of overhead to my code.

    In which cases is this still needed these days? Programming embedded devices?

  2. #2
    Captain Crash brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,159
    Quote Originally Posted by sept View Post
    In which cases is this still needed these days? Programming embedded devices?
    Checking the return value of malloc() is always needed. Assuming it will never return NULL is not appropriate. This isn't "overhead," it is what needs to be done.
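    As a sketch of what that check looks like in practice (the helper name is illustrative, not from the thread):

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative helper: report failure instead of crashing later on a
       NULL dereference. */
    char *checked_alloc(size_t n) {
        char *p = malloc(n);
        if (p == NULL)
            fprintf(stderr, "malloc(%zu) failed\n", n);
        return p;
    }

    int main(void) {
        char *buf = checked_alloc(100);
        if (buf == NULL)
            return EXIT_FAILURE;  /* bail out rather than dereference NULL */
        buf[0] = 'x';             /* safe to use only after the check */
        free(buf);
        return EXIT_SUCCESS;
    }
    ```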

  3. #3
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Your question breaks down into two parts:
    Is it likely that malloc will fail in a small application? No. Is it possible that it can fail under certain circumstances? Most certainly - here's why:

    What if your swap file has already grown to its maximum size?

    What if you try to allocate more memory than the processor can physically address, or more than a single process can hold (2GB in a standard Windows setup)?

    What if you ask for 512MB when there's no contiguous 512MB address range left in that process? [Imagine we have 2GB per process: you allocate 3 x 512MB, then release the "middle" 512MB, allocate another 64MB, then try to allocate 512MB again.]

    There are multiple reasons why malloc/realloc could fail, and the above are just a few of those.

    Edit: As to the overhead of checking for zero: it's very easy for the compiler to check for zero. Compared to the work needed to allocate memory inside malloc, the code to check for NULL is marginal. If it's a large portion of your code, then you are probably doing something wrong (perhaps you should write a "safemalloc" that checks for NULL and reports the problem, assuming you don't want to let the user try to recover from the situation). Also, if you are calling malloc in many different places, there may be better ways to structure your code - but I haven't seen your code, so I can't really say.
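    A minimal sketch of such a "safemalloc" (this variant reports the failure and exits rather than trying to recover; the exact policy is up to you):

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Wrapper: callers never see NULL, so call sites stay uncluttered. */
    void *safemalloc(size_t n) {
        void *p = malloc(n);
        if (p == NULL) {
            fprintf(stderr, "safemalloc: out of memory (%zu bytes)\n", n);
            exit(EXIT_FAILURE);  /* report and bail out in one place */
        }
        return p;
    }

    int main(void) {
        int *arr = safemalloc(64 * sizeof *arr);
        arr[0] = 42;  /* no NULL check needed at the call site */
        free(arr);
        return EXIT_SUCCESS;
    }
    ```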

    --
    Mats
    Last edited by matsp; 09-10-2007 at 03:17 PM.
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  4. #4
    Registered User
    Join Date
    Jul 2007
    Posts
    88
    It's from an open book I am currently reading, http://www.galileocomputing.de/openb...00294F1F03C18C (German). I translated the text.

    Code:
    /* red_mem.c */
    #include <stdio.h>
    #include <stdlib.h>
    #define MIN_LEN 256
    int main(void) {
    	int *ptr = NULL;
    	char jn = 'j';             /* initialised so the loop condition is defined */
    	static size_t len = 8192;  /* amount of memory to request */
    	do {
    		ptr = (int *) malloc(len);
    		/* couldn't allocate the memory */
    		if (ptr == NULL) {
    			len /= 2;  /* let's try with half */
    			ptr = (int *) malloc(len);
    			if (ptr == NULL) {
    				printf("Couldn't allocate memory."
    					" Try again? (j/n): ");
    				scanf("%c", &jn);
    				fflush(stdin);
    			}
    		}
    		if (ptr != NULL)
    			break;  /* success: allocated, leave the loop */
    		/* retry as long as the user doesn't press 'n'
    		   and len is still above MIN_LEN */
    	} while (jn != 'n' && len > MIN_LEN);
    	if (ptr == NULL)
    		printf("Aborted allocating!\n");
    	free(ptr);
    	return EXIT_SUCCESS;
    }
    It seemed a bit too much to me to have to use all this every time I just want to malloc.

    I think checking if it's NULL is no overhead. In case it's NULL, I'd just save all data to disk, show an error message and terminate the program.

    Would this be a good way to react in case it's NULL? I just asked here because I currently have no idea how often this happens in real-world programs on real-world computers.

  5. #5
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    There's generally no right or wrong way to deal with "not able to allocate". If you are allocating a buffer where "if it's smaller, performance will suffer, but otherwise there's no difference", then halving the size until you get something is better than crashing or erroring out because you can't allocate a large buffer. At the same time, it is very unlikely that 8K would ever fail to allocate, so if that fails, it's most likely also failing at 4K, 2K, 1K, 512B and 256B too - because there simply isn't any more memory.

    In other circumstances, you may need 8K, and allocating less simply won't do any good, since you need one contiguous section of 8K - that's it. 7.9K or nothing would be equally bad, because it's 8K of data you need to store! In that case, erroring out is just about the only solution (asking the user if he can stop something else in the system is perhaps an option).
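    The halving strategy could be sketched like this (the names and the MIN_LEN cutoff are illustrative):

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    #define MIN_LEN 256  /* below this, give up: the system is out of memory */

    /* For buffers whose size only affects performance: try the requested
       size, halving on failure, and report what was actually obtained. */
    void *alloc_best_effort(size_t want, size_t *got) {
        size_t len = want;
        while (len >= MIN_LEN) {
            void *p = malloc(len);
            if (p != NULL) {
                *got = len;  /* tell the caller what size we really got */
                return p;
            }
            len /= 2;        /* a smaller buffer is slower but still usable */
        }
        *got = 0;
        return NULL;         /* even MIN_LEN failed: genuinely out of memory */
    }

    int main(void) {
        size_t got;
        void *p = alloc_best_effort(8192, &got);
        if (p == NULL)
            return EXIT_FAILURE;
        printf("allocated %zu bytes\n", got);
        free(p);
        return EXIT_SUCCESS;
    }
    ```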

    Note that fflush(stdin) is an undefined operation, and most likely will not do anything useful. See the FAQ for "Why shouldn't I use fflush(stdin)" and "How do I flush the input" subjects.
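    A common portable replacement for fflush(stdin) is to read and discard the rest of the input line (sketch; the function name is illustrative):

    ```c
    #include <stdio.h>

    /* fflush on an input stream is undefined behaviour; instead, consume
       characters up to and including the newline. */
    void discard_line(FILE *in) {
        int c;
        while ((c = getc(in)) != '\n' && c != EOF)
            ;  /* throw away leftover characters */
    }

    int main(void) {
        /* Demonstrate on a temporary file instead of blocking on stdin. */
        FILE *f = tmpfile();
        if (f == NULL)
            return 1;
        fputs("junk line\nnext", f);
        rewind(f);
        discard_line(f);                     /* skips "junk line\n" */
        printf("next char: %c\n", getc(f));  /* first char after the newline */
        fclose(f);
        return 0;
    }
    ```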

    The most likely scenario for malloc failing is when the system is simply low on memory, and that can happen for just about any reason. Not having free space on the disk may also affect things: if you have an auto-grow swap file and no more space on disk, the swap file may not be able to grow, even if you are below its limit.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  6. #6
    Captain Crash brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,159
    Quote Originally Posted by sept View Post
    It seemed a bit too much to me to have to use all this every time I just want to malloc.
    Wow. Allowing the user the choice to re-try? That's almost unheard of at this level. I think we misunderstood your question, then.

    Is it really necessary to allow a re-try of a failed malloc()? No, not in most cases. But you STILL need to check for NULL.

  7. #7
    Registered User
    Join Date
    Jul 2007
    Posts
    88
    Ok, got it. Thanks a lot.

  8. #8
    Captain Crash brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,159
    Quote Originally Posted by sept View Post
    I think checking if it's NULL is no overhead. In case it's NULL, I'd just save all data to disk, show an error message and terminate the program.
    How are you going to save the data to disk? You are out of memory. You probably can't even open a file at that point, since fopen() needs to call malloc() to get a chunk of memory to create the FILE object.

    What if displaying the error message requires memory to be allocated? Etc...

    If you're trying to allocate, say, 20 megabytes and it fails, there might in theory be 19.99 megabytes available, but that's not something you can determine, or count on.

    One strategy is to allocate a piece of "panic memory" at the very beginning of the program. If you run out of memory, you can free() this chunk, which hopefully will give you back enough memory to clean up and exit gracefully.
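    A hedged sketch of that strategy (the reserve size and names are illustrative):

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* "Panic memory": grabbed at startup, released only when an allocation
       fails, so cleanup code has some memory to work with. */
    static void *panic_reserve = NULL;

    void init_reserve(void) {
        panic_reserve = malloc(64 * 1024);  /* e.g. 64K up front */
    }

    /* Call when malloc fails: free the reserve, then clean up and exit. */
    void release_reserve(void) {
        free(panic_reserve);
        panic_reserve = NULL;
    }

    int main(void) {
        init_reserve();
        if (panic_reserve == NULL)
            return EXIT_FAILURE;  /* couldn't even get the reserve at startup */
        /* ... normal program work; on out-of-memory: release_reserve(),
           report the error, save state, exit gracefully ... */
        release_reserve();
        return EXIT_SUCCESS;
    }
    ```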

  9. #9
    Registered User
    Join Date
    Jul 2007
    Posts
    88
    Oh, that's a very good and valid point. Your suggested solution sounds fine.

  10. #10
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by brewbuck View Post
    One strategy is to allocate a piece of "panic memory" at the very beginning of the program. If you run out of memory, you can free() this chunk, which hopefully will give you back enough memory to clean up and exit gracefully.
    I agree that this is a reasonable solution, but if some other process is allocating memory like mad, then you may still not be able to allocate memory after freeing some up.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  11. #11
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,892
    If the panic memory is on the same page as some other memory you hold (which is pretty likely), the page won't be freed and another program can't steal it.
    Not that a memory page is very large ...
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  12. #12
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by CornedBee View Post
    If the panic memory is on the same page as some other memory you hold (which is pretty likely), the page won't be freed and another program can't steal it.
    Not that a memory page is very large ...
    True indeed. But if your panic memory is "big enough to make a difference", then it's probably a fair bit over 4K (the page size on most processors - some have larger pages, but all x86 processors use 4KB pages) - but that depends on the use of the spare memory.

    Thinking more about it, it's unlikely that a small chunk of freed memory would actually be given back to the OS: since asking the OS to shrink or enlarge the heap is fairly expensive, the runtime library will hold on to freed memory and re-use it, rather than reduce its usage by calling the OS, unless there is a LARGE amount of free memory. This makes such a scheme more workable.

    Unfortunately, if the freed memory area was never "committed", e.g. we just do
    Code:
    char *reserveMemory;
    ...
       reserveMemory = malloc(100000);
    ...
    without actually touching any of that 100K of memory, the memory may well just be "reserved" for this process and not actually backed by real memory. This would mean that if the swap file is full, we're no better off, because we haven't freed any space in the swap file (this memory was never in the swap file, so freeing it just marks a memory range that was never given any real memory as free).

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  13. #13
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,892
    Yeah, so you'd have to fill the memory right after allocating it, e.g. by using calloc.

    And it seems that it would be best not to free the memory at all, but instead use it directly for cleanup tasks.
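    One way to make sure the reserve is really committed (helper name is illustrative; note that on some systems calloc can be satisfied with copy-on-write zero pages, so an explicit write to the memory is the surer bet):

    ```c
    #include <stdlib.h>
    #include <string.h>

    /* Allocate and then touch every byte, forcing the OS to commit real
       backing pages rather than merely reserving address space. */
    void *alloc_committed(size_t n) {
        void *p = malloc(n);
        if (p != NULL)
            memset(p, 0, n);  /* writing dirties each page, so it is backed */
        return p;
    }

    int main(void) {
        char *p = alloc_committed(4096);
        if (p == NULL)
            return EXIT_FAILURE;
        /* every byte is zero and backed by committed memory */
        free(p);
        return EXIT_SUCCESS;
    }
    ```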
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  14. #14
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by CornedBee View Post
    Yeah, so you'd have to fill the memory right after allocating it, e.g. by using calloc.

    And it seems that it would be best not to free the memory at all, but instead use it directly for cleanup tasks.
    Yeah, that would work, except that if you call something like fopen(), it will call the standard malloc rather than use memory from our reserve - it would be fine if you only needed memory for your own internal purposes (you could replace all calls to malloc with your own "safeMallocWithReserve"), but that doesn't cover the library's allocations. In C++ you could replace operator new.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  15. #15
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,892
    On the other hand ... not all systems call malloc for fopen. I'm pretty sure the MS CRT instead has a number of pre-allocated FILE objects and just uses one of those. When it runs out, fopen fails.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law
