Thread: Explorations in Structs and sorting.

  1. #16
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    You misunderstand me.
    Using malloc/free will leave you open to lots of memory allocation bugs. That's just life. That said, that doesn't mean that you should avoid them if you need them. But if you can get away with storing things on the stack, then malloc/free is overkill. My point was just, don't use malloc/free unless you have to, because they will increase the complexity of the code. This applies to any programming language.
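    To make that concrete, a minimal sketch (the struct here is invented purely for the example):
    Code:
    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical struct, just for illustration */
    typedef struct {
        int  id;
        char name[32];
    } thing_t;

    int main(void)
    {
        /* stack: no allocation call, released automatically on return */
        thing_t on_stack = { 1, "stack" };

        /* heap: explicit malloc/free, plus the error path you must remember */
        thing_t *on_heap = malloc(sizeof *on_heap);
        if (!on_heap) {
            fputs("out of memory\n", stderr);
            return 1;
        }
        on_heap->id = 2;

        printf("%d %d\n", on_stack.id, on_heap->id);
        free(on_heap);
        return 0;
    }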

    Re: the last statement, I have no idea what you mean.
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  2. #17
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by Elysia View Post
    Re: the last statement, I have no idea what you mean.
    I meant that C++ in general and the STL in particular lead to significant memory bloat relative to C. Even the C++ "hello world" requires 30-50% more memory and this ratio does not reduce with scale. Personally, I don't care that much. "Inefficiency" can be in the mind of the beholder. It depends what is important and what you are willing to pay for ease of use. Memory is for using, after all.

    I agree with what you are saying about the use value of the stack, but the OP needs to understand the difference in order to exercise such a choice, which means learning how memory is allocated in C -- certainly not telling him/her not to bother! Without question, you cannot always put everything on the stack and you would be handicapping jwenting with such an attitude.

    Quote Originally Posted by Elysia View Post
    Using malloc/free will leave you open to lots of memory allocation bugs.
    Again, you are misrepresenting the truth. Using any command in any language can open you to lots of bugs -- this is no more true of malloc and free than anything else.

    Also: if you are not interested in truthfully counseling about a language which you obviously do not use or like, then you are effectively trolling IMO.
    Last edited by MK27; 06-16-2010 at 09:40 AM.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  3. #18
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    I didn't say "don't bother." I said it doesn't have to be on the heap. It can be on the stack. If you can put it on the stack in a real-world application, then do it. If you haven't tried dynamic memory allocation yet, then certainly trying it out is a good idea.
    Also, I am for efficiency. If it means using more memory to scale better and be faster, then so be it.

  4. #19
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by Elysia View Post
    Also, I am for efficiency. If it means using more memory to scale better and be faster, then so be it.
    Since a C++ STL program is unlikely to be significantly more (or less) efficient, speed-wise, than an equivalent C program, I presume by efficiency you mean for the programmer (a.k.a. "cost effectiveness" -- perhaps that is not "scale better and faster", but I'll leave you to your fantasy there). I can empathize; this is why I love perl -- which is way more efficient than either of them in this sense, but it does love using memory.

    Vis-à-vis the purported inefficiencies of malloc/free, I just did a little test:
    Code:
    #include <stdio.h>
    #include <stdlib.h>
    
    int main(int argc, const char *argv[]) {
    	int i, items;
    	char **p;
    
    	if (argc < 2) return 1;                 /* need a count on the command line */
    	items = atoi(argv[1]);
    
    	p = malloc(items * sizeof(char *));
    	if (!p) return 1;
    
    	for (i = 0; i < items; i++) p[i] = malloc(1024);   /* allocate "items" 1k blocks */
    	for (i = 0; i < items; i++) free(p[i]);            /* and free them all again */
    	free(p);
    
    	return 0;
    }
    I used argv so that "items" must be determined at runtime. 2x2.2 GHz, Linux 2.6:

    [root~/C/projects] time ./a.out 100000
    real 0m0.122s
    user 0m0.021s
    sys 0m0.101s


    Okay, so that allocated and freed 100,000 1k blocks (about 100 MB) in about 1/10th of a second.

    [root~/C/projects] time ./a.out 10000
    real 0m0.016s
    user 0m0.002s
    sys 0m0.010s


    Or 10,000 1k blocks (about 10 MB) in 1/100th of a second. So this is "inefficient"?

    I'm starting to believe you just talk straight off the top of your head a lot, Elysia.
    Last edited by MK27; 06-16-2010 at 12:36 PM.

  5. #20
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Sigh. Seriously now...
    What is more efficient? Deleting elements in the middle of an array or a linked list?
    The answer is going to be a linked list. It doesn't matter if it's 100 elements or 10^100.
    It is always more efficient to use a linked list when deleting in the middle.
    That said, the speed difference with 10 elements will probably be negligible, so in terms of real-world speed it is acceptable. But if it scales, it will probably become unacceptable. There is a reason why we use big-O (Ordo) notation to measure algorithm cost rather than empirical time.
    The same applies for malloc and free. If you're going to allocate lots and lots of small blocks, it's going to be inefficient. Perhaps the time it takes to perform the allocations in real-world time is negligible, but that doesn't change the fact that it's still inefficient.
    Try allocating 1 block of 100 MB and 10 * 2^20 blocks of 1 byte.
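    Something along these lines would do it (a rough sketch using clock(); the absolute numbers will vary with the allocator and build settings):
    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define BIG    (100 * 1024 * 1024)  /* one 100 MB block */
    #define NSMALL (10 * 1024 * 1024)   /* 10 * 2^20 one-byte blocks */

    int main(void)
    {
        char **small;
        void *big;
        clock_t t0, t1;
        size_t i;

        t0 = clock();
        big = malloc(BIG);
        free(big);
        t1 = clock();
        printf("one big block:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

        small = malloc(NSMALL * sizeof *small);   /* pointer table, not timed */
        if (!small)
            return 1;
        t0 = clock();
        for (i = 0; i < NSMALL; i++)
            small[i] = malloc(1);
        for (i = 0; i < NSMALL; i++)
            free(small[i]);
        t1 = clock();
        printf("many tiny blocks: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

        free(small);
        return 0;
    }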

    EDIT:
    Case in point:
    Code:
    #include <stdlib.h>
    #include <Windows.h>
    #include <stdio.h>
    
    void AllocByStack()
    {
    	char Stack[500*1024];	/* unused buffer; an optimizing build may remove it entirely,
    				   which is why these figures come from a debug build */
    }
    
    void AllocByMalloc()
    {
    	void* pMemory = malloc(500 * 1024);
    	free(pMemory);
    }
    
    int main()
    {
    	/* timeGetTime() requires linking with winmm.lib */
    	DWORD StackStart = timeGetTime(), StackEnd;
    	for (int i = 0; i < 1000000; i++)
    		AllocByStack();
    	StackEnd = timeGetTime();
    	printf("Allocate by stack took: %lu ms.\n", StackEnd - StackStart);
    
    	DWORD MallocStart = timeGetTime(), MallocEnd;
    	for (int i = 0; i < 1000000; i++)
    		AllocByMalloc();
    	MallocEnd = timeGetTime();
    	printf("Allocate by malloc took: %lu ms.\n", MallocEnd - MallocStart);
    	printf("Allocate by malloc is %f times slower. That is, IT IS INEFFICIENT!\n", (float)(MallocEnd - MallocStart) / (StackEnd - StackStart));
    	return 0;
    }
    Output:
    Allocate by stack took: 375 ms.
    Allocate by malloc took: 86166 ms.
    Allocate by malloc is 229.776000 times slower. That is, IT IS INEFFICIENT!
    (Caveat: these numbers are from a debug build.)
    Last edited by Elysia; 06-16-2010 at 01:09 PM.

  6. #21
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by Elysia View Post
    Sigh. Seriously now...
    What is more efficient? Deleting elements in the middle of an array or a linked list?
    The answer is going to be a linked list. It doesn't matter if it's 100 elements or 10^100.
    It is always more efficient to use a linked list when deleting in the middle.
    Sure, but I don't see what that has to do with the "efficiency of malloc". Deleting an element in the middle of an array is awkward/inefficient regardless of whether it's on the stack or the heap.

    There is a reason why we use big-O (Ordo) notation to measure algorithm cost rather than empirical time.
    Well, if you use empirical time and keep scaling up, they tend to correspond. Eventually you will run out of memory and then your scale is complete. Empirical time is more accurate than big O, but big O is a more "portable" measure and contains both more information and less: if a function works in O(n) time, that tells us nothing about the duration of one "n".

    The same applies for malloc and free. If you're going to allocate lots and lots of small blocks, it's going to be inefficient.
    100,000 1k blocks is lots of small blocks. Yes, of course one 100 MB allocation is more efficient. This is what I meant by knowing what you are doing. It does not mean "don't use malloc".

    Perhaps the time it takes to perform the allocations in real-world time is negligible, but that doesn't change the fact that it's still inefficient.
    Ah, but if what you need is lots of little blocks, there are no other choices, so that criticism is pointlessly absurd -- it amounts to saying don't use any memory or even turn the computer on because it will have to execute a kabillion little operations, which is very inefficient. It should just do one, which does everything.
    Last edited by MK27; 06-16-2010 at 01:25 PM.

  7. #22
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    This isn't about "don't use malloc." It's about using malloc when needed. Using it when not needed will hurt efficiency.
    Some algorithms are just darn slow, and that's the way it is. It cannot be changed. Now, say that our algorithm A is malloc and our algorithm B is the stack. B is always faster than A, no matter what, but B cannot be used in all situations. When it can, it should be preferred. When it cannot be used, we must fall back on algorithm A. That is all I am saying.

  8. #23
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by Elysia View Post
    When it cannot be used, we must fall back on algorithm A. That is all I am saying.
    Okay, but you certainly did not explain that before:

    Quote Originally Posted by Elysia View Post
    But malloc/free are inefficient anyway. Plus they add bugs. So best avoid them if you can
    Which, considering the OP has never used malloc and I, Bayint Naung, and nonoob all suggested s/he do, kind of sounds like you are trying to say that is a bad move?!??! Also, in context, the "inefficiency" of using malloc will be more than made up for by the swap-address vs. copy-struct detail.

    Anyway, I might have overreacted, but I have been trying to implement a mempool to beat malloc/free this week and have so far failed, so your comment about the "inefficiency" of this C function seemed very off the mark IMO.

    Just noticed this lib tho:

    PJLIB Reference: Fast Memory Pool

    which claims "The result of PJLIB's memory design and careful implementation is a memory allocation strategy that can speed-up the memory allocations and deallocations by up to 30 times compared to standard malloc()/free() (more than 150 million allocations per second on a P4/3.0GHz Linux machine)." I guess applying "big O" notation here still implies some inefficiency because 150 million is 150 million, although in "empirical time" it's 1 second. Hmmm.
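    For the curious, the core idea behind such a pool is roughly this -- a toy "bump" allocator sketch, nothing like PJLIB's actual implementation:
    Code:
    #include <stdlib.h>

    /* Toy bump pool: carve allocations out of one big malloc'd block
     * and release everything at once. No per-allocation free(). */
    typedef struct {
        char  *base;
        size_t used;
        size_t size;
    } pool_t;

    pool_t *pool_create(size_t size)
    {
        pool_t *p = malloc(sizeof *p);
        if (!p) return NULL;
        p->base = malloc(size);
        if (!p->base) { free(p); return NULL; }
        p->used = 0;
        p->size = size;
        return p;
    }

    void *pool_alloc(pool_t *p, size_t n)
    {
        void *mem;
        n = (n + 7) & ~(size_t)7;      /* round up to keep 8-byte alignment */
        if (p->used + n > p->size)
            return NULL;               /* pool exhausted */
        mem = p->base + p->used;
        p->used += n;
        return mem;
    }

    void pool_destroy(pool_t *p)
    {
        free(p->base);
        free(p);
    }

    The win comes from replacing thousands of individual free() calls with a single pool_destroy(); the trade-off is that individual blocks can't be released early.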

  9. #24
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Quote Originally Posted by MK27 View Post
    Also, in context, the "inefficiency" of using malloc will be more than made up for by the swap address vs. copy struct detail.
    No, the gain comes from using pointers, in any form.
    Certainly, moving malloc'd pointers around instead of copying is faster, but you could just as well put the data on the stack and store pointers to it to get the same speedup. It's just another way.
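    For example, something along these lines (the personal_info fields here are stand-ins, not the OP's actual struct):
    Code:
    #include <stdio.h>
    #include <stdlib.h>

    /* stand-in for the thread's struct; field names are assumptions */
    typedef struct {
        char name[32];
        int  age;
    } personal_info;

    /* qsort comparator that sorts the *pointers*, not the structs */
    static int cmp_age(const void *a, const void *b)
    {
        const personal_info *pa = *(personal_info * const *)a;
        const personal_info *pb = *(personal_info * const *)b;
        return pa->age - pb->age;
    }

    int main(void)
    {
        /* the data itself lives on the stack ... */
        personal_info client[3] = {
            { "Wooley, Bill", 26 }, { "Smith, Jane", 28 }, { "Doe, John", 19 }
        };
        /* ... and only these pointers get shuffled by the sort */
        personal_info *sorted[3] = { &client[0], &client[1], &client[2] };
        int i;

        qsort(sorted, 3, sizeof *sorted, cmp_age);
        for (i = 0; i < 3; i++)
            printf("%s (%d)\n", sorted[i]->name, sorted[i]->age);
        return 0;
    }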

    which claims "The result of PJLIB's memory design and careful implementation is a memory allocation strategy that can speed-up the memory allocations and deallocations by up to 30 times compared to standard malloc()/free() (more than 150 million allocations per second on a P4/3.0GHz Linux machine)." I guess applying "big O" notation here still implies some inefficiency because 150 million is 150 million, although in "empirical time" it's 1 second. Hmmm.
    Well, I suppose Big-O notation would be a poor fit here, since both malloc/free and the memory pool would simply be O(1).

  10. #25
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by Elysia View Post
    Well, I suppose Big-O notation would be a poor fit here, since both malloc/free and the memory pool would simply be O(1).
    Yep.

  11. #26
    ATH0 quzah's Avatar
    Join Date
    Oct 2001
    Posts
    14,826
    Quote Originally Posted by Elysia View Post
    Sigh. Seriously now...
    What is more efficient? Deleting elements in the middle of an array or a linked list?
    The answer is going to be a linked list. It doesn't matter if it's 100 element or 10^100.
    It is always more efficient to use a linked list when deleting in the middle.
    Efficient how?

    Efficient to find the element you want?
    -- Array wins.

    Efficient to remove the element you want?
    -- The array wins. You can't actually "delete" anything from an array, so the array is going to win here too, because all you have to do is set it to the value of whatever it is you consider 'deleted', and you're done.

    -- Oh, you meant moving all of the array elements around? Well, that's not what you're saying, but it's what you meant. Even then, the deletion code for an array element is more efficient to write as well, even compared with a doubly linked list.
    Code:
    /* "type" stands for the array's element type; "node" is a doubly
       linked list node with prev/next pointers and a global "head". */
    void delete_from_array( type array[], size_t len, size_t d )
    {
        /* note: only len - d - 1 elements sit after index d */
        memmove( array + d, array + d + 1, (len - d - 1) * sizeof( type ) );
    }
    
    void delete_from_list( node *n )
    {
        if( n )
        {
            if( n == head )
                head = n->next;
            if( n->prev )
                n->prev->next = n->next;
            if( n->next )
                n->next->prev = n->prev;
            free( n );
        }
    }
    The only real savings here are if you consider 'deleting' from an array as 'moving everything around, and keeping track of the current length'. But that's not the only way to consider an array element deleted.
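    For instance, a sketch of that kind of "lazy" deletion (assuming some sentinel value, here -1, never occurs as real data):
    Code:
    #include <stddef.h>
    #include <stdio.h>

    /* lazy deletion: mark the slot instead of shifting anything */
    #define DELETED (-1)

    void delete_lazy(int array[], size_t d)
    {
        array[d] = DELETED;              /* O(1), no memmove */
    }

    void print_live(const int array[], size_t len)
    {
        size_t i;
        for (i = 0; i < len; i++)
            if (array[i] != DELETED)     /* skip the "deleted" slots */
                printf("%d ", array[i]);
        putchar('\n');
    }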

    [Edit]
    Even if you are talking about actually moving things around: if you include the time it takes to get to where you're going (to find the node you want deleted) in a singly linked list, an array could be fairly close, and it gets even closer the further you have to search to find your spot:
    Code:
    void delete_from_slist( node *n )
    {
        if( n )
        {
            if( head == n )
                head = n->next;
            else
            {
                /* walk the singly linked list to find n's predecessor */
                node * l;
                for( l = head; l && l->next != n; l = l->next );
    
                if( l )
                    l->next = n->next;
            }
            free( n );
        }
    }
    
    void delete_from_array( type array[], size_t len, size_t d )
    {
        /* this is all memmove does (stop before reading past the end) */
        type *des, *sou;
        for( des = array + d, sou = array + d + 1; sou < array + len; *des++ = *sou++ );
    }
    So all you're doing is two pointer moves instead of one. Which means that the longer the search to find your node to remove, the better an array works out. It'll add up if you do this lots and lots of times, but if you're doing that lots of times, you probably just want to keep track of what's unused anyway, and not bother freeing/allocating/moving at all.

    [/Edit]

    Quzah.
    Last edited by quzah; 06-16-2010 at 03:16 PM.
    Hope is the first step on the road to disappointment.

  12. #27
    Registered User jimtuv's Avatar
    Join Date
    May 2010
    Location
    Sylvania, Ohio
    Posts
    94
    OK, I only had a little time to code today, so I fixed the lack of an initializing function. I set up:

    Code:
    void Init_person(personal_info *client, char name[NAME_SIZE],
                     int feet, double inches, int weight, int age);
    It's a little different than Bayint Naung suggested, only in that I changed the name from a string literal to a string array (thank you MK27) and broke the height down into feet and inches. So the use would look like this now:

    Code:
    Init_person(&client[0], "Wooley, Bill", 5, 11.5, 185, 26);
    Init_person(&client[1], "Smith, Jane",  5, 2.45, 121, 28);
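    For anyone following along, a body along these lines would satisfy that prototype (the NAME_SIZE value and the personal_info field names here are assumptions, not necessarily the actual code):
    Code:
    #include <string.h>

    #define NAME_SIZE 32                /* assumed value */

    typedef struct {                    /* assumed field names */
        char   name[NAME_SIZE];
        int    feet;
        double inches;
        int    weight;
        int    age;
    } personal_info;

    void Init_person(personal_info *client, char name[NAME_SIZE],
                     int feet, double inches, int weight, int age)
    {
        strncpy(client->name, name, NAME_SIZE - 1);
        client->name[NAME_SIZE - 1] = '\0';   /* strncpy may leave it unterminated */
        client->feet   = feet;
        client->inches = inches;
        client->weight = weight;
        client->age    = age;
    }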
    Quote Originally Posted by Elysia View Post
    You don't necessarily have to use malloc/calloc. So long as the data is still valid when you're searching, you can just have the actual data on the stack. Pointers can point anywhere - it just has to be a valid memory location.
    You only need a pointer to the actual struct. Then you implement some semantics on how the structs are actually sorted. You would only move the pointers around, so no data is copied anyway if you use pointers to the structs only.
    I have been thinking about this and the debate that it brought up. The exchange has been enlightening and I will have to take some time to study all the aspects that were brought up. But the bottom line, I think, is that for me right now the best bet would be to learn to use malloc/free and then worry about optimization once I have a working knowledge of exactly how memory management is handled.

    However my personality tends to drift toward taking risks so I am inclined to try out things that I probably shouldn't. And the idea of using the stack is tempting.

    But for now I will behave myself and stick to the more basic design techniques.

    Quote Originally Posted by MK27 View Post
    In fact it is not very different than your existing code:
    Code:
    	for (i=0;i<10;i++) {
    		client[i] = malloc(sizeof(personal_info));
    	/* error handling */
    		if (!client[i]) {
    			puts("OUT OF MEMORY!");
    			return -1;
    		}
    	}
    Wow that is simple! Thanks for the example.

    Quote Originally Posted by MK27 View Post
    but the OP needs to understand the difference in order to exercise such a choice, which means learning how memory is allocated in C

    Trust me, I am very aware of the need to know how memory is allocated. From what I have read so far, when the program first starts, memory is allocated for all the local variables and arrays found in main in a static block of memory; the "heap", or dynamic memory, sits just above that, is used for allocated memory, and grows upward. Up in high memory is an area, "the stack", that holds things like return pointers for functions and the functions' local variables.

    On a function call, the return address is pushed onto the stack along with the arguments for the function. As the function begins to run, its local variables are pushed onto the stack; when the function is complete, they are popped off along with the return address, the return value is pushed onto the stack, and the program picks up from where it was called.

    The stack grows downward toward lower memory.

    I hope I didn't mess that up too badly.
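    One tiny experiment that makes this concrete (the exact layout is implementation-defined, so treat the printed addresses as illustrative only):
    Code:
    #include <stdio.h>
    #include <stdlib.h>

    static int a_static;                        /* static storage */

    int main(void)
    {
        int a_local;                            /* stack */
        int *a_heap = malloc(sizeof *a_heap);   /* heap */

        printf("static: %p\n", (void *)&a_static);
        printf("heap:   %p\n", (void *)a_heap);
        printf("stack:  %p\n", (void *)&a_local);

        free(a_heap);
        return 0;
    }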

    Well I am very tired from a long day. I know I have not replied to all of what was said so far but I have a lot to study about what was brought up. I don't want anyone to think that I have ignored them. I am listening very intently to everyone and carefully going over each point. I do thank you all for your help and patients!!
    Last edited by jimtuv; 06-16-2010 at 10:36 PM.

  13. #28
    Registered User
    Join Date
    Sep 2006
    Posts
    8,868
    "... for your help and patients" << However ill they may be! < couldn't resist! >

  14. #29
    Registered User
    Join Date
    May 2010
    Location
    Naypyidaw
    Posts
    1,314
    Code:
    void Init_person(personal_info *client, char name[NAME_SIZE],
                     int feet, double inches, int weight, int age);
    // and 
    void Init_person(personal_info *client,char *name,int feet,double inches,int weight,int age);
    I hope you realize there's no difference?
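    A quick way to convince yourself (a sketch; sizeof is the classic giveaway that the "array" parameter is really just a pointer, and a compiler may even warn about it):
    Code:
    #include <stdio.h>

    #define NAME_SIZE 32

    /* both declarations mean exactly the same thing: name is a char* */
    void by_array(char name[NAME_SIZE]) { printf("%zu\n", sizeof name); }
    void by_ptr(char *name)             { printf("%zu\n", sizeof name); }

    int main(void)
    {
        char buf[NAME_SIZE];
        printf("%zu\n", sizeof buf);   /* 32: a real array */
        by_array(buf);                 /* sizeof a pointer (e.g. 8) */
        by_ptr(buf);                   /* same */
        return 0;
    }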

    >> What is more efficient? Deleting elements in the middle of an array or a linked list?
    It just depends.

  15. #30
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    It should be const char* if you're passing string literals, for reasons mentioned before.
