Thread: Problem with linked list and shared memory

  1. #1
    Registered User
    Join Date
    Nov 2008
    Posts
    4

    Problem with linked list and shared memory

    Hello all!

    First time poster here. I have a problem and I'm at my wit's end on how to solve it. Basically I'm trying to put together a doubly linked list whose head is in a shared memory segment. I update its contents when the list is first initialized and when a second node is introduced (the ->next field), but never again after that; everything else is done by malloc.

    My problem is that although I can run through the list in my first process (the one that initialized the list) by referencing the shared memory segment, I cannot do it in my second process. There is some kind of pointer/address issue stopping my second process from finding further nodes down the list. All I'm able to find are the contents of the first node, which are put directly into the shared memory segment. This is what doesn't make sense to me: the int values I insert there I can access, but the pointer, which was pointing to a malloc'd node, points nowhere in the second process.

    This is the code for my node insertion function:

    Code:
    void armazenar_ordenar_requisicoes(struct msgbuf_driver req, struct chs chs_passado) {

        node_linked_list *run, *aux;

        if (head->exists == 0) {
            /* list is still empty: fill in the head node, which lives
               directly in the shared memory segment */
            printf("head is null\n");
            head->next = 0;
            head->previous = 0;
            head->requisicao = req;
            head->chs_rec = chs_passado;
            end = head;
            head->exists = 1;

        } else {
            /* walk to the last node */
            run = head;
            while (run->next != 0) {
                run = run->next;
            }

            /* create a new node at the end of the list */
            run->next = malloc(sizeof(node_linked_list));
            aux = run;
            run = run->next;

            if (run == 0) {
                printf("Out of memory");
                return;
            }

            run->next = 0;
            run->previous = aux;
            run->requisicao = req;
            run->chs_rec = chs_passado;

            end = run;
        }

        orderlist();
    }
    "head" is the shared memory segment, and is of type node_linked_list:

    Code:
    struct node {
        struct node *next;
        struct node *previous;
        struct msgbuf_driver requisicao;
        struct chs chs_rec;
        int exists;
    };

    typedef struct node node_linked_list;
    I run through the list in both processes (which were created through fork() and then each started running its own procedure).

    This is the output I get from one process:

    data on list:13 15 24 33 45

    And from the second:

    data on list:13 0

    Any help would be greatly appreciated.

  2. #2
    Registered User Codeplug
    Join Date
    Mar 2003
    Posts
    4,981
    If you expect both processes to have access to the entire list, then the entire list needs to be fully created prior to fork (so it can be duplicated); otherwise the entire list needs to be in shared memory. After you fork, whatever you malloc in one process is not accessible by the other processes (child or parent).
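
    For example, here's a tiny standalone sketch (not your code) showing that memory which isn't in a shared segment is private after fork() - a write in the child never shows up in the parent:

    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int *p = malloc(sizeof *p);   /* ordinary heap memory */
        *p = 1;

        if (fork() == 0) {            /* child */
            *p = 42;                  /* modifies the CHILD's copy only */
            _exit(0);
        }

        wait(NULL);
        printf("%d\n", *p);           /* parent still prints 1, not 42 */
        free(p);
        return 0;
    }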

    gg

  3. #3
    Registered User
    Join Date
    Nov 2008
    Posts
    4
    Thanks for the reply. How would you do it if the list had to be altered after the processes forked, and fully accessed by both processes? Keep the whole thing in shared memory?

  4. #4
    Registered User Codeplug
    Join Date
    Mar 2003
    Posts
    4,981
    Yes. If you want a single list that all processes can access, then the whole thing needs to be in shared memory.

    Remember that when you fork a process, it gets a *copy* of the parent process's malloc'd memory. Any shared memory segments attached prior to fork will get mapped into the child process as well - so that memory stays shared (one segment, not a copy).
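
    A minimal standalone sketch of the difference (made-up example, most error checking omitted):

    Code:
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* one segment, created and attached BEFORE fork() */
        int shmid   = shmget(IPC_PRIVATE, sizeof(int), IPC_CREAT | 0666);
        int *shared = shmat(shmid, NULL, 0);

        *shared = 1;
        if (fork() == 0) {
            *shared = 42;          /* child writes into the shared segment */
            _exit(0);
        }

        wait(NULL);
        printf("%d\n", *shared);   /* parent sees 42 - same memory */
        shmdt(shared);
        shmctl(shmid, IPC_RMID, NULL);
        return 0;
    }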

    gg

  5. #5
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by Sirfabius
    Thanks for the reply. How would you do it if the list had to be altered after the processes forked, and fully accessed by both processes? Keep the whole thing in shared memory?
    Yes. Any non-shared memory will "split" between the parent and child processes as soon as it gets written to, so a change made by one side will not appear on the other side, because each side gets its own "original copy" when the OS detects that the memory is being written to.

    Also bear in mind that if the two processes are unrelated (that is, one isn't a direct fork of the other), you should not hold pointers in shared memory, as the OS may choose to map the shared memory to different addresses in different processes. You can solve that problem by keeping the linked list in an array and storing the index of the next node, or by storing the OFFSET from the start of the shared memory as the link to the next element - that way, you do not rely on the shared memory sitting in the same place in both processes.
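
    Something along these lines for the offset version (a rough, untested sketch - the names are made up):

    Code:
    #include <stddef.h>

    #define SHM_NULL ((size_t)-1)          /* "no node" marker; offset 0 is the head itself */

    typedef struct shm_node {
        size_t next_off;                   /* offset of the next node from the segment base */
        size_t prev_off;
        int    data;
    } shm_node;

    /* translate an offset into a pointer, relative to wherever THIS process
       attached the segment */
    static shm_node *node_at(void *base, size_t off)
    {
        return (off == SHM_NULL) ? NULL : (shm_node *)((char *)base + off);
    }

    /* translate a pointer back into an offset before storing it in the segment */
    static size_t off_of(void *base, shm_node *n)
    {
        return n ? (size_t)((char *)n - (char *)base) : SHM_NULL;
    }

    /* walking the list then looks like:
           for (shm_node *run = node_at(base, 0); run; run = node_at(base, run->next_off))
               ...
    */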

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  6. #6
    Registered User
    Join Date
    Nov 2008
    Posts
    4
    I was able to overcome this problem by mapping the linked list in shared memory, like so:

    Code:
        /* head, run, aux, shmid, shmid2, i and keysharedmem are assumed to be
           declared earlier in the function */
        if ((shmid = shmget(keysharedmem, sizeof(struct node), IPC_CREAT | 0666)) < 0) {
            perror("shmget");
            exit(1);
        }

        head = shmat(shmid, (void *) 0, 0);

        head->previous = 0;
        head->next = 0;
        head->exists = 0;

        aux = head;

        /* pre-create 2000 nodes, each in its own shared memory segment */
        for (i = 0; i < 2000; i++) {

            keysharedmem = keysharedmem + 1;

            if ((shmid2 = shmget(keysharedmem, sizeof(struct node), IPC_CREAT | 0666)) < 0) {
                perror("shmget");
                exit(1);
            }

            run = shmat(shmid2, (void *) 0, 0);

            aux->next = run;

            run->previous = aux;
            run->next = 0;
            run->exists = 0;

            aux = run;
        }
    Sure, it's not very classy, but it gets the job done for this school project. I'm not good enough at C, and the deadline is close enough that I don't want to be messing with offsets. Other than the obvious clutter of different shared memory segments created, are there any problems with this? So far it has tested OK.

  7. #7
    Registered User Codeplug
    Join Date
    Mar 2003
    Posts
    4,981
    Each shared memory segment you create will map in at least one page of memory (typically 4K). Before you run out of memory, though, you'll probably hit the system-imposed maximum number of shared memory segments allowed per process.

    >> ...any problems with this?
    Other than wasting a whole lot of memory and not being able to support very many nodes in the list at one time - it should work.

    gg

  8. #8
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Not to mention that it doesn't solve the problem that a particular shared memory block is given a different address in a different process, so the "next" link created in process 1 is pointing into a shared library that process 2 happens to have located at that very address. That is why you need to use offsets or arrays to implement linked lists in shared memory (or you need to validate that the memory is actually at the same address as what you think it should be). Note that, according to Murphy's law, this "not at the same address" problem [1] is most likely to happen when you least want it: when you are demonstrating in front of a big potential customer, or when the software is running an important job.

    [1] On a more serious note, the OS will do its best to keep the memory at the same address - but if, for one reason or another, that virtual memory location is taken, then the OS has no other choice. Undoubtedly, this ALWAYS happens when the system is in its most complex situation (lots of memory allocated, lots of shared libraries loaded, etc, etc).

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  9. #9
    Registered User Codeplug
    Join Date
    Mar 2003
    Posts
    4,981
    >> it doesn't solve the problem that a particular shared memory block is given a different address in a different process
    Good catch matsp. That was my first instinct, but I talked myself out of it somehow.

    >> aux->next = run
    So aux is actually "head", which is a shared memory address mapped into each process. The value assigned to "next" will be seen by all processes. The value being assigned is "run", which is another shared memory pointer. Now every process will see "head->next" as the shared memory address that was loaded into that one process. The problem is that any of the other processes could load that segment at a different address, making "head->next" an invalid pointer. Right now you are getting lucky that every process happens to load the segments at the same addresses.

    You can at least detect when this happens by using the second parameter to shmat().
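
    Something like this would do it (an untested sketch - the struct and function names are made up): the creating process records the address it attached at inside the segment itself, and every other process then requests that exact address via shmat()'s second argument, so a mismatch fails loudly instead of silently corrupting the list.

    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/shm.h>

    struct seg_header {
        void *expected_addr;               /* written once by the creating process */
        /* ... the list head would follow here ... */
    };

    void *attach_at_recorded_addr(int shmid)
    {
        /* attach anywhere first, just to read the recorded address */
        struct seg_header *tmp = shmat(shmid, NULL, 0);
        if (tmp == (void *)-1) {
            perror("shmat");
            exit(1);
        }
        void *want = tmp->expected_addr;
        shmdt(tmp);

        /* now insist on that exact address via the second argument; shmat()
           return values are page-aligned, so SHM_RND isn't needed */
        void *mem = shmat(shmid, want, 0);
        if (mem == (void *)-1) {
            perror("shmat at recorded address");
            exit(1);
        }
        return mem;
    }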

    If an array would suit your needs as well as a linked list would - a shared memory array is a bit easier to implement.
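
    Rough sketch of the array version (made-up names): one segment holds a fixed pool of nodes and the links are plain array indices, which stay valid no matter where each process attaches the segment.

    Code:
    #define MAX_NODES 2000
    #define NO_NODE   (-1)

    struct pool_node {
        int next;                          /* index into nodes[], or NO_NODE */
        int prev;
        int in_use;                        /* plays the role of your "exists" flag */
        int data;
    };

    struct node_pool {
        int head;                          /* index of the first node, or NO_NODE */
        struct pool_node nodes[MAX_NODES];
    };

    /* created once, before fork():
           pool = shmat(shmget(key, sizeof(struct node_pool), IPC_CREAT | 0666), NULL, 0);
       walking it:
           for (i = pool->head; i != NO_NODE; i = pool->nodes[i].next) ...
    */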

    gg

  10. #10
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by Codeplug
    >> it doesn't solve the problem that a particular shared memory block is given a different address in a different process
    Good catch matsp. That was my first instinct, but I talked myself out of it somehow.

    >> aux->next = run
    So aux is actually "head", which is a shared memory address mapped into each process. The value assigned to "next" will be seen by all processes. The value being assigned is "run", which is another shared memory pointer. Now every process will see "head->next" as the shared memory address that was loaded into that one process. The problem is that any of the other processes could load that segment at a different address, making "head->next" an invalid pointer. Right now you are getting lucky that every process happens to load the segments at the same addresses.
    Actually, the likely scenario is that the memory IS valid, but it's not the memory you think it is, because the reason the OS chose a different memory location is that there was another block of memory occupying that space. It is to the OS's benefit to use the same address in all processes that share the same memory, as it means that (often) the OS can share the physically same [not just the same value, but actually THE SAME] page-table entry for all of those processes, and save on memory usage and admin handling.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  11. #11
    Registered User
    Join Date
    Nov 2008
    Posts
    4
    What I'm currently doing is creating the whole list before I fork, in main(). The two child processes (one adds nodes, the other removes them) don't really free any memory; I'm simply using a flag that says whether a particular node exists. Their memory mapping doesn't change afterwards.

    Will the pointer issues you guys describe occur in this situation?

    And since I'm doing the allocation before forking, will malloc work instead of cluttering the memory area with thousands of shared memory segments? Again, thanks for the replies.
