views:

451

answers:

5

I have two processes and I want to share a linked list between them. One process will only read the list while the other modifies it (adds/deletes entries). Can you tell me how to achieve this?


To add more detail: the language is C and the platform is Linux. Shared memory seems to be one approach, but I do not know how to implement it.

If anyone can tell me how to achieve this, it would be a great help.

I have one idea: can I create a shared memory segment the size of a single node and then simply use malloc for the rest? That is, I would create the shared memory with shmget(key, SHMSZ, IPC_CREAT|0666), where SHMSZ is sizeof(struct node), so that only the head of the list is shared between the two processes. The head entry would have all fields set to 0 except the link field, which would point to the next entry; that entry (and all later ones) would be created with malloc. In my application one process only reads the list while the other adds/deletes entries.

I got one reply saying that I cannot use malloc, but I do not understand why. Could you please explain why malloc will not work here?

Below is the code I have been trying for this purpose, but I am getting a segmentation fault.

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

struct node
{
    int val;
    struct node *next;
};

void append(struct node *q, int val);

int main(void)
{
    key_t key = 5678;
    int shmid;
    struct node *head;

    if ((shmid = shmget(key, sizeof(struct node), IPC_CREAT | 0666)) < 0) {
        perror("shmget");
        exit(1);
    }

    /* cast syntax fixed: it was "struct node*(shmat(...))" */
    head = (struct node *)shmat(shmid, (void *)0, 0);
    if (head == (void *)-1) {
        perror("shmat");
        exit(1);
    }

    head->val = 0;
    head->next = 0;

    append(head, 2);
    append(head, 5);
    append(head, 6);

    exit(0);
}

void append(struct node *q, int val)
{
    struct node *temp1, *temp2;

    /* allocate and initialize the new node up front;
     * the original left temp2->next uninitialized and
     * never actually linked temp2 into the list */
    temp2 = malloc(sizeof(struct node));
    temp2->val = val;
    temp2->next = 0;

    if (q->next == 0) {
        q->next = temp2;
    } else {
        /* walk to the last node, then link the new one in */
        temp1 = q->next;
        while (temp1->next != 0)
            temp1 = temp1->next;
        temp1->next = temp2;
    }

    return;
}
+1  A: 

You've got a bunch of options:

  • use inter process communication (sockets, named pipes, whatever) to send a duplicate list to each process that needs a copy. You'll have duplicate lists which could mean you incur duplicate work, and you'll have a problem of keeping the lists in sync - this may or may not be relevant for your problem.
  • use a list server, and inter process communication to request or set list elements. You'll need to gracefully handle cases where the client never returns a list item (perhaps with a heartbeat and/or reservation time)
  • use shared memory to directly share the list between processes. You'll need to be very careful about locking, and it won't work across different systems.
  • use fork(), and let the child process inherit the list. Like the first solution, you'll wind up with duplicate lists with (potentially) duplicate work, or duplicate elements. Like the third solution, this doesn't work across systems.
  • use some combination of the above.
atk
+1  A: 

I don't know who Beej is but he writes excellent guides, and he's written one for Shared memory segments.

Ken
A: 

I would suggest you try to work with a fixed-size array (if possible) to simulate the linked list, instead of keeping an actual linked list in a shared memory segment.

As others have suggested, you can use shared memory, but if you want good performance and don't want to run into resource limits, you will have to reduce the number of shared memory segments you allocate.

If you allocate one big chunk of shared memory initially, you will have a heck of a problem trying to manage a linked list within that memory. Also, even if you have a linked list in shared memory, what guarantees that the pointers will be the same across processes? (You could use offsets, but then why not use an array?)

If you allocate a new shared segment for each node on the fly, how would the reader process know about it? Managing that could turn out to be a headache.

Good luck!

Moron
Can I create a shared memory segment the size of a single node, share only the head of the list between the two processes, and then use malloc for all the other entries (the same idea I describe in the question above)? Suggestions?
bhavin
You can't use malloc. If you do want to work with pointers, you will have to work with offsets, in which case you might as well use an array to simulate the linked list.
Moron
I am a novice at shared memory, so I do not know why I cannot use malloc. Could you please tell me why?
bhavin
@bhavin: Does malloc know where to allocate memory from? Do you specify it when you call malloc? How would it know that you have a shared memory segment? Also, if you already have memory allocated (shared memory), why do you even _need_ malloc?
Moron
Malloc does not know where to allocate the memory from; I thought malloc did not bother about shared memory segments. My idea was to let malloc allocate space and return an address, and to assign that address to one field of the structure (whose space is allocated by shmget). I am also posting the code I have tried in the question itself, since the space available in this comment section is not sufficient. Thanks for your reply.
bhavin
Yes, malloc knows where to get it from. Look at the first three questions as one group. Memory allocated by malloc won't be shared, so it won't help.
Moron
In my case one process only reads the list and the other adds/deletes entries. I thought it would be OK to share only the head of the linked list, but I am not sure that is sufficient. Will it cause a problem that, except for the head, all the other entries live in the writing process's address space rather than the reader's?
bhavin
Just passing pointer values around won't work. A pointer from process 1 is meaningless to process 2, as they share neither memory nor addressing. BTW, is this homework?
Moron
Yes, it is part of an academic project I have been doing. The problem with simulating a linked list with an array is that I would have to allocate a big chunk of shared memory, since the size needed is not known to me in advance; by allocating a very large chunk I may waste memory. The entries are dynamic, so at some times there might be plenty of them and at other times very few, which is why I thought a linked list was best suited for this application. Is there any other way of doing the job? Thanks for your reply.
bhavin
So this is homework (I am going to tag it as such). The way you have stated it does not suggest another answer. Perhaps if you actually explained the problem you are trying to solve, instead of asking how to implement a particular solution you came up with...
Moron
A routing daemon provides the control plane and produces routing entries, which I currently keep in a linked list. The forwarding plane is a separate program that uses these entries to forward data, so I have to somehow communicate the entries made by the control plane process to the forwarding plane process.
bhavin
Look at atk's answer. Pipes might be a good option.
Moron
A: 

Simple answer: Don't.

The benefit of multiple processes is that you don't have shared memory. This gives you isolation from the other process. Having a "data-only" barrier in place gives you great benefits as far as design and robustness go.

Instead, have the producing process send the new data (as data) to the consuming process. This can be done via STDIN/STDOUT, or a socket, or even something as low tech as a file.

kyoryu
A: 

The reason you cannot use malloc for the individual nodes is because malloc returns a pointer into the heap for a given process. The pointer means nothing to another process, which has its own heap. The address might be valid in both processes, but the "physical" memory that it refers to will be different.

If you are using shmget, you would need to use it for each node (or create a larger chunk and then use offsets into it as was already suggested). If individual shmget calls were used for each node, then the key could be the next pointer in the list. That would be a "simple" way of implementing it, but it might not be a good idea. I believe that there is a system-wide limit on shared memory objects. I think it is defined by SHMMNI, which is 4096 on a system I just checked.

Another way of implementing it would be to simply make one process the owner of the list (the process that creates and deletes nodes). Then use a single shared memory object to communicate between the two processes to hand out individual nodes, and use shared semaphores (semget, semop, etc.) to synchronize.

Mark Wilkins