A question regarding IPC
I have a linked list of structures. These structures are created in one program and then passed to other programs on Linux.
It is commonly said that pipes are the most efficient IPC, and I can pass the structure list to the other program that way, but is it really efficient?
Shouldn't shared memory be more efficient in my case,
because I think I might be able to read and write the whole linked list at once? What do you people say?
If you put them in shared memory, there's only one copy of them. If you have to pass them through a pipe, each program is going to get its own copy, so efficiency (in terms of access speed) is lost.
Your design is going to depend on your program's requirements/objectives.
My actual program
Actually, I am programming a flow-based analyzer. The program should work at high speed. One program matches the packets and updates the linked list of structures based on those packets. These details are then passed to another program, which applies some algorithms to the statistics. So what do you say: pipes (FIFOs) or shared memory?
Well if you're really that concerned about performance, then make them all part of the same program.
Then it's just a function call parameter.
Even if you use shared memory, there are going to be plenty of rather expensive context switches alternating between one process and another.
Not to mention all the semaphore / mutex overhead to make sure that one process doesn't trash the shared memory just as the other is looking at it.
Actually, both can't be part of the same program, because they have to be separate. It is in the design that the statistics collector is one program and the analyzer is another.
Now what do you recommend:
pipes, FIFOs, shared memory, or something else?
Since you've already committed yourself to the millstone around your neck, the rest doesn't really matter.
Saying which is best relies on knowing details of your environment, and details of your competence at implementing the correct solution correctly (in short, it's not going to happen).
Try several approaches and learn something!
> It is in the design that statistics collector is a different program and the analyzer is different.
Designed by whom? Your tutor?
In a production environment, this would be challenged, or at least have to be justified with some pretty good arguments.
Let's forget everything and speak generally:
if I say that speed is what I require and I have to use IPC, then what do you recommend,
a FIFO or shared memory?
I am also going to try the performance difference myself and will let you know. I got confused because the Sams Linux programming book says that shared memory is the fastest, while another document on the internet says that the FIFO is the fastest.
What do you think, shared memory or FIFO?