So in my program I have a server / parent process with a number of client / child processes that need to run concurrently. They need to share a stream of data (flowing from the parent to the children; I'm pretty sure I don't need any data going the other way). They also share a linked list of structs, where each node represents a child and contains the file pointer for the FIFO to that client, the pid, a unique name, and so on. The first node is malloc'd before the first fork(), and the list is expanded as new children are created by other child processes.
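For reference, each node looks roughly like this (the field names are just placeholders, not final):

    #include <stdio.h>      /* FILE */
    #include <sys/types.h>  /* pid_t */

    /* one node per child -- field names are my placeholders */
    struct child_node {
        FILE *fifo;               /* stream opened on the FIFO to this client */
        pid_t pid;                /* child's process id */
        char name[32];            /* unique name for this child */
        struct child_node *next;  /* next child in the list */
    };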
So far, so good - does anyone see any problems with this design? Any thoughts or comments? It's my first larger project in Linux, so I'm running into a lot of new stuff.
My main question, however, is about fork'ing and exec'ing. Right now the code is tiny - under 100 lines, and I doubt it'll make it past 150 by the time I'm done. So when I fork, I just have the parent call server() and the child call client(). Is that bad? Should I split those functions into separate programs and launch them through exec()? Are there drawbacks either way? Google only seems to yield man page entries and very simplistic tutorials that don't answer the question!
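Concretely, what I'm doing now is something like this minimal single-binary sketch (server() and client() are stubs standing in for the real loops):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* stubs standing in for the real loops */
    static void server(void) { puts("parent: serving"); }
    static void client(void) { puts("child: working"); }

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            client();    /* child branch */
        } else {
            server();    /* parent branch */
            wait(NULL);  /* reap the child */
        }
        return 0;
    }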
edit: If compiling them as separate files and exec'ing them results in a much smaller "footprint" - I'd definitely like to do it. But if it doesn't make a big difference, I'd rather keep everything in one small file - it's more my style... The code for the client process might grow a little, but I think the server code is going to stay under 20 lines.
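If I went the exec() route instead, I assume it would look roughly like this ("./client" being a placeholder path for the separately compiled client binary):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* replace the child's image with the separate client binary;
               "./client" is a placeholder path */
            execl("./client", "client", (char *)NULL);
            perror("execl");  /* reached only if exec fails */
            return 1;
        }
        /* parent carries on as the server, then reaps the child */
        wait(NULL);
        return 0;
    }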