Originally the plan was to have each client process open a log file, write data to it as actions were performed, and then close it when done. In the event that one of the client processes locked up and the server process had to kill it, I wanted the server to be able to fclose() the file without data being lost. That was entirely "theoretical" up until this point, since I hadn't even gotten shared memory to work - so if file pointers won't work between two processes either, I'll have to find another workaround.
Here's an even crazier idea (although I can't express how hopeful I am for you; I even lit little candles today that say "sean--mmap" in cursive on them):
Start a local socket server (PF_LOCAL) and have all the children connect to it as clients. Everyone starts with a copy of the data relevant to them, then you work out some signals, via the cleverness of the parent, for making sure everyone stays abreast of what they need to. Once you have a few general functions worked out, the local socket will be plenty fast - the data never leaves the kernel, so you're nowhere near network speeds.
I mean, correct me if I'm wrong, but there must be a gazillion super-complex database setups that run continuously over the internet, which means nothing but what passes through a socket.
I'm gonna have to start thinking of shared memory the same way I think of goto: if you think you need it, your program probably needs redesigning...
So I have an "input server", a number of "client processes", and whenever a client process is created, there's an "output server" piped to it (with a named FIFO), which handles the logging and the output. If the "client process" locks up, the input server knows the name of the FIFO and can send a specific message to the output server telling it to close the file and finish up.
So the input server just keeps its own linked list containing the process ID of each client and the name of its FIFO. Then I don't need shared memory - and only one process ever touches the logs, panels and windows.
There are cases where shared memory is very much useful - a particular case is some sort of "database" of data that is rarely or never changed, shared between applications. A typical example would be a font server. Fonts aren't exactly changing rapidly (you may find that different sizes and typefaces are being used at different times, but the names of the fonts available on a particular system tend to be pretty constant over time).
One could also consider sharing "one writer, many readers" data: e.g. a multiplayer game could have a shared memory section where each player has a "slot" in which to publish their own player's position and actions. All players can read the other players' slots, but each player will only ever modify their own data (which may include data that tells the other players to update, e.g. transfer of ownership of items, fighting that results in damage, etc.).
But for your average "I want this data to be transferred from here to there", then shared memory is probably not the right thing to do.
Yeah - when I took a good look at what HAD to be shared, it was WINDOW *'s and PANEL *'s for ncurses (plus that file pointer) - which, if I'm not mistaken, point to data that's been malloc'ed by ncurses inside one process. So that would've been a big problem..
So thanks for the responses - my apologies to MK 27 who was waiting for me to get mmap working!