First, some background about my project. I'm writing a client/server app for load testing another application on a Solaris box. The client is in VB6 and the server is in C (these are the only tools available to me at the moment). The server will receive a request from the client consisting of which "type" of load test to run (number of connections, which test scripts, and a number of other parameters). The server will launch the test and report status back to the client (load, number of current connections, failures, etc.).
BTW, currently this is all done with Perl and Korn shell scripts on the Solaris box. Status is tracked by tailing a number of log files.
Finally, my question. There seem to be a number of different strategies for writing a server (duh), and I don't know which is best for my project. I thought about having the server be nothing more than a listener that forks off a process for each connection, with each forked process communicating with the client. Then I thought maybe only the server should communicate back to the client: the server would fork off the "test" processes and use shared memory to relay status information.
client <--TCP--> server <--shared mem--> forked test processes
Now I'm reading through my copy of Stevens' UNIX Network Programming, and I see an example of a server that handles everything within a single process. I am thoroughly confused about which approach to take.
I'm not sure I was clear enough in describing my application's goals, but is there anything anyone can see that would make you pick one strategy over another?