I am currently writing some code which will run on a remote Linux server. Basically it is an agent-based simulation, and a run could take anywhere from a second to a day... pretty much as long as we feel like running the simulation. Since the simulation could run for a while, we obviously want to run it in the background and just let it do its thing.

Recently I started designing a GUI front end for this simulation program which will allow us to play around with the simulation parameters before we actually start a run. I have been faced with a couple of hard decisions, however.

Essentially the desired behavior would be:
1. Start up the GUI, define some parameters, start the simulation.
2. The GUI spawns a process on the remote server which runs the simulation.
3. Close the GUI and come back later to see how the simulation is doing.

I have thought about doing this in two different ways:
1. Client/Server model. Have a daemon running constantly in the background on the server machine, and then the GUI can be a desktop client. Let the GUI connect to the server and retrieve info about how the current simulation is doing or let it start up a new run of the simulation.
2. "Fork" a process model. Let the GUI app reside on the server and open it over X11 forwarding (since I doubt I will ever actually be physically at the server, I will always be running this over an SSH connection). Define the parameters in the GUI, and when I start the simulation, have it "spawn" a new process, passing the parameters in as command-line arguments. Have the simulation process output its results to a data file, and then when I come back and open the GUI again, I can have it load the data file and display the results to me.
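To make option 2 concrete, here is a minimal sketch of the launch step as I imagine it. The `./simulation` binary, the flag names, and the file names are all placeholders for whatever the real program ends up using:

```cpp
#include <cstdlib>
#include <sstream>
#include <string>

// Build the command line for a background run. The binary name and
// parameter flags below are hypothetical placeholders.
std::string build_command(int agents, int steps, const std::string& outfile) {
    std::ostringstream cmd;
    // nohup + the trailing '&' detach the run from the GUI's session,
    // so it keeps going after the GUI (or the SSH session) is closed.
    cmd << "nohup ./simulation --agents=" << agents
        << " --steps=" << steps
        << " --out=" << outfile << " >sim.log 2>&1 &";
    return cmd.str();
}

// In the GUI, launching would then just be:
//   std::system(build_command(100, 5000, "results.dat").c_str());
```

This leans on the shell rather than any process-spawning API, so it is only as portable as `nohup` is, but it shows the shape of the idea.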

Personally, I favor the second method over the first. My main reason is that it seems silly to have a daemon constantly running on the server for this kind of thing, and if the server ever got rebooted I would have to worry about setting the daemon up again. Besides, the server I am running on is behind a VPN... and honestly I don't know much about how the VPN works.

Anyway, let's say I go with method 2. I actually want this to be as cross-platform as possible, even though I know it's going to be running on a Linux server. I am developing the simulation on a Windows machine... so it kind of has to be cross-platform anyway.

What options do I have for spawning processes? fork() by itself is out of the question, because it just duplicates the current process... which isn't what I want.
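As I understand it, the usual Unix idiom is fork() immediately followed by one of the exec functions, which replaces the duplicated child with a different program while the parent carries on. A minimal sketch of that, assuming a POSIX system (the helper name and the program being run are just examples):

```cpp
#include <unistd.h>
#include <sys/wait.h>

// The classic fork+exec idiom: fork() duplicates the process, then
// exec*() in the child replaces the duplicate with a *different*
// program. Runs `path` with `argv`, waits, and returns its exit code.
int run_program(const char* path, char* const argv[]) {
    pid_t pid = fork();
    if (pid < 0) return -1;           // fork failed
    if (pid == 0) {
        execv(path, argv);            // only returns on failure
        _exit(127);
    }
    int status = 0;
    waitpid(pid, &status, 0);         // a GUI would skip this and carry on
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The parent here waits for the child, which a GUI would not want to do, but it shows that fork alone isn't the whole story.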

I know these functions exist:

execl, execle, execlp, execlpe, execv, execve, execvp, execvpe, spawnl, spawnle, spawnlp, spawnlpe, spawnv, spawnve, spawnvp, spawnvpe

I don't think any of the exec functions are what I want, because they replace the calling process's image rather than creating a separate new process. The "spawn" family of functions sounds like what I want, but I still have some things I am unsure about, namely:

1. None of these functions are standard C++. I know they are supported by MinGW/gcc on Windows, but are they supported by gcc/g++ on Linux?
2. If the "spawn" family of functions spawns a child process, will that child process automatically close when its parent process closes? I don't want that to happen. I want to be able to close the GUI and let the simulation continue to run in the background. If closing the parent process does indeed close the child process, then what options do I have?
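For what it's worth, here is a sketch of what I think "detaching" the child would look like on POSIX: fork a child, have it call setsid() so it leaves the GUI's session (and therefore won't get a SIGHUP when that session ends), and then exec the simulation. The function name and its arguments are my own invention:

```cpp
#include <unistd.h>
#include <fcntl.h>

// Launch `path` detached from the calling (GUI) process, so it keeps
// running after the GUI exits. Returns the child's pid (or -1 on error).
// The name spawn_detached and its signature are hypothetical.
pid_t spawn_detached(const char* path, char* const argv[]) {
    pid_t pid = fork();
    if (pid != 0) return pid;          // parent (or fork error) returns here
    setsid();                          // new session: no controlling terminal,
                                       // so no SIGHUP when the GUI's session ends
    // Point stdio at /dev/null so the child isn't tied to the GUI's terminal.
    int devnull = open("/dev/null", O_RDWR);
    if (devnull >= 0) {
        dup2(devnull, 0); dup2(devnull, 1); dup2(devnull, 2);
        if (devnull > 2) close(devnull);
    }
    execv(path, argv);                 // only returns on failure
    _exit(127);
}
```

The simulation itself would still write its results to a data file, as in option 2 above; the GUI never has to hear from the child again.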

(The GUI that I am using is wxWidgets... thus it is cross-platform and can run on Windows, Linux, Mac, etc., so the GUI isn't a problem.)

Anyways, what do you guys think?