I've written a threaded process logger in C++, and I'm disappointed by how much memory it eats. I suspect I was assuming something (wrongly) about threads and resource sharing.

The idea is simple: the logger runs as a daemon server. When it receives a request to monitor a new process, it launches a thread to do so. So all the threads are identical as far as code goes -- they are the same function.
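The launch path is essentially this (a simplified sketch; `Request` and `monitor_process` are stand-ins for the real code):

```cpp
#include <pthread.h>

// Simplified sketch; Request and monitor_process stand in for the real code.
struct Request { int pid; };

void* monitor_process(void* arg) {
    Request* req = static_cast<Request*>(arg);
    // ... poll /proc/<pid>, write log records, exit when the process dies ...
    delete req;
    return nullptr;
}

void handle_request(int pid) {
    pthread_t tid;
    // Default attributes: each thread gets the default stack reservation.
    pthread_create(&tid, nullptr, monitor_process, new Request{pid});
    pthread_detach(tid);
}
```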

When it starts, it has a respectably small virtual size, ~15 MB, with no threads running, just the main server process waiting. The stack is under 1 MB. Then each thread adds ~70 MB to the total stack size. Pretty soon the process logger is using more memory than most of the processes it is monitoring, lol.
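I assume the per-thread hit is mostly the default stack reservation, so presumably it could be capped with an explicit attribute -- something like the sketch below (the 256 KB figure is my guess at a sane cap, not a measured requirement):

```cpp
#include <pthread.h>
#include <cstdio>

void launch_capped(void* (*fn)(void*), void* arg) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);

    // See what the default reservation actually is on this system.
    size_t def = 0;
    pthread_attr_getstacksize(&attr, &def);
    std::printf("default stack reservation: %zu bytes\n", def);

    // Cap the per-thread stack; 256 KB is a guess at what the
    // monitoring function needs, not a measured figure.
    pthread_attr_setstacksize(&attr, 256 * 1024);

    pthread_t tid;
    pthread_create(&tid, &attr, fn, arg);
    pthread_detach(tid);
    pthread_attr_destroy(&attr);
}
```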

What's the deal with that? At this rate I might as well run a whole bunch of independent processes. I thought part of the point of multi-threading was to save resources compared with separate processes, but I seem to have been wrong.

I guess the better model would be a single-process server, no threads, that multiplexes all the requests itself -- the only reason I didn't do that in the first place was laziness: launching a thread per request seemed easier. There isn't much code to change, but I thought I'd get some input first.
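For what it's worth, the threadless version I have in mind is just a poll() loop that wakes on a new request or on a timer and scans the pid list -- roughly like this (heavily simplified; `check_process` is a hypothetical stand-in):

```cpp
#include <poll.h>
#include <vector>

// Stub for the per-pid check; the real one would read /proc/<pid> and log.
static void check_process(int /*pid*/) { /* ... */ }

void serve(int listen_fd, std::vector<int>& pids) {
    for (;;) {
        pollfd pfd = {listen_fd, POLLIN, 0};
        // Wake on a new monitoring request, or time out once a second
        // so the pid list gets scanned on a steady tick.
        if (poll(&pfd, 1, /*timeout ms=*/1000) > 0) {
            // accept + parse the request, append the new pid to pids ...
        }
        for (int pid : pids)
            check_process(pid);
    }
}
```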