-
Way Around Threads
Still on my server program...
The original flow chart consisted of the following:
- Setting up a connection and listen()ing for requests.
- In a loop, accept()ing requests and sending them to a new thread for processing.
I've spent the last couple of days researching my options for threading, and it seemed as though the best option would be to include a third-party library, which I really wanted to avoid. So I had another idea:
- Setting up a connection and listen()ing for requests.
- In a loop, accept()ing requests and processing them right there in the same thread.
I would, of course, specify a much larger listen() backlog, so more connections could wait to be accepted, to make up for having a single thread of execution.
My logic is that even though only one request could be processed at a time, it wouldn't affect performance, because not having to share processor time with any other thread would make up for that. I also wouldn't have to worry about thread safety for any resources I use.
I am, of course, not only really tired but also an idiot. Is my logic flawed? Do you have any advice? This solution just seems too good to be true.
Thanks in advance,
Sean.
-
This is common. Perhaps not for a web server or whatever it is you were doing, but having a single thread that does everything is a normal design. Most MUDs do this.
Code:
set up listening socket
while not done
    select on all sockets
    accept new connections
    read from those sockets which have stuff to read
    process info
    write to sockets which have stuff to be sent
Quzah.
-
The performance hit with one thread is that when you block on I/O, every request is kept waiting. With multiple threads, one request can be processed while another is blocked. (Note that I/O can be anything from reading from a socket to reading from disk.)
Your solution is called time-based multiplexing and is a common approach to servers, but it will be noticeably slower than a multithreaded solution in high-traffic situations.