Get socket queue length
I have a UDP app which is expected to scale under heavy load spikes (normal load is 100-500 tps, but spikes can reach 50-80 ktps or more). Currently I have a master thread which spawns a thread pool to handle the UDP requests. What I need is a way to monitor the load, so I can determine whether more threads are needed in the pool. I was looking for a way to determine how many requests are pending in the UDP socket queue, but all I can find is an ioctl (FIONREAD) that tells me how big the next packet is, which is useless to me.
Is there a way to tell how many packets are in the socket queue? Or does someone know of another way to determine if spawning more threads would improve throughput under a load spike?
I cannot see why you would need a thread pool to handle UDP protocols. Most UDP protocols are stateless, which means there should be no need for separate threads. There is no stream to contend with, no buffering to deal with. You should be able to process packets as they arrive, no?
In other words, how many sockets do you have open? For UDP, it's usually just one, meaning you really only need one thread (or at most, a number of threads equal to the number of cores).
There are multiple threads per socket so that when a packet arrives, there are still threads listening while that packet is being handled.
And yes, we have actually tested it. On a dev machine, a single thread can handle approximately 18,000 packets/transactions per second; adding a second thread increases capacity to approximately 35k; a third thread maxes the dev machine out at 45k, but only because at that point I'm CPU-bound. The production machines have much more CPU to spare.
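A minimal sketch of that arrangement (names are mine, not from the thread): several worker threads all block in `recvfrom()` on the same UDP socket, so the kernel wakes one waiter per datagram and the others keep listening while it handles its packet. Note that in CPython the GIL limits actual CPU parallelism, so treat this as illustrating the structure, not the throughput numbers above.

```python
import socket
import threading

def start_udp_workers(sock, n_threads, handle):
    """Spawn n_threads workers that all block in recvfrom() on the
    same UDP socket; the kernel hands each datagram to one waiter."""
    def worker():
        while True:
            try:
                data, addr = sock.recvfrom(65535)
            except OSError:          # socket closed -> worker exits
                return
            handle(data, addr)       # one thread busy here; others still listen
    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    return threads
```

Usage: bind one socket, call `start_udp_workers(sock, 3, handler)`, and close the socket to shut the pool down.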
On a single-core, single-CPU machine, you might be close to the truth. But even cheap desktops these days are at least multi-core.
And even if you were right, I *still* would need to know if the queue buffer is backing up under load. If for no other reason than to alert the NOC of a load problem.
So, you're saying that the optimum number of threads is more than the number of cores. I guess I can buy that -- can't argue with empirical measurement.
I think you can derive the information you need from the packet timings themselves rather than querying the kernel. Applying a little queueing theory, you can detect that the per-thread wait queues are growing over time and launch more threads into the pool. If you really need an instant response to sudden load spikes, you need to keep threads in reserve in any case.
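The queueing-theory rule of thumb can be sketched concretely: with arrival rate λ and a per-thread service rate μ, utilization is ρ = λ / (c·μ) for c threads, and the queue grows without bound as ρ approaches 1, so you size c to hold ρ below some target. The 18k/sec per-thread rate comes from the dev-box measurements above; the 0.7 target and the 3-thread floor are assumptions for illustration.

```python
import math

class PoolSizer:
    """Size a thread pool from queueing theory: with arrival rate lam
    and per-thread service rate mu, utilization is rho = lam/(c*mu);
    choose the smallest c keeping rho below target_rho."""
    def __init__(self, mu_per_thread, min_threads=3, target_rho=0.7):
        self.mu = mu_per_thread          # packets/sec one thread can service
        self.min_threads = min_threads   # floor for "normal" spikes
        self.target = target_rho         # keep utilization below this

    def threads_needed(self, arrival_rate):
        # smallest c with arrival_rate / (c * mu) <= target
        c = math.ceil(arrival_rate / (self.mu * self.target))
        return max(self.min_threads, c)
```

With the numbers from this thread (μ ≈ 18,000/sec per thread), normal load of 500 tps stays at the 3-thread floor, while an 80 ktps spike calls for 7 threads.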
After looking for about 15 minutes, I do not think there is a way from userspace to determine the number of packets ready to be read from a UDP socket.
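That's true for a packet count, but on Linux the `rx_queue` column of `/proc/net/udp` does expose how many bytes are currently queued on a socket, which may be enough for "is the buffer backing up" alerting. A sketch (the function name is mine; this is Linux-specific, IPv4-only, and the figure includes kernel bookkeeping overhead, not just payload bytes):

```python
def udp_rx_queue_bytes(port):
    """Return the bytes waiting in the receive queue of the IPv4 UDP
    socket bound to `port`, by parsing /proc/net/udp (Linux only).
    Returns None if no socket on that port is found."""
    with open("/proc/net/udp") as f:
        next(f)                                  # skip the header line
        for line in f:
            fields = line.split()
            # fields[1] is local address as HEXIP:HEXPORT
            local_port = int(fields[1].split(":")[1], 16)
            if local_port == port:
                # fields[4] is "tx_queue:rx_queue" in hex
                return int(fields[4].split(":")[1], 16)
    return None
```

Polling this from a monitor thread and alerting when it stays nonzero would cover the NOC-alerting case mentioned above, even without an exact packet count.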
Yeah, was afraid of that.
I'm fully planning on having more threads running than required; normal load would barely stress one thread. The problem is that a heavy spike could easily increase load by a factor of 500. I'm not keen on having 10 or more threads idling when, 98% of the time, one thread would be enough. My plan was to have 3 running to handle "normal" spikes, then fire up more if needed.
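A cheap way to drive that "fire up more if needed" decision is an exponentially weighted estimate of the arrival rate, updated once per packet in the receive path; when the smoothed rate crosses a threshold, the master thread spawns workers. A sketch (the class and the alpha smoothing value are assumptions, not tuned figures):

```python
import time

class RateMeter:
    """Exponentially weighted estimate of packet arrival rate.
    Call tick() once per received packet; cheap enough for the
    receive path. alpha controls smoothing (higher = more reactive)."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.rate = 0.0      # smoothed packets/sec
        self.last = None     # timestamp of previous packet

    def tick(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last is not None:
            dt = now - self.last
            if dt > 0:
                inst = 1.0 / dt                      # instantaneous pkts/sec
                self.rate += self.alpha * (inst - self.rate)
        self.last = now
        return self.rate
```

The smoothed rate settles near the true rate after a few dozen packets, so a spike from 500 tps toward 80 ktps shows up within milliseconds of the spike starting, which is when the extra threads get launched.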