Hi,
I was writing/upgrading a network program that sends a SYN flood (a series of TCP SYN packets); it is used for testing. When I send a stream of SYN packets with no delay from a Linux machine, it achieves roughly 140,000-150,000 packets per second. I measured this with setitimer(): an alarm fires every second, and I count the total packets sent until I hit Ctrl+C, at which point a SIGINT handler averages the count over the elapsed time.
Now my question: when I add a user-configurable delay between two successive sendto() calls using usleep(), the rate drops to 50 packets per second. I am not sure this is right. How can a delay of microseconds drop the rate so much? Moreover, whatever delay I provide, the rate stays at the same 50 pps.
Here is a snippet of the code where I call usleep():
Code:
send_syn(so, pPkt, sip, sport, dip, dport, pktid, seq);
pktcount++;
if ((lmt != 0) && (pktcount == lmt)) goto done;  /* stop at the configured packet limit */
signal(SIGINT, exit_handler);  /* re-installed each iteration; once before the loop would suffice */
if (usl) { usleep(usl); }      /* if a delay was given on the command line, sleep that long */
Can someone clarify this for me?
Thanks,