Hi,
I'm looking at some ancient network code, and I'm wondering whether there's a valid reason it uses writev() to send data instead of just write(). For receiving data it uses read(), not readv(), so what's special about sending?
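
I can't post the real code, but the pattern is roughly the one below (the names send_message, hdr, body and sock are just mine for illustration, not from the actual code):

#include <sys/uio.h>
#include <unistd.h>
#include <string.h>

ssize_t send_message(int sock, const char *hdr, const char *body)
{
    /* writev() gathers both buffers into a single syscall without
     * copying them into one temporary buffer first */
    struct iovec iov[2];
    iov[0].iov_base = (void *)hdr;
    iov[0].iov_len  = strlen(hdr);
    iov[1].iov_base = (void *)body;
    iov[1].iov_len  = strlen(body);
    return writev(sock, iov, 2);
}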

The reason I'm asking is that I'm wondering whether changing it to write() could cause any problems. When the receiving end closes the socket, the writev() doesn't fail, so I'm hoping write() might fail as described in "Pitfall 2" of this article:
http://www.ibm.com/developerworks/li...pit/index.html
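
For what it's worth, the kind of failure detection I'm hoping to get is roughly this; the SIGPIPE handling is just my guess at how I'd do it, not something the existing code does:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

ssize_t checked_write(int sock, const void *buf, size_t len)
{
    /* ignore SIGPIPE so a write to a dead peer returns -1/EPIPE
     * instead of killing the process */
    signal(SIGPIPE, SIG_IGN);

    ssize_t n = write(sock, buf, len);
    if (n < 0 && errno == EPIPE)
        fprintf(stderr, "peer closed the connection: %s\n", strerror(errno));
    return n;
}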