I'm writing a simple program that makes multiple connections to different servers for status checks. All of these connections are created on demand, and up to 10 can exist simultaneously. I don't like the idea of one thread per socket, so I made all the client sockets non-blocking and put them into a select() pool.
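Roughly, the setup looks like this (a simplified sketch; address setup and most error handling are omitted):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>

#define MAX_CONNS 10

/* Start a non-blocking connect; select() reports the socket as
   writable once the connect completes (or fails). */
static int start_connect(const struct sockaddr *addr, socklen_t alen)
{
    int fd = socket(addr->sa_family, SOCK_STREAM, 0);
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    if (connect(fd, addr, alen) < 0 && errno != EINPROGRESS)
        perror("connect");
    return fd;
}

/* Wait until any of the pending sockets becomes ready. */
static void wait_for_ready(const int fds[], int nfds)
{
    fd_set wset;
    int i, maxfd = -1;

    FD_ZERO(&wset);
    for (i = 0; i < nfds; i++) {
        FD_SET(fds[i], &wset);
        if (fds[i] > maxfd)
            maxfd = fds[i];
    }
    /* NULL timeout: select() blocks until something is ready. */
    select(maxfd + 1, NULL, &wset, NULL, NULL);
}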
It worked great, until my client complained that when a target server stops responding, the wait before they get the error report is too long.
I've checked several topics in the forum. Some suggested using an alarm() signal or setting a timeout in the select() call, but I'm dealing with multiple connections, not just one. When a process-wide timeout signal fires, I have no way to tell which of the connections timed out.
Is there any way to change the system-default timeout duration?
You can use the SO_RCVTIMEO and SO_SNDTIMEO socket options to set timeouts for a socket's receive and send operations, like so:
struct timeval timeout;
timeout.tv_sec = 10;   /* allow each operation at most 10 seconds */
timeout.tv_usec = 0;

if (setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, (char *)&timeout,
               sizeof(timeout)) < 0)
    perror("setsockopt SO_RCVTIMEO failed");

if (setsockopt(sockfd, SOL_SOCKET, SO_SNDTIMEO, (char *)&timeout,
               sizeof(timeout)) < 0)
    perror("setsockopt SO_SNDTIMEO failed");
Edit: from the setsockopt man page:
SO_SNDTIMEO
is an option to set a timeout value for output operations. It accepts a struct timeval parameter with the number of seconds and microseconds used to limit waits for output operations to complete. If a send operation has blocked for this much time, it returns with a partial count or with the error EWOULDBLOCK if no data were sent. In the current implementation, this timer is restarted each time additional data are delivered to the protocol, implying that the limit applies to output portions ranging in size from the low-water mark to the high-water mark for output.
SO_RCVTIMEO
is an option to set a timeout value for input operations. It accepts a struct timeval parameter with the number of seconds and microseconds used to limit waits for input operations to complete. In the current implementation, this timer is restarted each time additional data are received by the protocol, and thus the limit is in effect an inactivity timer. If a receive operation has been blocked for this much time without receiving additional data, it returns with a short count or with the error EWOULDBLOCK if no data were received. The struct timeval parameter must represent a positive time interval; otherwise, setsockopt() returns with the error EDOM.
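So, per the descriptions above, a timed-out send() surfaces either as a partial count or as EWOULDBLOCK, and you can distinguish the two at the call site. A sketch, where sockfd, req, and req_len are placeholders for the connected socket and the request buffer:

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>

ssize_t sent = send(sockfd, req, req_len, 0);
if (sent < 0 && (errno == EWOULDBLOCK || errno == EAGAIN))
    fprintf(stderr, "send timed out, nothing sent\n");
else if (sent >= 0 && (size_t)sent < req_len)
    fprintf(stderr, "send timed out after %zd of %zu bytes\n",
            sent, req_len);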