I have written two pairs of programs (server.c and client.c) in Linux: one for the UNIX domain (AF_UNIX), the other for the INTERNET domain (AF_INET). Both are working fine.
listen() is called with a backlog queue length of 3 in both servers:

listen(sockfd, 3);
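For context, here is a minimal sketch of how such an AF_INET server might set up its listening socket (the port number and variable names are assumptions; the actual server.c is not shown):

#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP socket */
    if (sockfd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5000);             /* assumed port */

    if (bind(sockfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); return 1;
    }
    if (listen(sockfd, 3) < 0) {                    /* backlog = 3 */
        perror("listen"); return 1;
    }
    /* ... accept() loop would follow here ... */
    return 0;
}

The AF_UNIX server is set up the same way, except with a struct sockaddr_un holding a filesystem path instead of an IP address and port.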
In the UNIX domain (AF_UNIX): while one client is connected to the server, if I try to connect more clients, three are kept in the queue and the request of the fourth is declined (as I desired: 3 in the waiting queue).
In the INTERNET domain (AF_INET): requests from more than three clients are kept in the pending queue rather than declined.
Why isn't a request from a fourth client rejected, even when the backlog queue length is three? And why is the behavior of listen() (and other socket calls) protocol-dependent?
Operating systems actually use larger queues for incoming TCP connections than the one specified to listen(). How much larger depends on the operating system.
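On Linux, for example, listen(2) documents two relevant details: since kernel 2.2 the backlog argument specifies only the length of the completed-connection queue (the incomplete queue is sized separately via /proc/sys/net/ipv4/tcp_max_syn_backlog), and the value passed to listen() is silently truncated to the cap in /proc/sys/net/core/somaxconn. A small sketch that reads that cap (the /proc path is Linux-specific):

#include <stdio.h>

int main(void)
{
    /* On Linux, listen(2) silently truncates the backlog to this cap. */
    FILE *f = fopen("/proc/sys/net/core/somaxconn", "r");
    int cap;
    if (f != NULL && fscanf(f, "%d", &cap) == 1)
        printf("net.core.somaxconn = %d\n", cap);
    else
        printf("could not read somaxconn (non-Linux system?)\n");
    if (f != NULL)
        fclose(f);
    return 0;
}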
int listen(int socket_fd, int backlog);
For a given listening socket, the kernel maintains two queues: an incomplete-connection queue (a SYN has arrived but the three-way handshake has not finished) and a completed-connection queue (the handshake has finished and the connection is waiting for the server to call accept()).
Historically, the backlog argument specified the sum of the lengths of both queues, but there has never been a formal definition of what backlog means.
Berkeley-derived implementations add a fudge factor to the backlog, so the total queue length = factor * backlog. On those systems the factor is 1.5; a backlog of 5, for example, really allows up to 8 queued entries.
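This is also why the fourth TCP client in the question is not rejected: connect() returns success as soon as the three-way handshake completes and the connection enters the completed queue, whether or not the server has called accept() yet. A small test-client sketch that makes this visible (the address 127.0.0.1:5000 and the loop count are assumptions):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    for (int i = 0; i < 6; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(5000);                    /* assumed port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumed host */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            printf("client %d: handshake completed (queued for accept)\n", i);
        else
            perror("connect");
        /* descriptors are left open on purpose so connections stay queued */
    }
    pause();   /* keep the process (and its connections) alive */
    return 0;
}

Run it against a server that listens but never accepts: typically more connections than the nominal backlog of 3 will complete the handshake, and exactly how many depends on the operating system, per the table referenced below.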
A very detailed and thorough explanation is given by W. Richard Stevens; a table showing the actual values for seven operating systems can be found in Stevens, Fenner, Rudoff, "UNIX Network Programming, Volume 1: The Sockets Networking API", Third Edition, p. 108.