I am writing an application in C# that needs to handle incoming connections, and I've never done any server-side programming before. This leads me to the following questions:
Thanks in advance.
The listen backlog is, as Pieter said, a queue which is used by the operating system to store connections that have been accepted by the TCP stack but not yet by your program. Conceptually, when a client connects it's placed in this queue until your Accept() code removes it and hands it to your program.
As such, the listen backlog is a tuning parameter that can be used to help your server handle peaks in concurrent connection attempts. Note that it is concerned with peaks in concurrent connection attempts and has nothing to do with the maximum number of concurrent connections your server can maintain. For example, if you have a server which receives 10 new connections per second then it's unlikely that tuning the listen backlog will have any effect, even if these connections are long lived and your server is supporting 10,000 concurrent connections (assuming your server isn't maxing out the CPU serving the existing connections!). However, if a server occasionally experiences short periods when it is accepting 1,000 new connections per second then you can probably prevent some connections from being rejected by tuning the listen backlog to provide a larger queue, and therefore give your server more time to call Accept() for each connection.
As for pros and cons: the pro is that you can handle peaks in concurrent connection attempts better, and the corresponding con is that the operating system needs to allocate more space for the larger listen backlog queue. So it's a performance vs. resources trade-off.
Personally I make the listen backlog something that can be externally tuned via a config file.
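To make that concrete, here's a minimal sketch of what I mean, assuming a classic .NET Framework App.config with an appSettings entry; the "ListenBacklog" key, the default of 100 and the port are just illustrative choices, not anything the framework mandates:

```csharp
using System;
using System.Configuration;
using System.Net;
using System.Net.Sockets;

class ListenerSetup
{
    // Creates a listening socket whose backlog comes from config rather
    // than being hard coded, so it can be tuned without a rebuild.
    public static Socket CreateListener(int port)
    {
        int backlog;
        if (!int.TryParse(ConfigurationManager.AppSettings["ListenBacklog"], out backlog))
        {
            backlog = 100;   // fall back to a default if the setting is missing or invalid
        }

        var listener = new Socket(AddressFamily.InterNetwork,
                                  SocketType.Stream,
                                  ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, port));
        listener.Listen(backlog);   // backlog sizes the pending connection queue
        return listener;
    }
}
```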
How and when you call Listen() and Accept() depends upon the style of sockets code that you're using. With synchronous code you'd call Listen() once with a value, say 10, for your listen backlog and then loop calling Accept(). The call to Listen() sets up the end point that your clients can connect to and conceptually creates the listen backlog queue of the size specified. Calling Accept() removes a pending connection from the listen backlog queue, sets up a socket for application use and passes it to your code as a newly established connection. If the time taken by your code to call Accept(), handle the new connection, and loop round to call Accept() again is longer than the gap between concurrent connection attempts then you'll start to accumulate entries in the listen backlog queue.
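For example, a minimal synchronous accept loop might look like the following sketch (the port number and the HandleClient() placeholder are mine, just to show the shape of the loop):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class SyncServer
{
    static void Main()
    {
        var listener = new Socket(AddressFamily.InterNetwork,
                                  SocketType.Stream,
                                  ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 5000));

        // Listen() sets up the end point and, conceptually, the backlog queue.
        listener.Listen(10);

        while (true)
        {
            // Accept() removes one pending connection from the backlog queue
            // and returns a new socket for that client.
            Socket client = listener.Accept();

            // The longer this takes before the loop gets back to Accept(),
            // the more entries build up in the listen backlog.
            HandleClient(client);
        }
    }

    static void HandleClient(Socket client)
    {
        // ... read from / write to the client socket here ...
        client.Close();
    }
}
```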
With asynchronous sockets it can be a little different. If you're using async accepts you listen once, as before, and then post several (again, configurable) async accepts. As each one completes you handle the new connection and post a new async accept. In this way you have a listen backlog queue and a pending accept 'queue', so you can accept connections faster (what's more, the async accepts are handled on thread pool threads, so you don't have a single tight accept loop). This is usually more scalable and gives you two points to tune to handle more concurrent connection attempts.
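A sketch of that pattern using BeginAccept()/EndAccept() is below (AcceptAsync() with SocketAsyncEventArgs works along the same lines); the backlog of 100, the 10 pending accepts and the port are again just illustrative values that you'd probably pull from config:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class AsyncServer
{
    const int Backlog = 100;        // size of the OS listen backlog queue
    const int PendingAccepts = 10;  // size of the pending accept 'queue'

    static Socket listener;

    static void Main()
    {
        listener = new Socket(AddressFamily.InterNetwork,
                              SocketType.Stream,
                              ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 5000));
        listener.Listen(Backlog);

        // Post several accepts up front; each completion posts a replacement,
        // so there are always PendingAccepts outstanding.
        for (int i = 0; i < PendingAccepts; i++)
        {
            listener.BeginAccept(OnAccept, null);
        }

        Thread.Sleep(Timeout.Infinite);   // keep the demo process alive
    }

    static void OnAccept(IAsyncResult ar)
    {
        // Runs on a thread pool thread, so there's no single tight accept loop.
        Socket client = listener.EndAccept(ar);

        // Top the pending accept 'queue' back up before doing any work.
        listener.BeginAccept(OnAccept, null);

        // ... hand 'client' off to your per-connection handling ...
        client.Close();
    }
}
```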