I'm experimenting with FastCGI on nginx, but I've run into some problems. Nginx doesn't reuse connections: it sends 0 in the BeginRequest flags, so the application should close the connection once the request has finished.
I have the following code for closing:
socket.shutdown(SocketShutdown.BOTH);
socket.close();
The problem is that the connections are not actually closed. They linger on in TIME_WAIT, and nginx (or something) won't keep opening new connections. My guess is that I'm doing something wrong when closing the sockets, but I don't know what. On a related note: how can I get nginx to keep connections open?
This is using nginx 1.0.6 and D 2.055
EDIT: I haven't gotten any closer, but I also checked the linger option, and it's off:
Linger l;
socket.getOption(SocketOptionLevel.SOCKET, SocketOption.LINGER, l);
assert(l.on == 0); // off
getOption returns 4, though. No idea what that means; the return value is undocumented.
EDIT: I've also tried setting TCP_NODELAY on the last message sent, but this didn't have any effect either:
socket.setOption(SocketOptionLevel.TCP, SocketOption.TCP_NODELAY, 1); // TCP_NODELAY lives at the TCP level, not SOCKET
EDIT: nginx 1.1.4 supports keep-alive connections. This doesn't work as expected, though. It correctly reports that the server is responsible for connection lifetime management, but it still creates a new socket for each request.
Regarding nginx (v1.1) keepalive for FastCGI: the proper way to configure it is as follows:
upstream fcgi_backend {
    server localhost:9000;
    keepalive 32;
}

server {
    ...
    location ~ \.php$ {
        fastcgi_keep_conn on;
        fastcgi_pass fcgi_backend;
        ...
    }
}
The TCP TIME_WAIT connection state has nothing to do with linger, TCP_NODELAY, timeouts, and so on. It is managed entirely by the OS kernel and can only be influenced by system-wide configuration options. Generally it is unavoidable; it is simply how the TCP protocol works. Read about it here and here.
The most radical way to avoid TIME_WAIT is to reset the TCP connection (send an RST packet) on close by setting linger=ON with linger_timeout=0. Doing it this way is not recommended for normal operation, though, as you might lose unsent data. Only reset a socket under error conditions (timeouts, etc.).
What I would try is the following: after you have sent all your data, call socket.shutdown(WRITE) (this sends a FIN packet to the other party) but do not close the socket yet. Then keep reading from the socket until you receive an indication that the connection has been closed by the other end (in C that is typically indicated by a 0-length read()). After receiving this indication, close the socket. Read more about it here.
If you are developing any sort of network server, you must study options like SO_LINGER and TCP_NODELAY, as they do affect performance. Without them you might experience ~20 ms delays (Nagle delay) on every packet sent. Although these delays look small, they can adversely affect your requests-per-second statistics. A good read about it is here.
Another good reference to read about sockets is here.
I would agree with the other commenters that using the FastCGI protocol to your backend might not be a very good idea. If you care about performance, you should implement your own nginx module (or, if that seems too difficult, a module for some other server such as NXWEB). Otherwise use HTTP: it is easier to implement and far more versatile than FastCGI. And I would not say HTTP is much slower than FastCGI.