Suppose I write data really fast [I have all the data in memory] to a blocking socket. Further suppose the other side will read the data very slowly [like sleeping 1 second between each read].
What is the expected behavior on the writing side in this case? Would the write operation block until the other side reads enough data, or will the write return an error like "connection reset"?
For a blocking socket, the send() call will block until all the data has been copied into the networking stack's buffer for that connection. It does not have to be received by the other side. The size of this buffer is implementation dependent.
Data is cleared from the buffer when the remote side acknowledges it. This is an OS thing and is not dependent upon the remote application actually reading the data. The size of this buffer is also implementation dependent.
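For reference, you can inspect (and request changes to) those buffer sizes with the standard SO_SNDBUF / SO_RCVBUF socket options. A minimal sketch, assuming a POSIX system; the defaults, and whether the kernel honors a requested size, are system-specific:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int sndbuf = 0, rcvbuf = 0;
    socklen_t len = sizeof(sndbuf);

    /* Ask the OS how big this socket's send buffer is. */
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == 0)
        printf("SO_SNDBUF: %d bytes\n", sndbuf);

    len = sizeof(rcvbuf);
    /* Same question for the receive buffer. */
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
        printf("SO_RCVBUF: %d bytes\n", rcvbuf);

    /* You can request a different size; the kernel may round or clamp it
       (Linux, for example, typically doubles the value you pass in). */
    int wanted = 64 * 1024;
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &wanted, sizeof(wanted)) != 0)
        perror("setsockopt");

    close(fd);
    return 0;
}
```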
When the remote buffer is full, it tells your local stack to stop sending. When data is cleared from the remote buffer (by being read by the remote application) then the remote system will inform the local system to send more data.
In both cases, small systems (like embedded systems) may have buffers of a few KB or smaller, while modern servers may have buffers of a few MB or larger.
Once space is available in the local buffer, more data from your send() call will be copied. Once all of that data has been copied, your call will return.
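One caveat worth knowing: even on a blocking socket, send() can return a short count (for example if it's interrupted by a signal after part of the data was copied), so looping until everything has been handed to the kernel is the usual defensive pattern. A generic sketch:

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <stddef.h>

/* Keep calling send() until every byte has been copied into the kernel's
   send buffer. Each call may block while it waits for buffer space. */
ssize_t send_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;      /* interrupted before anything was written */
            return -1;         /* real error, e.g. EPIPE / ECONNRESET */
        }
        sent += (size_t)n;
    }
    return (ssize_t)sent;
}
```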
You won't get a "connection reset" error (from the OS -- libraries may do anything) unless the connection actually does get reset.
So... It really doesn't matter how quickly the remote application is reading data until you've sent as much data as both local & remote buffer sizes combined. After that, you'll only be able to send() as quickly as the remote side will recv().
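If you want to see the "sleep 1 second between reads" scenario from the question in action, here is a rough self-contained loopback demo (POSIX assumed, error checking stripped for brevity, and the buffer sizes pinned small so the effect shows up after only a few sends). The first few send() calls return immediately because they only fill buffers; after that, each one blocks for roughly as long as the slow reader takes to drain a chunk:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define CHUNK (64 * 1024)

int main(void)
{
    /* Loopback listener on an ephemeral port. */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 1);
    socklen_t alen = sizeof(addr);
    getsockname(lfd, (struct sockaddr *)&addr, &alen);

    if (fork() == 0) {              /* child: the slow reader */
        close(lfd);
        int cfd = socket(AF_INET, SOCK_STREAM, 0);
        int small = 64 * 1024;      /* keep the receive buffer small so the
                                       stall shows up after a few sends */
        setsockopt(cfd, SOL_SOCKET, SO_RCVBUF, &small, sizeof(small));
        connect(cfd, (struct sockaddr *)&addr, sizeof(addr));
        char buf[CHUNK];
        while (recv(cfd, buf, sizeof(buf), 0) > 0)
            sleep(1);               /* drain roughly 64 KB per second */
        return 0;
    }

    int sfd = accept(lfd, NULL, NULL);   /* parent: the fast writer */
    int small = 64 * 1024;
    setsockopt(sfd, SOL_SOCKET, SO_SNDBUF, &small, sizeof(small));

    char chunk[CHUNK];
    memset(chunk, 'x', sizeof(chunk));
    for (int i = 0; i < 32; i++) {
        time_t t0 = time(NULL);
        ssize_t n = send(sfd, chunk, sizeof(chunk), 0);
        printf("send #%2d: %zd bytes, blocked ~%ld s\n",
               i, n, (long)(time(NULL) - t0));
    }
    close(sfd);
    wait(NULL);
    return 0;
}
```

On a typical run the first handful of sends report "blocked ~0 s" (they just filled the two buffers), and every send after that reports a delay of roughly a second, i.e. the writer is now paced entirely by the reader.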