I want to create a C++ server/client pair that maximizes throughput over a TCP socket on localhost. As a first step, I used iperf to find out what the maximum achievable bandwidth is on my i7 MacBook Pro.
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 256 KByte (default)
------------------------------------------------------------
[ 4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 51583
[ 4] 0.0-120.0 sec 329 GBytes 23.6 Gbits/sec
Without any tweaking, iperf showed me that I can reach at least 23.2 GBit/s. I then wrote my own C++ server/client implementation; you can find the full code here: https://gist.github.com/1116635
In that code I basically transfer a 1024-byte int array (256 ints) with each read/write operation. So my send loop on the server looks like this:
int n;
int x[256];                              // 256 * 4 bytes = 1 KiB payload

// fill the int array once; the same buffer is sent repeatedly
for (int i = 0; i < 256; i++)
{
    x[i] = i;
}

// 4M writes of 1 KiB each = 4 GiB in total
for (int i = 0; i < (4 * 1024 * 1024); i++)
{
    n = write(sock, x, sizeof(x));
    if (n < 0) error("ERROR writing to socket");
}
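For context, that loop sits inside fairly standard BSD-socket server boilerplate. Here is a minimal sketch of what the surrounding setup might look like (names such as sock, listenfd and the error() helper are my shorthand here; the exact code is in the gist):

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

// helper assumed to behave like the error() used in the loop above
static void error(const char *msg) { perror(msg); exit(1); }

int main(int argc, char *argv[])
{
    int portno = (argc > 1) ? atoi(argv[1]) : 1234;

    // create a listening TCP socket
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    if (listenfd < 0) error("ERROR opening socket");

    sockaddr_in serv_addr;
    memset(&serv_addr, 0, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = INADDR_ANY;
    serv_addr.sin_port = htons(portno);

    if (bind(listenfd, (sockaddr *) &serv_addr, sizeof(serv_addr)) < 0)
        error("ERROR on binding");
    listen(listenfd, 5);

    // accept one client; 'sock' is the descriptor the send loop writes to
    int sock = accept(listenfd, NULL, NULL);
    if (sock < 0) error("ERROR on accept");

    // ... fill x[] and run the send loop shown above ...

    close(sock);
    close(listenfd);
    return 0;
}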
My receive loop on the client looks like this:
int n;
int x[256];                              // 1 KiB receive buffer

// expect 4M reads of 1 KiB each = 4 GiB in total
for (int i = 0; i < (4 * 1024 * 1024); i++)
{
    n = read(sockfd, x, sizeof(int) * 256);
    if (n < 0) error("ERROR reading from socket");
}
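I am aware that read() on a stream socket may return fewer bytes than requested, so counting iterations does not strictly guarantee that the full 4 GiB has arrived. A variant that tracks the received byte count instead would look something like this (just a sketch, reusing sockfd and error() from above):

long long expected = 4LL * 1024 * 1024 * 1024;   // 4 GiB in total
long long received = 0;
int x[256];

while (received < expected)
{
    ssize_t n = read(sockfd, x, sizeof(x));
    if (n < 0) error("ERROR reading from socket");
    if (n == 0) break;                           // server closed the connection
    received += n;
}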
As mentioned in the title, running this (compiled with -O3) results in the following execution time, which works out to only about 3.6 GBit/s:
./client 127.0.0.1 1234
Elapsed time for Reading 4GigaBytes of data over socket on localhost: 9578ms
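The elapsed time is just a wall-clock measurement wrapped around the receive loop; the conversion to a bit rate is essentially the following (a gettimeofday-based sketch, the exact timing code is in the gist):

#include <sys/time.h>
#include <cstdio>

// wall-clock timing around the receive loop, then convert to Gbit/s
timeval t_start, t_stop;
gettimeofday(&t_start, NULL);
// ... receive loop ...
gettimeofday(&t_stop, NULL);

double seconds = (t_stop.tv_sec - t_start.tv_sec)
               + (t_stop.tv_usec - t_start.tv_usec) / 1e6;
double bytes   = 4.0 * 1024 * 1024 * 1024;       // 4 GiB transferred
printf("Elapsed: %.0f ms -> %.2f Gbit/s\n",
       seconds * 1000.0, bytes * 8.0 / seconds / 1e9);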
Where do I lose the bandwidth? What am I doing wrong? Again, the full code can be seen here: https://gist.github.com/1116635
Any help is appreciated!