How does one choose the size of the buffer (the bytes I read from or write to the socket) for maximum throughput when implementing a low-level HTTP or FTP transfer? My application transfers data over HTTP or FTP on connections ranging from 130 Kbps to 3 Mbps (I know the expected speed beforehand). Sometimes it's a one-way transfer, sometimes it goes in both directions. Should I stick with some average buffer size, or must I vary it depending on the connection speed?
Thanks.
Choose a buffer size over 8 KB. 9000 bytes is typically the largest MTU (maximum transmission unit) in use, even on the fastest networks.
When you use a buffer larger than the MTU of the connection, the operating system will break it down into MTU-sized pieces as needed, so anything beyond the MTU has little effect on network performance.
However, a larger buffer can still affect performance elsewhere: if you're transferring files, larger buffers can improve disk read performance, and thus the overall speed of your application.
So picking a nice round number like 16 KB is usually a good idea. Definitely don't go under 1500, as that can negatively affect network performance (it may cause the operating system to send small packets, which reduces throughput on the network).
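For illustration, here's a minimal sketch (in Python, but the same pattern applies in any language) of transfer loops using a fixed 16 KB buffer in both directions. The function names, the file path argument, and the already-connected socket are assumptions for the example, not part of your code:

```python
import socket

BUF_SIZE = 16 * 1024  # 16 KB: comfortably above any MTU, still a nice round number


def download_to_file(sock: socket.socket, path: str) -> int:
    """Copy everything readable from an already-connected socket into a file."""
    total = 0
    with open(path, "wb") as out:
        while True:
            chunk = sock.recv(BUF_SIZE)   # the OS returns at most BUF_SIZE bytes
            if not chunk:                 # empty result means the peer closed the connection
                break
            out.write(chunk)
            total += len(chunk)
    return total


def upload_from_file(sock: socket.socket, path: str) -> int:
    """Copy a local file to an already-connected socket in BUF_SIZE chunks."""
    total = 0
    with open(path, "rb") as src:
        while True:
            chunk = src.read(BUF_SIZE)    # large reads keep the disk side efficient
            if not chunk:
                break
            sock.sendall(chunk)           # sendall loops until the whole chunk is handed to the OS
            total += len(chunk)
    return total
```

The OS splits each 16 KB write into MTU-sized packets on its own, so there's little point in tuning this number per connection speed; a single fixed size above 8 KB is generally fine for both your 130 Kbps and 3 Mbps links.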