FTP is a pure TCP-connect protocol, and thus AFAIK "as fast as it gets" when considering TCP file transfer options.
However, there are some other products that do not run over TCP - examples are the commercial products BI.DAN-GUN, fasp and FileCatalyst. The latter product points out problems with pure TCP, and one can read more on Wikipedia, e.g. starting from Network Congestion.
What other alternatives are there? ... in particular Open Source ones? Also, one would think there would be an RFC of sorts for this - a standard protocol specifically for transferring large files, probably running over UDP. Anyone know of such a protocol, or an initiative? (Google's SPDY is interesting, but doesn't directly address fast large-file transfer.)
Why do you think using TCP makes the transfer slower? TCP is usually able to use all available bandwidth. Using UDP instead is unlikely to be faster. In fact, if you tried to make a reliable UDP-based file transfer, you'd likely end up implementing an inferior alternative to TCP - since you'd have to implement the reliability yourself.
What is problematic about FTP is that it performs multiple synchronous request-response commands for every file you transfer, and opens a new data connection for each file. This makes the transfer of many small files extremely inefficient, because much of the time is spent waiting for requests/responses and establishing data connections instead of actually transferring data.
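To illustrate, the per-file portion of a passive-mode FTP retrieval looks roughly like this (the exact response texts vary by server; this is only a sketch of the exchange):

client: PASV
server: 227 Entering Passive Mode (192,168,1,10,195,149).
        (the client now opens a new TCP data connection to the advertised address and port)
client: RETR file1.txt
server: 150 Opening BINARY mode data connection.
        (the file contents flow over the data connection, which is then closed)
server: 226 Transfer complete.

Every one of these control-channel round trips, plus the handshake for the fresh data connection, is repeated for each and every file.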
A simple way to get around this issue is to pack the files/folders into an archive. While you can, of course, just make the archive, send it using FTP or similar, and unpack it on the other side, the time spent packing and unpacking may be unacceptable. You can avoid this delay by doing the packing and unpacking on-line. I'm not aware of any software that integrates such on-line packing/unpacking. You can, however, simply use the nc and tar programs in a pipeline (Linux; on Windows use Cygwin):
First run on the receiver:
nc -l -p 7000 | tar xf - -C <destination_folder>
This will make the receiver wait for a connection on port number 7000. Then run on the sender:
cd /some/folder
tar cf - ./* | nc -q0 <ip_address_of_receiver> 7000
This will make the sender connect to the receiver and start the transfer. The sender creates the tar archive, sends it to the receiver, and the receiver extracts it - all at the same time. If you need to, you can reverse the roles of sender and receiver (by having the receiver connect to the sender).
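If the data compresses well and the network is the bottleneck, you can add compression to the same pipeline with a small variation of the commands above (gzip via tar's z flag is just one choice of compressor):

Receiver:
nc -l -p 7000 | tar xzf - -C <destination_folder>

Sender:
cd /some/folder
tar czf - ./* | nc -q0 <ip_address_of_receiver> 7000

Whether this helps depends on whether the CPU can compress faster than the link can transmit; for already-compressed data it only adds overhead.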
This online-tar approach has neither of the two performance issues of FTP: it doesn't perform any request-response commands, and it uses only a single TCP connection.
However, note that this is not secure: anybody could connect to the receiver before your sender does and send it their own tar archive. If this is an issue, a VPN can be used, in combination with appropriate firewall rules.
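For example, on Linux you could restrict the listening port to the sender's address with iptables rules along these lines (the port and the <ip_address_of_sender> placeholder are only illustrative; adapt them to your setup):

# accept connections to port 7000 only from the sender, drop everything else
iptables -A INPUT -p tcp --dport 7000 -s <ip_address_of_sender> -j ACCEPT
iptables -A INPUT -p tcp --dport 7000 -j DROP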
EDIT: you mentioned packet loss as a problem with TCP performance, and it is a significant one, if the FileCatalyst page is to be believed. It is true that TCP may perform sub-optimally on links with high packet loss. This is because TCP reacts aggressively to packet loss, since it assumes the loss is due to congestion; see Additive_increase/multiplicative_decrease. I'm not aware of any free/open source file transfer programs that attempt to overcome this with custom protocols. You may, however, try out different TCP congestion avoidance algorithms. In particular, try Vegas, which does not use packet loss as a signal to reduce the transmission rate.
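On Linux, the congestion control algorithm can be switched system-wide roughly like this (assuming the tcp_vegas module is available for your kernel):

# load the Vegas module and make it the default congestion control algorithm
modprobe tcp_vegas
sysctl -w net.ipv4.tcp_congestion_control=vegas
# check which algorithms are currently available
sysctl net.ipv4.tcp_available_congestion_control

An application can also select the algorithm for its own sockets via the TCP_CONGESTION socket option, but that requires changing the program.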