I am looking for quantitative estimates of clock offsets between VMs on Windows Azure - assuming that all VMs are hosted in the same datacenter. I am guesstimating that the average clock offset between one VM and another is below 10 seconds, but I am not even sure it's a guaranteed property of the Azure cloud.
Does anybody have quantitative measurements on that matter?
I have finally settled on doing some experiments of my own.
A few facts concerning the experiment protocol:

- Latency, as measured with a `Stopwatch`, was always lower than 1ms for minimalistic unauthenticated requests (basically the HTTP requests were coming back with 400 errors, but still with `Date:` available in the HTTP headers).

Results:
So technically we are not too far from the 2s tolerance target, although for intra-datacenter sync you don't have to push the experiment far to observe offsets close to 4s. If we assume a normal (aka Gaussian) distribution of the clock offsets, then I would say that relying on any clock threshold lower than 6s is bound to lead to scheduling issues.
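To make the Gaussian argument concrete: a conservative threshold can be sketched as the mean offset plus three standard deviations, which covers roughly 99.7% of offsets under the normality assumption. The helper below is a hypothetical illustration (the sample values in the usage comment are placeholders, not my measurements):

```csharp
using System;
using System.Linq;

public static class OffsetThreshold
{
    // Estimate a clock threshold as mean + 3 standard deviations of the
    // measured offsets, assuming they are roughly normally distributed.
    public static double EstimateThresholdSeconds(double[] offsetsSeconds)
    {
        double mean = offsetsSeconds.Average();
        double variance = offsetsSeconds
            .Sum(o => (o - mean) * (o - mean)) / offsetsSeconds.Length;
        return mean + 3.0 * Math.Sqrt(variance);
    }
}

// Usage (placeholder offsets, in seconds):
//   var threshold = OffsetThreshold.EstimateThresholdSeconds(
//       new[] { 1.0, 2.0, 3.0 });
```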
using System;
using System.Globalization;
using System.IO;
using System.Net.Sockets;

/// <summary>
/// Substitute for proper NTP (Network Time Protocol)
/// when UDP is not available, as on Windows Azure.
/// </summary>
public class HttpTimeChecker
{
    public static DateTime GetUtcNetworkTime(string server)
    {
        // HACK: we can't use WebClient here, because we get a faulty HTTP response.
        // We don't care about the HTTP error; the only thing that matters is the
        // presence of the 'Date:' HTTP header.
        string response;
        using (var tc = new TcpClient())
        {
            tc.Connect(server, 80);
            using (var ns = tc.GetStream())
            {
                var sw = new StreamWriter(ns);
                var sr = new StreamReader(ns);

                // HTTP requires CRLF line endings.
                var req = "GET / HTTP/1.0\r\n" +
                          "Host: " + server + "\r\n" +
                          "\r\n";
                sw.Write(req);
                sw.Flush();
                response = sr.ReadToEnd();
            }
        }

        foreach (var line in response.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries))
        {
            if (line.StartsWith("Date: ", StringComparison.Ordinal))
            {
                // 'Date: ' is 6 characters; the remainder is an RFC 1123 timestamp.
                return DateTime.Parse(line.Substring(6), CultureInfo.InvariantCulture,
                    DateTimeStyles.AdjustToUniversal);
            }
        }

        throw new ArgumentException("No date to be retrieved among HTTP headers.", "server");
    }
}
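For reference, the parsing step relies on `DateTime.Parse` understanding the RFC 1123 date format emitted by HTTP servers. A self-contained check against a fixed sample header (not a live response):

```csharp
using System;
using System.Globalization;

class DateHeaderDemo
{
    static void Main()
    {
        // A sample 'Date:' header in the RFC 1123 format used by HTTP.
        var line = "Date: Tue, 15 Nov 1994 08:12:31 GMT";

        // 'Date: ' is 6 characters; AdjustToUniversal yields a UTC DateTime.
        var utc = DateTime.Parse(line.Substring(6), CultureInfo.InvariantCulture,
            DateTimeStyles.AdjustToUniversal);

        Console.WriteLine(utc.ToString("o")); // 1994-11-15T08:12:31.0000000Z
    }
}
```

Note that `InvariantCulture` matters here: the default overload of `DateTime.Parse` is locale-dependent and could misread the English month abbreviations on a non-English VM.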