Quite often on SO I find myself benchmarking small chunks of code to see which implementation is fastest.
Equally often I see comments that benchmarking code does not take into account jitting or the garbage collector.
I have the following simple benchmarking function, which I have slowly evolved:
static void Profile(string description, int iterations, Action func) {
    // warm up
    func();
    // clean up
    GC.Collect();

    var watch = new Stopwatch();
    watch.Start();
    for (int i = 0; i < iterations; i++) {
        func();
    }
    watch.Stop();

    Console.Write(description);
    Console.WriteLine(" Time Elapsed {0} ms", watch.ElapsedMilliseconds);
}
Usage:
Profile("a descriptions", how_many_iterations_to_run, () =>
{
// ... code being profiled
});
Does this implementation have any flaws? Is it good enough to show that implementation X is faster than implementation Y over Z iterations? Can you think of any ways to improve it?
EDIT: It's pretty clear that a time-based approach (as opposed to a fixed number of iterations) is preferred. Does anyone have an implementation where the time checks do not impact performance?
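One option, sketched below (my own idea, not battle-tested): run the body in fixed-size batches and only read the Stopwatch between batches, so the cost of each time check is amortized over many calls. The ProfileForDuration name and the batchSize constant are illustrative choices, not part of the original function:

static double ProfileForDuration(string description, TimeSpan duration, Action func) {
    // warm up
    func();
    // clean up
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    const int batchSize = 1000; // consult the clock only once per 1000 calls
    long iterations = 0;
    var watch = Stopwatch.StartNew();
    while (watch.Elapsed < duration) {
        for (int i = 0; i < batchSize; i++) {
            func();
        }
        iterations += batchSize;
    }
    watch.Stop();

    double msPerCall = watch.Elapsed.TotalMilliseconds / iterations;
    Console.WriteLine("{0}: {1} calls in {2} ms ({3:F6} ms/call)",
        description, iterations, watch.Elapsed.TotalMilliseconds, msPerCall);
    return msPerCall;
}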
Here is the modified function, as recommended by the community. Feel free to amend it; it's a community wiki.
static double Profile(string description, int iterations, Action func) {
    // Run at highest priority to minimize fluctuations caused by other processes/threads
    Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
    Thread.CurrentThread.Priority = ThreadPriority.Highest;

    // warm up
    func();

    var watch = new Stopwatch();

    // clean up
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    watch.Start();
    for (int i = 0; i < iterations; i++) {
        func();
    }
    watch.Stop();

    Console.Write(description);
    Console.WriteLine(" Time Elapsed {0} ms", watch.Elapsed.TotalMilliseconds);
    return watch.Elapsed.TotalMilliseconds;
}
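Since the function now returns the measured time, comparing two candidates is straightforward; for instance (the iteration count and the lambda bodies below are placeholders):

double x = Profile("implementation X", 1000000, () => { /* candidate X */ });
double y = Profile("implementation Y", 1000000, () => { /* candidate Y */ });
Console.WriteLine(x < y ? "X is faster" : "Y is faster");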
Make sure you compile in Release with optimizations enabled, and run the tests outside of Visual Studio. This last part is important because the JIT holds back its optimizations when a debugger is attached, even in Release mode.
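If you want a runtime safety net rather than relying on memory, System.Diagnostics.Debugger.IsAttached reports whether a debugger is present; a small guard like this (my addition, not part of the function above) can warn you before you trust the numbers:

if (Debugger.IsAttached) {
    Console.WriteLine("Warning: debugger attached; the JIT will not fully optimize.");
}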