Are doubles faster than floats in C#?

Trap · Oct 1, 2008

I'm writing an application which reads large arrays of floats and performs some simple operations on them. I'm using floats because I thought they'd be faster than doubles, but after doing some research I've found that there's some confusion about this topic. Can anyone elaborate?

Answer

user7116 · Oct 1, 2008

The short answer is, "use whichever precision is required for acceptable results."

Your one guarantee is that operations performed on floating-point data are done in at least the precision of the highest-precision member of the expression. So multiplying two floats is done with at least float precision, and multiplying a float and a double is done with at least double precision. The standard states that "[floating-point] operations may be performed with higher precision than the result type of the operation."
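For example, here's a minimal sketch (the values are arbitrary) showing how the compiler promotes a mixed expression to double while a float-only expression stays float:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        float a = 1.0000001f;
        float b = 3.0f;
        double d = 3.0;

        var ff = a * b; // float * float  -> done as a float multiplication
        var fd = a * d; // float * double -> 'a' is converted to double first,
                        //                   and the result type is double

        Console.WriteLine(ff.GetType()); // System.Single
        Console.WriteLine(fd.GetType()); // System.Double
    }
}
```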

Given that the JIT for .NET attempts to leave your floating-point operations in the precision requested, we can take a look at Intel's documentation for speeding up our operations. On the Intel platform your floating-point operations may be done at an intermediate precision of 80 bits and then converted down to the precision requested.
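If you want to see what this means for your own arrays, a quick-and-dirty Stopwatch sketch along these lines will do. The array size and the multiply-accumulate loop are just placeholders; build in Release mode and repeat the runs before trusting the numbers:

```csharp
using System;
using System.Diagnostics;

class FloatVsDoubleTiming
{
    const int N = 10000000;

    static void Main()
    {
        // Fill one float array and one double array with the same random data.
        float[] floats = new float[N];
        double[] doubles = new double[N];
        Random rng = new Random(42);
        for (int i = 0; i < N; i++)
        {
            doubles[i] = rng.NextDouble();
            floats[i] = (float)doubles[i];
        }

        // Time a simple multiply-accumulate pass over the float array.
        Stopwatch sw = Stopwatch.StartNew();
        float fSum = 0f;
        for (int i = 0; i < N; i++)
            fSum += floats[i] * 0.5f;
        sw.Stop();
        Console.WriteLine("float : {0} ms (sum = {1})", sw.ElapsedMilliseconds, fSum);

        // Time the same pass over the double array.
        sw.Reset();
        sw.Start();
        double dSum = 0.0;
        for (int i = 0; i < N; i++)
            dSum += doubles[i] * 0.5;
        sw.Stop();
        Console.WriteLine("double: {0} ms (sum = {1})", sw.ElapsedMilliseconds, dSum);
    }
}
```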

From Intel's guide to C++ Floating-point Operations¹ (sorry, I only have the dead-tree version), they mention:

  • Use a single precision type (for example, float) unless the extra precision obtained through double or long double is required. Greater precision types increase memory size and bandwidth requirements. ...
  • Avoid mixed data type arithmetic expressions

That last point is important: you can slow yourself down with unnecessary casts to and from float and double, which produce JIT'd code that asks the x87 to convert away from its 80-bit intermediate format between operations!
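For example, here's a sketch of the kind of mixed-type loop to avoid next to a single-precision version. The scaling routine and the 0.5 constant are made up for illustration:

```csharp
using System;

class MixedTypeDemo
{
    // Mixed types: the double literal promotes each element to double, and the
    // result has to be cast back down to float on every iteration.
    static void ScaleMixed(float[] data)
    {
        for (int i = 0; i < data.Length; i++)
            data[i] = (float)(data[i] * 0.5);   // float -> double -> float
    }

    // Consistent types: everything stays in single precision.
    static void ScaleSingle(float[] data)
    {
        for (int i = 0; i < data.Length; i++)
            data[i] *= 0.5f;                    // float all the way through
    }

    static void Main()
    {
        float[] a = { 1f, 2f, 3f };
        float[] b = { 1f, 2f, 3f };
        ScaleMixed(a);
        ScaleSingle(b);
        Console.WriteLine("{0} {1} {2}", b[0], b[1], b[2]); // 0.5 1 1.5
    }
}
```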

1. Yes, it says C++, but the C# standard plus some knowledge of the CLR tells us that the information for C++ should be applicable in this instance.