When should I use double instead of decimal?

Jamie Ide · Apr 29, 2009 · Viewed 55.1k times

I can name three advantages to using double (or float) instead of decimal:

  1. Uses less memory.
  2. Faster because floating point math operations are natively supported by processors.
  3. Can represent a larger range of numbers.

But these advantages seem to apply only to calculation-intensive operations, such as those found in modeling software. Of course, doubles should not be used when precision is required, such as in financial calculations. So are there any practical reasons to ever choose double (or float) instead of decimal in "normal" applications?
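A minimal C# sketch of points 1 and 3, using the documented .NET sizes and ranges (the values in the comments are approximate):

```csharp
using System;

class SizeAndRange
{
    static void Main()
    {
        // Point 1: a double occupies half the memory of a decimal.
        Console.WriteLine(sizeof(double));    // 8 (bytes)
        Console.WriteLine(sizeof(decimal));   // 16 (bytes)

        // Point 3: double covers a vastly larger range than decimal.
        Console.WriteLine(double.MaxValue);   // about 1.8E+308
        Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335 (about 7.9E+28)
    }
}
```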

Edited to add: Thanks for all the great responses, I learned from them.

One further question: A few people made the point that doubles can more precisely represent real numbers. When declared, I would think that they usually represent them more accurately as well. But is it a true statement that the accuracy may decrease (sometimes significantly) when floating-point operations are performed?

Answer

Noldorin · Apr 29, 2009

I think you've summarised the advantages quite well. You are, however, missing one point. The decimal type is only more accurate at representing base-10 numbers (e.g. those used in currency/financial calculations). In general, the double type is going to offer at least as great precision (someone correct me if I'm wrong) and definitely greater speed for arbitrary real numbers. The simple conclusion is: when considering which to use, always use double unless you need the base-10 accuracy that decimal offers.
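To make the base-10 point concrete, here is a small C# sketch (the values shown in the comments assume the standard round-trip formatting of double):

```csharp
using System;

class BaseTenAccuracy
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary representation, so their
        // double sum is not exactly 0.3.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);        // False
        Console.WriteLine(d.ToString("R")); // 0.30000000000000004

        // decimal stores base-10 digits directly, so the same sum is exact.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);       // True
        Console.WriteLine(m);               // 0.3
    }
}
```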

Edit:

Regarding your additional question about the decrease in accuracy of floating-point numbers after operations, this is a slightly more subtle issue. Indeed, precision (I use the term interchangeably with accuracy here) will steadily decrease as operations are performed. This happens for two reasons:

  1. certain numbers (most obviously decimal fractions such as 0.1) can't be exactly represented in binary floating-point form;
  2. rounding errors occur, just as if you were doing the calculation by hand. Whether these errors are significant enough to warrant much thought depends greatly on the context (how many operations you're performing), as the sketch below illustrates.
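As a rough illustration of how those rounding errors accumulate, here is a minimal C# sketch (the exact digits printed are what a typical IEEE 754 double run produces; the point is simply that the result is not exactly 1000):

```csharp
using System;

class AccumulatedError
{
    static void Main()
    {
        // Add 0.1 ten thousand times; each addition rounds to the nearest
        // representable double, and the tiny errors accumulate.
        double sum = 0.0;
        for (int i = 0; i < 10_000; i++)
        {
            sum += 0.1;
        }

        Console.WriteLine(sum.ToString("R")); // something like 1000.0000000001588
        Console.WriteLine(sum == 1000.0);     // False
    }
}
```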

In all cases, if you want to compare two floating-point numbers that should in theory be equivalent (but were arrived at using different calculations), you need to allow a certain degree of tolerance (how much varies, but is typically very small).
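In C#, that comparison is typically written with a small helper along these lines (NearlyEqual and the epsilon value are illustrative choices, not a standard API; pick a tolerance appropriate to the magnitude of your values):

```csharp
using System;

class ToleranceComparison
{
    // Hypothetical helper: compares with a tolerance instead of ==, since two
    // mathematically equivalent computations can yield slightly different doubles.
    static bool NearlyEqual(double a, double b, double epsilon = 1e-9)
    {
        return Math.Abs(a - b) <= epsilon;
    }

    static void Main()
    {
        double x = 0.1 + 0.2;
        double y = 0.3;

        Console.WriteLine(x == y);            // False
        Console.WriteLine(NearlyEqual(x, y)); // True
    }
}
```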

For a more detailed overview of the particular cases where accuracy errors can be introduced, see the Accuracy section of the Wikipedia article on floating point. Finally, if you want a seriously in-depth (and mathematical) discussion of floating-point numbers/operations at machine level, try reading the oft-quoted article What Every Computer Scientist Should Know About Floating-Point Arithmetic.