Determine the decimal precision of an input number

Quinten · Jul 19, 2010

We have an interesting problem where we need to determine the decimal precision of a user's input (a textbox). Essentially we need to know the number of decimal places entered and then return a corresponding precision number; this is best illustrated with examples:

4500 entered will yield a result of 1
4500.1 entered will yield a result of 0.1
4500.00 entered will yield a result of 0.01
4500.450 entered will yield a result of 0.001

We are thinking of working with the string, finding the decimal separator and then calculating the result from there. Just wondering if there is an easier solution to this.

Answer

Daniel Brückner · Jul 19, 2010

I think you should just do what you suggested and use the position of the decimal point. An obvious drawback is that you have to think about internationalization yourself.

// Requires a using System.Globalization directive for NumberFormatInfo.
var decimalSeparator = NumberFormatInfo.CurrentInfo.CurrencyDecimalSeparator;

var position = input.IndexOf(decimalSeparator);

// No separator found means no decimal places were entered.
var precision = (position == -1) ? 0 : input.Length - position - 1;

// This may be quite imprecise.
var result = Math.Pow(0.1, precision);
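For reference, here is a minimal sketch wrapping the snippet above in a helper (the name GetPrecision is my own); I have used NumberDecimalSeparator here, which arguably fits plain numeric input better than CurrencyDecimalSeparator:

using System;
using System.Globalization;

static double GetPrecision(string input)
{
    var decimalSeparator = NumberFormatInfo.CurrentInfo.NumberDecimalSeparator;
    var position = input.IndexOf(decimalSeparator);
    var precision = (position == -1) ? 0 : input.Length - position - 1;
    return Math.Pow(0.1, precision);
}

// GetPrecision("4500")     -> 1
// GetPrecision("4500.1")   -> 0.1  (approximately, Math.Pow works in doubles)
// GetPrecision("4500.00")  -> 0.01
// GetPrecision("4500.450") -> 0.001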

There is another thing you could try: the decimal type stores an internal precision (scale) value. Therefore you could use Decimal.TryParse() and inspect the returned value; the parsing algorithm may well preserve the precision of the input.
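A quick sketch of that idea; Decimal.TryParse() also guards against invalid input, and the parsed value does keep its trailing zeros:

if (decimal.TryParse("4500.00", out var value))
{
    // The trailing zeros survive the parse: value.ToString() is "4500.00".
    // Its scale (2 here) can be read via decimal.GetBits, as shown in the
    // update below.
}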

Finally, I would suggest not trying anything based on floating-point numbers. Just parsing the input will remove any information about trailing zeros, so you would have to add an artificial non-zero digit to preserve them, or do similar tricks. You might run into precision issues, and recovering the precision from a floating-point number is not simple either: I see some ugly math, or a loop multiplying by ten every iteration until there is no longer any fractional part. And the loop comes with new precision issues of its own...
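For illustration only, a sketch of the loop warned about above (my own, and not a recommendation):

// The fragile floating-point approach: trailing zeros are already gone
// after parsing, and binary rounding can leave a spurious fractional part.
double d = double.Parse("4500.1");
int precision = 0;

// Cap the iterations so rounding errors cannot make the loop run away.
while (d != Math.Floor(d) && precision < 15)
{
    d *= 10;
    precision++;
}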

UPDATE

Parsing into a decimal works. See Decimal.GetBits() for details.

var input = "123.4560";

var number = Decimal.Parse(input);

// Will be 4 - the scale is stored in bits 16 to 23 of the flags element.
var precision = (Decimal.GetBits(number)[3] >> 16) & 0x000000FF;

From here, using Math.Pow(0.1, precision) is straightforward.
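Putting the pieces together, a minimal end-to-end sketch (the helper name GetPrecisionValue is my own):

static double GetPrecisionValue(string input)
{
    var number = decimal.Parse(input);

    // Bits 16 to 23 of the fourth element hold the scale,
    // i.e. the number of decimal places of the parsed value.
    var scale = (decimal.GetBits(number)[3] >> 16) & 0xFF;

    return Math.Pow(0.1, scale);
}

// GetPrecisionValue("4500")     -> 1
// GetPrecisionValue("4500.1")   -> 0.1
// GetPrecisionValue("4500.00")  -> 0.01
// GetPrecisionValue("4500.450") -> 0.001

This reproduces exactly the examples from the question (the non-integer results are only as exact as a double can represent them).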

UPDATE 2

Using decimal.GetBits() will allocate an int[] array. If you want to avoid that allocation, you can use the following helper method, which uses an explicit-layout struct to read the scale directly out of the decimal value:

// Requires a using System.Runtime.InteropServices directive for StructLayout.
static int GetScale(decimal d)
{
    return new DecimalScale(d).Scale;
}

[StructLayout(LayoutKind.Explicit)]
struct DecimalScale
{
    public DecimalScale(decimal value)
    {
        this = default;
        this.d = value;
    }

    // The int overlays the first four bytes of the decimal,
    // which hold its flags word (sign and scale).
    [FieldOffset(0)]
    decimal d;

    [FieldOffset(0)]
    int flags;

    // As above, the scale sits in bits 16 to 23 of the flags word.
    public int Scale => (flags >> 16) & 0xff;
}
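A usage sketch; note that the overlay relies on the flags word being the first field of decimal, which is an implementation detail of the runtime (recent .NET versions also expose a decimal.Scale property directly, if that is available to you):

var scale = GetScale(123.4560m);      // 4, with no array allocation
var result = Math.Pow(0.1, scale);    // 0.0001 (approximately)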