I am encrypting the user's input to generate a password string, but one line of code gives different results in different versions of the framework. Partial code, with the value of the key pressed by the user:
Key pressed: 1. The variable ascii is 49. The values of e and n after some calculation:
e = 103,
n = 143,
Math.Pow(ascii, e) % n
Result of the above code:
In .NET 3.5 (C#), Math.Pow(ascii, e) % n gives 9.0.
In .NET 4 (C#), Math.Pow(ascii, e) % n gives 77.0.
Math.Pow() gives the correct (same) result in both versions.
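For reference, here is a minimal sketch of the computation described above; the ascii, e, and n values are the ones stated, and the outputs in the comment are the ones I observed, not something guaranteed by the framework:

using System;

class Repro
{
    static void Main()
    {
        int ascii = 49; // key '1'
        int e = 103;
        int n = 143;

        // Reportedly prints 9 when run on .NET 3.5 and 77 on .NET 4.
        Console.WriteLine(Math.Pow(ascii, e) % n);
    }
}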
What is the cause, and is there a solution?
Math.Pow works on double-precision floating-point numbers; thus, you shouldn't expect more than the first 15–17 digits of the result to be accurate:

All floating-point numbers also have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Double value has up to 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
However, modulo arithmetic requires all digits to be accurate. In your case, you are computing 49^103, whose result consists of 175 digits, making the modulo operation meaningless in both your answers.
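To illustrate (a minimal sketch, not part of your original code), the following compares the double returned by Math.Pow with the exact 175-digit integer; only the leading digits agree, so taking % 143 of the double cannot produce a meaningful remainder:

using System;
using System.Numerics;

class PrecisionDemo
{
    static void Main()
    {
        double approx = Math.Pow(49, 103);           // rounded to ~15-17 significant digits
        BigInteger exact = BigInteger.Pow(49, 103);  // all 175 digits, exact

        Console.WriteLine(approx.ToString("E16"));   // the double, in scientific notation
        Console.WriteLine(exact);                    // the full 175-digit integer

        Console.WriteLine(approx % 143);             // depends entirely on how the double was rounded
        Console.WriteLine((int)(exact % 143));       // 114, the true remainder
    }
}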
To work out the correct value, you should use arbitrary-precision arithmetic, as provided by the BigInteger class (in the System.Numerics namespace, introduced in .NET 4.0).
int val = (int)(BigInteger.Pow(49, 103) % 143); // gives 114
Edit: As pointed out by Mark Peters in the comments below, you should use the BigInteger.ModPow method, which is intended specifically for this kind of operation:
int val = (int)BigInteger.ModPow(49, 103, 143); // gives 114
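For completeness, here is a hypothetical helper showing how the key-press scenario from your question might use ModPow; the method name EncryptKey and the way the key is passed in are assumptions for illustration, not code from the question:

using System;
using System.Numerics;

class KeyEncryption
{
    // Hypothetical helper: encrypts the ASCII code of a pressed key as ascii^e mod n,
    // using the e and n values from the question.
    static int EncryptKey(char key, int e, int n)
    {
        int ascii = key;                            // '1' -> 49
        return (int)BigInteger.ModPow(ascii, e, n); // exact modular exponentiation
    }

    static void Main()
    {
        Console.WriteLine(EncryptKey('1', 103, 143)); // 114 on any framework that provides BigInteger (.NET 4.0+)
    }
}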