How is the derivative of a function f(x) typically calculated programmatically to give maximum accuracy?
I am implementing the Newton-Raphson method, and it requires taking the derivative of a function.
I agree with @erikkallen that (f(x + h) - f(x - h)) / (2 * h) is the usual approach for numerically approximating derivatives. However, getting the right step size h is a little subtle.
The approximation error in (f(x + h) - f(x - h)) / (2 * h) decreases as h gets smaller, which suggests you should take h as small as possible. But as h gets smaller, the error from floating point subtraction increases, since the numerator requires subtracting nearly equal numbers. If h is too small, you can lose a lot of precision in the subtraction. So in practice you have to pick a not-too-small value of h that minimizes the combination of approximation error and numerical error.
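To make the trade-off concrete, here is a minimal sketch in C of the central-difference approximation; the name `central_diff` and the function-pointer interface are just illustrative choices, not part of any particular library:

```c
/* Central-difference approximation of f'(x).
 * The step h trades truncation error (which shrinks as h shrinks)
 * against cancellation error in f(x + h) - f(x - h) (which grows
 * as h shrinks), so h must not be taken too small. */
double central_diff(double (*f)(double), double x, double h)
{
    return (f(x + h) - f(x - h)) / (2.0 * h);
}
```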
As a rule of thumb, you can try h = SQRT(DBL_EPSILON), where DBL_EPSILON is the smallest double precision number e such that 1 + e != 1 in machine precision. DBL_EPSILON is about 2.2 * 10^-16, so you could use h on the order of 10^-7 or 10^-8.
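Assuming C, where DBL_EPSILON lives in <float.h>, the rule of thumb might look like the sketch below. Scaling the step by (1 + |x|) is an extra refinement beyond the bare SQRT(DBL_EPSILON) rule, added so the step stays sensible when x is large; treat it as one reasonable choice rather than the canonical one.

```c
#include <float.h>   /* DBL_EPSILON */
#include <math.h>    /* sqrt, fabs */

/* Step size for central differences: roughly sqrt(DBL_EPSILON),
 * scaled so it remains proportionate to the magnitude of x. */
double step_size(double x)
{
    return sqrt(DBL_EPSILON) * (1.0 + fabs(x));   /* ~1.5e-8 near x = 0 */
}
```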
For more details, see these notes on picking the step size for differential equations.
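Since the question is about Newton-Raphson, here is a hypothetical end-to-end sketch combining the numerical derivative with the iteration x <- x - f(x)/f'(x); the stopping tolerance, iteration cap, and example function are arbitrary choices made for the example, not prescriptions.

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

/* Central-difference derivative with the step-size rule of thumb above. */
static double derivative(double (*f)(double), double x)
{
    double h = sqrt(DBL_EPSILON) * (1.0 + fabs(x));
    return (f(x + h) - f(x - h)) / (2.0 * h);
}

/* Newton-Raphson: repeatedly apply x <- x - f(x)/f'(x) until the
 * update is negligible or the iteration budget runs out. */
static double newton_raphson(double (*f)(double), double x0, int max_iter)
{
    double x = x0;
    for (int i = 0; i < max_iter; i++) {
        double dx = f(x) / derivative(f, x);
        x -= dx;
        if (fabs(dx) < 1e-12 * (1.0 + fabs(x)))
            break;
    }
    return x;
}

/* Example: the positive root of x^2 - 2, i.e. sqrt(2) ~ 1.414213562. */
static double f(double x) { return x * x - 2.0; }

int main(void)
{
    printf("root = %.15f\n", newton_raphson(f, 1.0, 50));
    return 0;
}
```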