I have been using scipy.optimize.leastsq to fit some data. I would like to get some confidence intervals on these estimates, so I looked into the cov_x output, but the documentation is very unclear as to what this is and how to get the covariance matrix of my parameters from it.

First of all, it says that cov_x is a Jacobian, but the notes also say that "cov_x is a Jacobian approximation to the Hessian", so it is not actually a Jacobian but rather a Hessian built from some approximation to the Jacobian. Which of these statements is correct?
Secondly, this sentence is confusing to me:

This matrix must be multiplied by the residual variance to get the covariance of the parameter estimates – see curve_fit.
So I went and looked at the source code of curve_fit, where it does:
    s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0))
    pcov = pcov * s_sq
which corresponds to multiplying cov_x by s_sq, but I cannot find this equation in any reference. Can someone explain why this equation is correct?
My intuition tells me that it should be the other way around, since cov_x is supposed to be a derivative (Jacobian or Hessian), so I was thinking:

    cov_x * covariance(parameters) = sum of errors (residuals)

where covariance(parameters) is the thing I want.
How do I connect what curve_fit is doing with what I see at, e.g., Wikipedia: http://en.wikipedia.org/wiki/Propagation_of_uncertainty#Non-linear_combinations
OK, I think I found the answer. First the solution: cov_x * s_sq is simply the covariance of the parameters, which is what you want. Taking the square root of the diagonal elements gives you the standard deviation of each parameter (but be careful about the covariances!).

Residual variance = reduced chi square = s_sq = sum[(f(x) - y)^2] / (N - n), where N is the number of data points and n is the number of fitting parameters.

The reason for my confusion is that cov_x as given by leastsq is not actually what is called cov(x) elsewhere; rather, it is the reduced (or fractional) cov(x). The reason it does not show up in any of the other references is that it is a simple rescaling which is useful in numerical computations but is not relevant for a textbook.
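Putting this together, here is a minimal sketch of the whole recipe (the straight-line model, the data, and the residuals function are made up purely for illustration):

    import numpy as np
    from scipy.optimize import leastsq

    # Hypothetical model and data, just to show the bookkeeping.
    def residuals(p, x, y):
        a, b = p
        return y - (a * x + b)

    x = np.linspace(0, 10, 50)
    y = 3.0 * x + 1.0 + np.random.normal(scale=0.5, size=x.size)

    p0 = [1.0, 0.0]
    popt, cov_x, infodict, mesg, ier = leastsq(residuals, p0, args=(x, y),
                                               full_output=True)

    # Residual variance (reduced chi square): sum of squared residuals over (N - n).
    s_sq = (residuals(popt, x, y) ** 2).sum() / (len(y) - len(p0))

    pcov = cov_x * s_sq              # covariance matrix of the parameters
    perr = np.sqrt(np.diag(pcov))    # standard deviations of the parameters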
About Hessian versus Jacobian: the documentation is poorly worded. It is the Hessian that is being estimated in both cases, as it must be, since the gradient (the first derivative of the cost function) is zero at a minimum and therefore carries no information about the uncertainties. What they mean is that they use an approximation to the Jacobian J of the residuals to construct the (Gauss-Newton) approximation to the Hessian, J^T J, and cov_x is the inverse of that matrix.
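To make that concrete, here is a small check (reusing the hypothetical residuals, x, y, popt, and cov_x from the sketch above) that cov_x agrees with the inverse of J^T J, where J is a finite-difference Jacobian of the residuals at the solution:

    # Finite-difference Jacobian of the residuals at the fitted parameters.
    eps = 1e-8
    J = np.empty((len(x), len(popt)))
    for j in range(len(popt)):
        dp = np.zeros(len(popt))
        dp[j] = eps
        J[:, j] = (residuals(popt + dp, x, y) - residuals(popt - dp, x, y)) / (2 * eps)

    # The Gauss-Newton approximation to the Hessian of the cost function is J^T J;
    # cov_x returned by leastsq should be (approximately) its inverse.
    print(np.allclose(np.linalg.inv(J.T @ J), cov_x, rtol=1e-3))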
A further note: it seems that the curve_fit result does not actually account for the absolute size of the errors, but only for the relative sizes of the sigmas provided. This means that the pcov it returns does not change even if the error bars change by a factor of a million. This is of course not right, but it seems to be standard practice, i.e. Matlab does the same thing in its Curve Fitting Toolbox. The correct procedure is described here: https://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)#Parameter_errors_and_correlation

It seems fairly straightforward to do this once the optimum has been found, at least for linear least squares.
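A minimal sketch of that procedure, assuming known absolute error bars sigma on the (hypothetical) data above: weight the residuals by 1/sigma, and then take cov_x from leastsq directly as the parameter covariance, without the reduced-chi-square rescaling:

    # Hypothetical absolute error bars on y.
    sigma = np.full_like(y, 0.5)

    def weighted_residuals(p, x, y, sigma):
        a, b = p
        return (y - (a * x + b)) / sigma

    popt_w, cov_x_w, infodict, mesg, ier = leastsq(
        weighted_residuals, p0, args=(x, y, sigma), full_output=True)

    # With absolute sigmas, cov_x is already the covariance of the parameters:
    # do NOT multiply by the residual variance here.
    perr_abs = np.sqrt(np.diag(cov_x_w))

(Newer versions of scipy.optimize.curve_fit expose this choice through an absolute_sigma argument.)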