The array that the numpy.gradient function returns depends on the number and spacing of the data points. Is this expected behaviour? For example:
import numpy as np
import matplotlib.pyplot as plt

y = lambda x: x
x1 = np.arange(0, 10, 1)
x2 = np.arange(0, 10, 0.1)
x3 = np.arange(0, 10, 0.01)

plt.plot(x1, np.gradient(y(x1)), 'r--o')
plt.plot(x2, np.gradient(y(x2)), 'b--o')
plt.plot(x3, np.gradient(y(x3)), 'g--o')
which produces the plot below.
Only the gradient of y(x1) gives the correct result (a constant 1). What is going on here? Is there a better way to compute the numerical derivative with numpy?
Cheers
In np.gradient you should pass the sample spacing as the second argument. To get the same results you should write:
plt.plot(x1, np.gradient(y(x1), 1), 'r--o')
plt.plot(x2, np.gradient(y(x2), 0.1), 'b--o')
plt.plot(x3, np.gradient(y(x3), 0.01), 'g--o')
The default sample distance is 1, which is why it works for x1.
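As a quick sanity check (a minimal sketch reusing the arrays above), each spacing-aware call recovers the analytic derivative of y(x) = x, which is 1 everywhere:

for xs, h in [(x1, 1), (x2, 0.1), (x3, 0.01)]:
    # central differences of a linear function are exact,
    # so every entry should be (floating-point) 1
    print(np.allclose(np.gradient(y(xs), h), 1.0))  # True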
If the spacing is not even you have to compute the derivative manually (at least on older NumPy; see the note after the next snippet). If you use the forward difference you can do:
d = np.diff(y(x))/np.diff(x)
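Note that np.diff returns one element fewer than its input, so d is best thought of as living at the midpoints between samples. Also, since NumPy 1.13, np.gradient accepts the coordinate array itself as the second argument and handles the uneven spacing for you; a minimal sketch (the sample values are only illustrative):

import numpy as np

x = np.array([0.0, 1.0, 3.0, 6.0])   # unevenly spaced samples
y = lambda x: x**2

d = np.diff(y(x)) / np.diff(x)       # forward difference, len(x) - 1 points
g = np.gradient(y(x), x)             # NumPy >= 1.13: pass coordinates directly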
If you are interested in computing the central difference, as np.gradient does, you could do something like this:

x = np.array([1, 2, 4, 7, 11, 16], dtype=float)
y = lambda x: x**2

# y shifted right/left, with the endpoints repeated so the first and
# last entries fall back to one-sided differences
z1 = np.hstack((y(x[0]), y(x[:-1])))
z2 = np.hstack((y(x[1:]), y(x[-1])))

# matching spacings: dx1[i] = x[i] - x[i-1], dx2[i] = x[i+1] - x[i]
dx1 = np.hstack((0, np.diff(x)))
dx2 = np.hstack((np.diff(x), 0))

# interior points: (y[i+1] - y[i-1]) / (x[i+1] - x[i-1])
d = (z2 - z1) / (dx2 + dx1)
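As a rough check of this sketch: for y = x**2 the wide stencil gives x[i+1] + x[i-1] at interior points, which equals the analytic derivative 2x only on a uniform grid, so on this uneven grid the values are close but not exact, and the endpoints are one-sided differences:

print(d)      # [ 3.  5.  9. 15. 23. 27.]
print(2 * x)  # [ 2.  4.  8. 14. 22. 32.]  (analytic derivative, for comparison)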