This seems like a very straightforward problem but I can't figure out a solution. Suppose I have a sine function y with 8000 samples:
import numpy as np
Fs = 8000        # sampling rate
f = 1            # sine frequency
npts = 8000      # number of samples
x = np.arange(npts)
y = np.sin(2 * np.pi * f * x / Fs)
I want to downsample this function to 6000 samples, so I tried the method from this answer to a similar question:
import math
#number of samples I want to downsample to
npts2 = 6000
#calculating the number of NaN values to pad to the array
n = math.ceil(float(y.size) / npts2)
pad_size = n * npts2 - len(y)
padded = np.append(y, np.zeros(int(pad_size)) * np.nan)
#downsampling the reshaped padded array with nanmean
downsampled = np.nanmean(padded.reshape((npts2, int(n))), axis=1)
This gives me an array of the correct length (6000), but the last 2000 samples (i.e. the difference between the original npts and npts2) are NaN, and the function itself only occupies the first 4000 samples.
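A quick check (a minimal sketch reusing the padded, npts2, and n values from the snippet above) confirms where the padding ends up after the reshape:

print(padded.shape)                      # (12000,)
print(np.isnan(padded).sum())            # 4000 padded NaNs
rows = padded.reshape((npts2, int(n)))   # shape (6000, 2): consecutive sample pairs
print(np.isnan(rows).all(axis=1).sum())  # 2000 all-NaN rows, so the last 2000 means are NaN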
Is there a better way I can make this sine function 6000 samples in length? Thanks!
Edit
Thanks for the replies - I realize now I was attacking this the wrong way. I decided to use the scipy.interpolate.interp1d function on the y array, and then pass it an np.linspace array generated with the desired number of points to interpolate to. This gives me the correctly scaled output.
from scipy.interpolate import interp1d

def downsample(array, npts):
    # interpolate over the original sample indices, then evaluate at npts evenly spaced points
    interpolated = interp1d(np.arange(len(array)), array, axis=0, fill_value='extrapolate')
    downsampled = interpolated(np.linspace(0, len(array), npts))
    return downsampled

downsampled_y = downsample(y, 6000)
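As a quick sanity check (a minimal sketch, assuming the Fs, f, and npts values defined at the top), the result matches a sine evaluated directly at the interpolated positions:

# downsample above evaluates at np.linspace(0, 8000, 6000), so compare against
# the analytic sine at those same positions
positions = np.linspace(0, npts, 6000)
expected = np.sin(2 * np.pi * f * positions / Fs)
print(downsampled_y.shape)                              # (6000,)
print(np.allclose(downsampled_y, expected, atol=1e-6))  # True for this slowly varying sine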