How to correctly use scikit-learn's Gaussian Process for a 2D-inputs, 1D-output regression?

Julie · Jan 10, 2017 · Viewed 10k times

Prior to posting I did a lot of searches and found this question which might be exactly my problem. However, I tried what is proposed in the answer but unfortunately this did not fix it, and I couldn't add a comment to request further explanation, as I am a new member here.

Anyway, I want to use Gaussian processes with scikit-learn in Python on a simple but real case to start (using the examples provided in scikit-learn's documentation). I have a 2D input set (8 pairs of 2 parameters) called X. I have 8 corresponding outputs, gathered in the 1D array y.

# Inputs: 8 points in the 2D parameter space -> shape (8, 2)
X = np.array([[p1, q1], [p2, q2], [p3, q3], [p4, q4],
              [p5, q5], [p6, q6], [p7, q7], [p8, q8]])

# Observations: 8 scalar outputs -> shape (8,)
y = np.array([r1, r2, r3, r4, r5, r6, r7, r8])

I defined an input test space x:

# Input space (np.linspace gives 50 points by default)
x1 = np.linspace(x1min, x1max)  # p
x2 = np.linspace(x2min, x2max)  # q
x = (np.array([x1, x2])).T      # shape (50, 2)

Then I instantiate the GP model, fit it to my training data (X,y), and make the 1D prediction y_pred on my input space x:

from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

kernel = C(1.0, (1e-3, 1e3)) * RBF([5,5], (1e-2, 1e2))
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=15)
gp.fit(X, y)
y_pred, MSE = gp.predict(x, return_std=True)  # return_std gives the standard deviation, not the MSE

And then I make a 3D plot:

import matplotlib.pyplot as pl
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D  # enables the 3D projection

fig = pl.figure()
ax = fig.add_subplot(111, projection='3d')
Xp, Yp = np.meshgrid(x1, x2)
Zp = np.reshape(y_pred, 50)

surf = ax.plot_surface(Xp, Yp, Zp, rstride=1, cstride=1, cmap=cm.jet,
                       linewidth=0, antialiased=False)
pl.show()

This is what I obtain:

[Plot: predicted surface with RBF length scales [5, 5]]

When I modify the kernel parameters I get something like this, similar to what the poster I mentioned above got:

[Plot: predicted surface with RBF length scales [10, 10]]

These plots don't even match the observations from the original training points (the lowest response is obtained for [65.1, 37] and the highest for [92.3, 54]).

I am fairly new to GPs in 2D (also started Python not long ago) so I think I'm missing something here... Any answer would be helpful and greatly appreciated, thanks!

Answer

Argantonio65 · Apr 14, 2017

I'm also fairly new to using scikit-learn's Gaussian process, but after some effort I managed to implement a 3D Gaussian process regression successfully. There are a lot of examples of 1D regression, but very little on higher input dimensions.

Perhaps you could show the values that you are using.

I found that sometimes the format in which you send the inputs can produce some issues. Try formatting input X as:

X = np.array([param1, param2]).T

and format the output as:

gp.fit(X, y.reshape(-1,1))
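
To make the shapes concrete, here is a minimal end-to-end sketch of the layout that worked for me. The parameter values and outputs below are made up purely to illustrate the array shapes, and the meshgrid at the end is just one way of building a prediction grid over the two parameters:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

# Two parameter arrays stacked as columns -> training inputs of shape (8, 2)
param1 = np.array([60.0, 65.1, 70.2, 75.0, 80.3, 85.1, 90.0, 92.3])
param2 = np.array([30.0, 37.0, 41.5, 45.0, 48.2, 50.1, 52.7, 54.0])
X = np.array([param1, param2]).T

# Scalar outputs, reshaped to a column vector -> shape (8, 1) when fitting
y = np.array([0.8, 1.1, 1.5, 2.0, 2.6, 3.0, 3.4, 3.9])

kernel = C(1.0, (1e-3, 1e3)) * RBF([5.0, 5.0], (1e-2, 1e2))
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=15)
gp.fit(X, y.reshape(-1, 1))

# Predict on a full 2D grid (not on a single line of 50 points),
# then reshape the prediction back to the grid for plotting
x1 = np.linspace(param1.min(), param1.max(), 50)
x2 = np.linspace(param2.min(), param2.max(), 50)
X1g, X2g = np.meshgrid(x1, x2)
x_grid = np.column_stack([X1g.ravel(), X2g.ravel()])   # shape (2500, 2)

y_pred, sigma = gp.predict(x_grid, return_std=True)
Zp = y_pred.reshape(X1g.shape)                          # shape (50, 50)

With X shaped (n_samples, 2) and y reshaped to (n_samples, 1), the fit and the prediction stay consistent, and reshaping y_pred back to the grid shape gives a Zp that lines up with the meshgrid arrays in plot_surface.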

Also, as I understand it, the implementation assumes a zero mean function m = 0. If the output you are trying to regress has an average value that differs significantly from 0, you should normalize it (that will probably solve your problem). Standardizing the parameter space will help as well.
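
For example, continuing with the kernel, X, y and x_grid from the sketch above (still with illustrative values), you can either let the regressor normalize the targets for you via normalize_y=True, or standardize by hand and undo the scaling after predicting; standardizing the input parameters works the same way:

from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.preprocessing import StandardScaler

# kernel, X, y and x_grid are the (illustrative) objects from the sketch above

# Option 1: let the regressor normalize the targets internally
gp_auto = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=15,
                                   normalize_y=True)
gp_auto.fit(X, y.reshape(-1, 1))

# Option 2: standardize the output by hand and undo the scaling afterwards
y_mean, y_std = y.mean(), y.std()
gp_manual = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=15)
gp_manual.fit(X, ((y - y_mean) / y_std).reshape(-1, 1))
y_pred_scaled, sigma_scaled = gp_manual.predict(x_grid, return_std=True)
y_pred = y_pred_scaled * y_std + y_mean   # back to the original units
sigma = sigma_scaled * y_std              # rescale the predictive std too

# Standardizing the parameter space works the same way
scaler = StandardScaler().fit(X)
gp_std = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=15)
gp_std.fit(scaler.transform(X), y.reshape(-1, 1))
y_pred_std, _ = gp_std.predict(scaler.transform(x_grid), return_std=True)

Either route removes the large offset between the outputs and the zero prior mean, which is often what makes the fitted surface start following the training points.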