How to apply standardization to SVMs in scikit-learn?

pemistahl · Feb 4, 2013 · Viewed 19.5k times

I'm using the current stable version 0.13 of scikit-learn. I'm applying a linear support vector classifier to some data using the class sklearn.svm.LinearSVC.

In the chapter about preprocessing in scikit-learn's documentation, I've read the following:

Many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the l1 and l2 regularizers of linear models) assume that all features are centered around zero and have variance in the same order. If a feature has a variance that is orders of magnitude larger than others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected.
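For concreteness, here is a minimal sketch (with made-up toy numbers) of what sklearn.preprocessing.StandardScaler does, bringing two features on very different scales to zero mean and unit variance:

import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on very different scales (made-up toy data)
X = np.array([[1.0, 1000.0],
              [2.0, 2000.0],
              [3.0, 3000.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

print(scaler.mean_)  # per-feature means learned by fit: [2., 2000.]
print(X_scaled)      # each column now has zero mean and unit variance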

Question 1: Is standardization useful for SVMs in general, including those with a linear kernel function, as in my case?

Question 2: As far as I understand, I have to compute the mean and standard deviation on the training data and apply the same transformation to the test data using the class sklearn.preprocessing.StandardScaler. What I don't understand, however, is whether I have to transform the training data as well, or just the test data, before feeding it to the SVM classifier.

That is, do I have to do this:

from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

scaler = StandardScaler()
scaler.fit(X_train)                # only compute mean and std here
X_test = scaler.transform(X_test)  # perform standardization by centering and scaling

clf = LinearSVC()
clf.fit(X_train, y_train)
clf.predict(X_test)

Or do I have to do this:

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # compute mean, std and transform training data as well
X_test = scaler.transform(X_test)  # same as above

clf = LinearSVC()
clf.fit(X_train, y_train)
clf.predict(X_test)

In short, do I have to use scaler.fit(X_train) or scaler.fit_transform(X_train) on the training data in order to get reasonable results with LinearSVC?

Answer

Andreas Mueller · Feb 4, 2013

The second one. In your first snippet, scaler.fit(X_train) only computes the mean and std; it never changes X_train, because the transform operation is not in-place. You have to do

X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

or

X_train = scaler.fit(X_train).transform(X_train)

You always need to apply the same preprocessing to both training and test data. And yes, standardization is always good if it reflects your beliefs about the data. In particular, for kernel SVMs it is often crucial.
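For reference, here is a minimal runnable sketch of that pattern, using synthetic data from make_classification purely for illustration (module paths are from current scikit-learn; in 0.13, train_test_split lived in sklearn.cross_validation):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic classification data, just for illustration
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale explicitly: fit the scaler on the training data only,
# then apply the same mean/std to both sets
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

clf = LinearSVC()
clf.fit(X_train_scaled, y_train)
print(clf.score(X_test_scaled, y_test))

# Equivalent Pipeline idiom: the pipeline fits the scaler on the
# training data and reuses its statistics at predict/score time
pipe = make_pipeline(StandardScaler(), LinearSVC())
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))

The Pipeline variant is worth preferring in practice: it makes it impossible to accidentally fit the scaler on the test data, and it works directly with tools like cross-validation.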