To improve my Support Vector Machine results I have to use grid search to find better parameters, together with cross validation. I'm not sure how to combine them in scikit-learn. Grid search searches for the best parameters (http://scikit-learn.org/stable/modules/grid_search.html) and cross validation avoids overfitting (http://scikit-learn.org/dev/modules/cross_validation.html)
#GRID SEARCH
from sklearn import svm, grid_search
parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}
svr = svm.SVC()
clf = grid_search.GridSearchCV(svr, parameters)
print(clf.fit(X, Y))
#CROSS VALIDATION
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, Y, test_size=0.4, random_state=0)
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
print("crossvalidation")
print(clf.score(X_test, y_test))
clf = svm.SVC(kernel='linear', C=1)
scores = cross_validation.cross_val_score(clf, X, Y, cv=3)
print(scores)
results:
GridSearchCV(cv=None,
estimator=SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
kernel=rbf, probability=False, shrinking=True, tol=0.001, verbose=False),
estimator__C=1.0, estimator__cache_size=200,
estimator__class_weight=None, estimator__coef0=0.0,
estimator__degree=3, estimator__gamma=0.0, estimator__kernel=rbf,
estimator__probability=False, estimator__shrinking=True,
estimator__tol=0.001, estimator__verbose=False, fit_params={},
iid=True, loss_func=None, n_jobs=1,
param_grid={'kernel': ('linear', 'rbf'), 'C': [1, 10]},
pre_dispatch=2*n_jobs, refit=True, score_func=None, verbose=0)
crossvalidation
0.0
[ 0.11111111 0.11111111 0. ]
You should do a development / evaluation split first, run the grid search (which performs cross validation internally) on the development part, and measure a single final score on the evaluation part at the end:
There is an example in the documentation.
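A minimal sketch of that workflow, assuming the iris dataset stands in for your X, Y (in recent scikit-learn versions the `grid_search` and `cross_validation` modules were merged into `sklearn.model_selection`, which is used here):

```python
from sklearn import datasets, svm
from sklearn.model_selection import GridSearchCV, train_test_split

# Toy data standing in for your X, Y
X, Y = datasets.load_iris(return_X_y=True)

# 1) Hold out an evaluation set that the grid search never sees
X_dev, X_eval, y_dev, y_eval = train_test_split(
    X, Y, test_size=0.4, random_state=0)

# 2) Grid search on the development set; GridSearchCV runs
#    cross validation internally (cv=5 folds) for each candidate
parameters = {'kernel': ('linear', 'rbf'), 'C': [1, 10]}
clf = GridSearchCV(svm.SVC(), parameters, cv=5)
clf.fit(X_dev, y_dev)
print(clf.best_params_)

# 3) One final score on the untouched evaluation set
print(clf.score(X_eval, y_eval))
```

After `fit`, `clf` is refit on the whole development set with the best parameters, so `clf.score(X_eval, y_eval)` is the unique final score the answer describes.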