Scoring in GridSearchCV

KMittal · Sep 27, 2018 · Viewed 9.2k times

I just started with GridSearchCV in Python, but I am confused about what scoring does here. Somewhere I have seen:

# imports needed to run this snippet (clf, param_grid, refit_score
# and skf are defined elsewhere in the original example)
from sklearn.metrics import make_scorer, precision_score, recall_score, accuracy_score
from sklearn.model_selection import GridSearchCV

scorers = {
    'precision_score': make_scorer(precision_score),
    'recall_score': make_scorer(recall_score),
    'accuracy_score': make_scorer(accuracy_score)
}

grid_search = GridSearchCV(clf, param_grid, scoring=scorers, refit=refit_score,
                           cv=skf, return_train_score=True, n_jobs=-1)

What is the intent of using these values, i.e. precision, recall, and accuracy, in scoring?

Is this used by the grid search to give us the optimized parameters based on these scoring values, i.e. does it find the best parameters for the best precision score, or something like that?

It calculates precision, recall, and accuracy for the possible parameter combinations and gives the results. Now the question is: if this is true, does it then select the best parameters based on precision, recall, or accuracy? Is the above statement true?

Answer

G. Anderson · Sep 27, 2018

You are basically correct in your assumptions. Passing this dictionary of scorers makes the grid search evaluate every parameter combination against each scoring metric, so the cross-validation results in cv_results_ let you find the best parameters for each score separately.
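As a minimal sketch (the classifier, data, and parameter grid below are placeholders chosen for illustration, not from the original post), every scorer gets its own mean_test_* column in cv_results_, while refit ties the best_* attributes to just one of them:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer, precision_score, recall_score, accuracy_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold

X, y = make_classification(random_state=0)  # placeholder data

scorers = {
    'precision_score': make_scorer(precision_score),
    'recall_score': make_scorer(recall_score),
    'accuracy_score': make_scorer(accuracy_score)
}

clf = RandomForestClassifier(random_state=0)  # placeholder estimator
param_grid = {'n_estimators': [50, 100], 'max_depth': [None, 5]}
skf = StratifiedKFold(n_splits=5)

grid_search = GridSearchCV(clf, param_grid, scoring=scorers,
                           refit='precision_score',  # best_* attributes follow precision
                           cv=skf, return_train_score=True, n_jobs=-1)
grid_search.fit(X, y)

# each metric is recorded for every parameter combination, so you can
# look up the best parameter setting per metric yourself
results = grid_search.cv_results_
for name in scorers:
    best_idx = np.argmax(results['mean_test_%s' % name])
    print(name, results['params'][best_idx])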

However, you can't then have the grid search automatically refit and return the best_estimator_ without choosing which score to use for the refit. With multiple scorers and the default refit=True, it will instead throw the following error:

ValueError: For multi-metric scoring, the parameter refit must be set to a scorer
key to refit an estimator with the best parameter setting on the whole data and
make the best_* attributes available for that metric. If this is not needed,
refit should be set to False explicitly. True was passed.
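The fix is to set refit to one of the scorer keys, or to False if you only need cv_results_. A sketch reusing the names from the question:

# refit on a specific metric: best_params_ and best_estimator_
# are then chosen by mean test precision
grid_search = GridSearchCV(clf, param_grid, scoring=scorers,
                           refit='precision_score',
                           cv=skf, return_train_score=True, n_jobs=-1)

# or skip the refit entirely; only cv_results_ is available, and
# best_estimator_ / best_params_ are not set
grid_search = GridSearchCV(clf, param_grid, scoring=scorers,
                           refit=False,
                           cv=skf, return_train_score=True, n_jobs=-1)

Either way, cv_results_ still contains the mean_test_* columns for all three metrics.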