Distinguishing overfitting vs good prediction

Zach · Sep 3, 2012 · Viewed 11.2k times

These are questions about how to measure and reduce overfitting in machine learning. I think many people new to machine learning will have the same questions, so I've tried to be clear with my examples and questions in the hope that the answers here can help others.

I have a very small sample of texts and I'm trying to predict values associated with them. I've used sklearn to compute tf-idf features and feed them into a regression model for prediction. This gives me 26 samples with 6323 features each, which is not a lot, I know:

>>> from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
>>> count_vectorizer = CountVectorizer(ngram_range=(1, 1))  # unigrams only (formerly min_n=1, max_n=1)
>>> term_freq = count_vectorizer.fit_transform(texts)
>>> transformer = TfidfTransformer()
>>> X = transformer.fit_transform(term_freq)
>>> print(X.shape)

(26, 6323)

Feeding those 26 samples of 6323 features (X), together with their associated scores (y), into a LinearRegression model gives good predictions. These are obtained with leave-one-out cross-validation, via cross_validation.LeaveOneOut(X.shape[0], indices=True); a sketch of the loop is shown below.
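
The prediction loop itself isn't shown in the question; here is a minimal sketch of how it might look, assuming the current sklearn.model_selection API (where LeaveOneOut takes no arguments) and that y is a NumPy array of the scores:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

predictions = np.zeros(len(y))
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearRegression()
    model.fit(X[train_idx], y[train_idx])               # fit on the other 25 samples
    predictions[test_idx] = model.predict(X[test_idx])  # predict the held-out one

points_off = np.abs(predictions - y)                    # the "points-off" column below
print(points_off.mean(), points_off.std())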

using ngrams (n=1):
     human  machine  points-off  %error
      8.67    8.27    0.40       1.98
      8.00    7.33    0.67       3.34
      ...     ...     ...        ...
      5.00    6.61    1.61       8.06
      9.00    7.50    1.50       7.50
mean: 7.59    7.64    1.29       6.47
std : 1.94    0.56    1.38       6.91

Pretty good! Using 300-grams (n=300) instead of unigrams (n=1) gives similar results, which is obviously not right. No 300-word sequences occur in any of the texts, so the prediction should fail, but it doesn't:

using ngrams (n=300):
      human  machine  points-off  %error
       8.67    7.55    1.12       5.60
       8.00    7.57    0.43       2.13
       ...     ...     ...        ...
mean:  7.59    7.59    1.52       7.59
std :  1.94    0.08    1.32       6.61

Question 1: How would you normally tell that the model is over-fitting?

Question 2: What is the best way of preventing over-fitting (in this situation) to be sure that the prediction results are good or not?

Question 3: If LeaveOneOut cross validation is used, how can the model possibly over-fit with good results?

Answer

Fred Foo · Sep 4, 2012

how would you normally tell that the model is over-fitting?

One useful rule of thumb is that you may be overfitting when your model's performance on its own training set is much better than on its held-out validation set or in a cross-validation setting. That's not all there is to it, though.
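
In code, that rule of thumb amounts to comparing the error on the training data itself with the cross-validated error. A sketch, assuming X and y are the tf-idf matrix and scores from the question:

from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import LeaveOneOut, cross_val_predict

model = LinearRegression()
train_mae = mean_absolute_error(y, model.fit(X, y).predict(X))   # error on the data it was fit on
cv_mae = mean_absolute_error(y, cross_val_predict(model, X, y, cv=LeaveOneOut()))

# With 26 samples and 6323 features the training error will be essentially zero
# (a perfect fit is always possible when features outnumber samples); a training
# error far below the cross-validated error is the warning sign.
print(train_mae, cv_mae)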

The blog entry I linked to describes a procedure for testing for overfitting: plot training-set and validation-set error as a function of training set size. If the curves show a stable gap at the right end of the plot, you're probably overfitting.
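
scikit-learn can produce that plot directly with its learning_curve helper; a sketch (with only 26 samples the curves will be noisy, and an ordinary 5-fold split is used here rather than leave-one-out):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve

sizes, train_scores, valid_scores = learning_curve(
    LinearRegression(), X, y,
    train_sizes=np.linspace(0.3, 1.0, 5),
    cv=5, scoring="neg_mean_absolute_error")

plt.plot(sizes, -train_scores.mean(axis=1), label="training error")
plt.plot(sizes, -valid_scores.mean(axis=1), label="validation error")
plt.xlabel("training set size")
plt.ylabel("mean absolute error")
plt.legend()
plt.show()
# A persistent gap between the two curves at the largest training sizes
# suggests overfitting.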

What is the best way of preventing over-fitting (in this situation) to be sure that the prediction results are good or not?

Use a held-out test set. Only do evaluation on this set when you're completely done with model selection (hyperparameter tuning); don't train on it, don't use it in (cross-)validation. The score you get on the test set is the model's final evaluation. This should show whether you've accidentally overfit the validation set(s).
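
One way to set that up (a sketch, not the answer's own code; with only 26 samples the split is painfully small, and Ridge regression is used purely as an example of a model with a hyperparameter to tune):

from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, LeaveOneOut, train_test_split

# carve off the test set once, before any tuning
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.25,
                                                random_state=0)

# all model selection happens on the development portion only
search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]},
                      cv=LeaveOneOut(),
                      scoring="neg_mean_absolute_error")
search.fit(X_dev, y_dev)

# the single, final evaluation on data the model has never seen
print(search.best_params_, search.score(X_test, y_test))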

[Machine learning conferences are sometimes set up like a competition, where the test set is not given to the researchers until after they've delivered their final model to the organisers. In the meanwhile, they can use the training set as they please, e.g. by testing models using cross-validation. Kaggle does something similar.]

If LeaveOneOut cross validation is used, how can the model possibly over-fit with good results?

Because you can tune the model as much as you want in this cross-validation setting, until it performs nearly perfectly in CV.

As an extreme example, suppose that you've implemented an estimator that is essentially a random number generator. You can keep trying random seeds until you hit a "model" that produces very low error in cross-validation, but that doesn't mean you've hit the right model. It means you've overfit to the cross-validation.
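
A toy version of that experiment might look like this (hypothetical code, not from the question or any linked post): a "regressor" whose weights are pure random noise controlled by a seed, with the seed treated as a hyperparameter and tuned against the leave-one-out CV score.

import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.model_selection import LeaveOneOut, cross_val_score

class RandomRegressor(BaseEstimator, RegressorMixin):
    """Learns nothing: its 'weights' are random numbers determined by `seed`."""
    def __init__(self, seed=0):
        self.seed = seed

    def fit(self, X, y):
        rng = np.random.RandomState(self.seed)
        # random weights, scaled so predictions are roughly on y's scale
        self.coef_ = rng.randn(X.shape[1]) * y.std() / np.sqrt(X.shape[1])
        self.intercept_ = y.mean()
        return self

    def predict(self, X):
        return X @ self.coef_ + self.intercept_

rng = np.random.RandomState(0)
X = rng.randn(26, 20)             # stand-in data
y = rng.uniform(5, 9, size=26)    # stand-in scores

best_seed, best_score = None, -np.inf
for seed in range(2000):          # "tuning" the seed against the CV score
    score = cross_val_score(RandomRegressor(seed), X, y, cv=LeaveOneOut(),
                            scoring="neg_mean_absolute_error").mean()
    if score > best_score:
        best_seed, best_score = seed, score

# best_score keeps improving as more seeds are tried, even though no seed ever
# learns anything from the data: the gain is pure overfitting of the CV loop.
print(best_seed, -best_score)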

See also this interesting war story.