How to use k-fold cross validation in scikit with a naive Bayes classifier and NLTK

user2284345 · May 4, 2013

I have a small corpus and I want to calculate the accuracy of a naive Bayes classifier using 10-fold cross-validation. How can I do it?

Answer

Jared · May 5, 2013

Your options are to either set this up yourself or use something like NLTK-Trainer since NLTK doesn't directly support cross-validation for machine learning algorithms.

I'd probably recommend just using another module to do this for you, but if you really want to write your own code, you could do something like the following.
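Since the question mentions scikit, one "another module" option is scikit-learn itself, whose `cross_val_score` handles the fold bookkeeping for you. A minimal sketch using the modern `sklearn.model_selection` API, with a toy corpus standing in for your data (the `docs`/`labels` lists here are made-up placeholders):

```python
# Hedged sketch: 10-fold cross-validation of a naive Bayes classifier
# with scikit-learn. Replace `docs` and `labels` with your own corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

docs = ["good movie", "great film", "bad plot", "awful acting"] * 5
labels = ["pos", "pos", "neg", "neg"] * 5

# Turn raw text into bag-of-words count vectors.
X = CountVectorizer().fit_transform(docs)

# cross_val_score splits the data into cv=10 folds, trains on 9 and
# scores on the held-out fold each round, returning one score per fold.
scores = cross_val_score(MultinomialNB(), X, labels, cv=10)
print(scores.mean())
```

Note that with a corpus this small each fold holds out only two documents, so the per-fold scores are noisy; on a real dataset the mean over folds is the number to report.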

Supposing you want 10-fold cross-validation, you would partition your training set into 10 subsets, train on 9/10 of the data, test on the remaining 1/10, and repeat for each of the 10 choices of held-out subset.

Assuming your training set is in a list named training, a simple way to accomplish this would be:

num_folds = 10
subset_size = len(training) // num_folds  # integer division; any remainder is dropped
for i in range(num_folds):
    testing_this_round = training[i*subset_size:][:subset_size]
    training_this_round = training[:i*subset_size] + training[(i+1)*subset_size:]
    # train using training_this_round
    # evaluate against testing_this_round
    # save accuracy

# find mean accuracy over all rounds
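To make the loop above runnable end to end, here is the same structure with the commented steps filled in. To keep the sketch self-contained it uses a deliberately trivial stand-in classifier (a majority-label predictor) and made-up toy data; with NLTK you would instead call `nltk.NaiveBayesClassifier.train(training_this_round)` and `nltk.classify.accuracy(classifier, testing_this_round)` at the marked spots:

```python
# Self-contained sketch of the fold loop with train/evaluate filled in.
# The train()/accuracy() helpers below are toy stand-ins, NOT NLTK's API;
# swap them for nltk.NaiveBayesClassifier.train and nltk.classify.accuracy.

def train(examples):
    # Toy "classifier": just remembers the most common training label.
    labels = [label for _, label in examples]
    return max(set(labels), key=labels.count)

def accuracy(model, examples):
    # Fraction of test examples whose label matches the predicted label.
    correct = sum(1 for _, label in examples if label == model)
    return correct / len(examples)

# Toy labelled data standing in for NLTK-style (feature_dict, label) pairs.
training = [({"w": i}, "pos" if i % 2 else "neg") for i in range(20)]

num_folds = 10
subset_size = len(training) // num_folds
accuracies = []
for i in range(num_folds):
    testing_this_round = training[i*subset_size:][:subset_size]
    training_this_round = training[:i*subset_size] + training[(i+1)*subset_size:]
    model = train(training_this_round)          # train using training_this_round
    acc = accuracy(model, testing_this_round)   # evaluate against testing_this_round
    accuracies.append(acc)                      # save accuracy

# find mean accuracy over all rounds
mean_accuracy = sum(accuracies) / num_folds
print(mean_accuracy)
```

One thing worth noting about this hand-rolled split: the folds follow the order of the list, so if your corpus is sorted by label you should shuffle it first (e.g. with `random.shuffle`) to avoid folds that contain only one class.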