How can I handle unknown values for label encoding in scikit-learn? The LabelEncoder just blows up with an exception when new labels are detected.
What I want is to encode categorical variables via a one-hot encoder. However, scikit-learn does not support strings for that, so I used a LabelEncoder on each column.
My problem is that unknown labels show up in the cross-validation step of my pipeline.
The basic OneHotEncoder would have the option to ignore such cases.
An a priori pandas.get_dummies / cat.codes
is not sufficient, as the pipeline should work with real-life, fresh incoming data which might contain unknown labels as well.
Would it be possible to use a CountVectorizer
for this purpose?
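For reference, a minimal reproduction of the failure (the label values here are just placeholders):

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(['Paris', 'London', 'Berlin'])
le.transform(['Paris', 'Madrid'])
# raises ValueError, because 'Madrid' was never seen during fit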
EDIT:
A more recent and simpler way of handling this problem with scikit-learn is to use sklearn.preprocessing.OneHotEncoder, which now accepts string categories directly and can be told to ignore unknown values:
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(train)
enc.transform(train).toarray()
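With handle_unknown='ignore', categories that were not seen during fit are simply encoded as all zeros instead of raising an error. A quick sketch with a throwaway frame (the variable names and values are illustrative):

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

fit_df = pd.DataFrame({'city': ['Paris', 'New York']})
new_df = pd.DataFrame({'city': ['Paris', 'Utila']})  # 'Utila' was never seen during fit

enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(fit_df)
enc.transform(new_df).toarray()
# array([[0., 1.],   # Paris
#        [0., 0.]])  # Utila -> all zeros instead of an exception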
Old answer:
There are several answers that mention pandas.get_dummies
as a method for this, but I feel the LabelEncoder
approach is cleaner for implementing a model.
Other similar answers mention using DictVectorizer
for this, but again, converting the entire DataFrame
to a dict is probably not a great idea.
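For completeness, the DictVectorizer route would look roughly like this (placeholder data again); it does handle unseen categories by encoding them as all zeros, but every row has to pass through a list of dicts first:

from sklearn.feature_extraction import DictVectorizer
import pandas as pd

dv = DictVectorizer(sparse=False)
fit_df = pd.DataFrame({'city': ['Paris', 'New York']})
new_df = pd.DataFrame({'city': ['Paris', 'Utila']})

X_fit = dv.fit_transform(fit_df.to_dict('records'))
X_new = dv.transform(new_df.to_dict('records'))  # the unseen 'Utila' row gets all-zero city columns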
Let's assume the following problematic columns:
import bisect

from sklearn import preprocessing
import numpy as np
import pandas as pd
train = {'city': ['Buenos Aires', 'New York', 'Istambul', 'Buenos Aires', 'Paris', 'Paris'],
         'letters': ['a', 'b', 'c', 'd', 'a', 'b']}
train = pd.DataFrame(train)

test = {'city': ['Buenos Aires', 'New York', 'Istambul', 'Buenos Aires', 'Paris', 'Utila'],
        'letters': ['a', 'b', 'c', 'a', 'b', 'b']}
test = pd.DataFrame(test)
Utila is a rarer city that isn't present in the training data but does appear in the test set, which we can think of as new data arriving at inference time.
The trick is to convert such values to "other", include "other" in the LabelEncoder's known classes, and then reuse the encoder in production:
c = 'city'

le = preprocessing.LabelEncoder()
train[c] = le.fit_transform(train[c])

# map any city the encoder has never seen to the catch-all label 'other'
test[c] = test[c].map(lambda s: 'other' if s not in le.classes_ else s)

# add 'other' to the encoder's known classes, keeping them sorted
le_classes = le.classes_.tolist()
bisect.insort_left(le_classes, 'other')
le.classes_ = le_classes

test[c] = le.transform(test[c])
test
   city letters
0     0       a
1     2       b
2     1       c
3     0       a
4     3       b
5     4       b
To apply it to new data, all we need is to save a le
object per column, which can easily be done with pickle.
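A minimal sketch of that step (the file name and the new_data frame are hypothetical):

import pickle

# persist one fitted encoder per categorical column
encoders = {'city': le}
with open('label_encoders.pkl', 'wb') as f:
    pickle.dump(encoders, f)

# later, at inference time
with open('label_encoders.pkl', 'rb') as f:
    encoders = pickle.load(f)

new_data['city'] = new_data['city'].map(
    lambda s: 'other' if s not in encoders['city'].classes_ else s)
new_data['city'] = encoders['city'].transform(new_data['city'])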
This answer is based on this question, which I felt wasn't totally clear to me, so I added this example.