I'm trying to understand how feature importance is calculated for decision trees in scikit-learn. This question has been asked before, but I am unable to reproduce the results the algorithm provides.
For example:
from io import StringIO  # Python 3; the Python 2 import was `from StringIO import StringIO`
from sklearn.tree import DecisionTreeClassifier, export_graphviz  # export_graphviz now lives in sklearn.tree

# Toy data: three binary features, binary target
X = [[1, 0, 0], [0, 0, 0], [0, 0, 1], [0, 1, 0]]
y = [1, 0, 1, 1]

clf = DecisionTreeClassifier()
clf.fit(X, y)

# Unnormalized Gini importances taken directly from the fitted tree
feat_importance = clf.tree_.compute_feature_importances(normalize=False)
print("feat importance = " + str(feat_importance))

# Dump the tree in graphviz .dot format
out = StringIO()
export_graphviz(clf, out_file=out)      # the .dot source is now in out.getvalue()
with open('test/tree.dot', 'w') as f:   # the 'test' directory must already exist
    f.write(out.getvalue())
This results in the following feature importances:
feat importance = [0.25 0.08333333 0.04166667]
and gives the following decision tree:
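(The rendered image of the tree is not reproduced here; a plain-text view of the same tree can be printed with export_text, available in newer scikit-learn versions, where the feature names below are just labels for the three columns of X:)

from sklearn.tree import export_text

# Text rendering of the fitted tree, showing which feature each node splits on
print(export_text(clf, feature_names=["X[0]", "X[1]", "X[2]"]))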
Now, this answer to a similar question suggests the importance is calculated as
Where G is the node impurity, in this case the gini impurity. This is the impurity reduction as far as I understood it. However, for feature 1 this should be:
This answer suggests the importance is weighted by the probability of reaching the node (which is approximated by the proportion of samples reaching that node). Again, for feature 1 this should be:
Both formulas provide the wrong result. How is the feature importance calculated correctly?
I think feature importance depends on the implementation, so we need to look at the scikit-learn documentation:
The feature importances. The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance
That reduction, or weighted information gain, is defined as:
The weighted impurity decrease equation is the following:
N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child.
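As a sanity check, the formula can be written out directly. The function below and its argument names are only an illustration that mirrors the variables above; they are not scikit-learn's API:

def weighted_impurity_decrease(N, N_t, N_t_L, N_t_R,
                               impurity, left_impurity, right_impurity):
    # Direct transcription of the equation quoted above
    return (N_t / N) * (impurity
                        - (N_t_R / N_t) * right_impurity
                        - (N_t_L / N_t) * left_impurity)

# e.g. a node holding all 4 samples (gini 0.375) split into children of
# 3 and 1 samples with gini 0.444 and 0.0:
print(weighted_impurity_decrease(N=4, N_t=4, N_t_L=3, N_t_R=1,
                                 impurity=0.375, left_impurity=0.444,
                                 right_impurity=0.0))  # ~0.0417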
Since each feature is used in exactly one split in your case, its importance is simply the equation above evaluated at that split.
For X[2] (the root split; its right child is pure, so that term is zero):
feature_importance = (4 / 4) * (0.375 - (0.75 * 0.444) - (0.25 * 0)) = 0.042
For X[1] (the split of the 3-sample node; again one child is pure):
feature_importance = (3 / 4) * (0.444 - (2/3 * 0.5) - (1/3 * 0)) = 0.083
For X[0] (the split of the 2-sample node; both children are pure):
feature_importance = (2 / 4) * (0.5 - 0 - 0) = 0.25
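If you want to verify this programmatically, the same bookkeeping can be reproduced from the public tree_ arrays. This is just a sketch that recomputes the numbers above, not scikit-learn's internal implementation:

import numpy as np

tree = clf.tree_
N = tree.weighted_n_node_samples[0]        # total samples at the root (4 here)
importances = np.zeros(len(X[0]))          # one slot per feature

for node in range(tree.node_count):
    left = tree.children_left[node]
    right = tree.children_right[node]
    if left == -1:                         # leaf: no split, nothing to add
        continue
    N_t = tree.weighted_n_node_samples[node]
    N_t_L = tree.weighted_n_node_samples[left]
    N_t_R = tree.weighted_n_node_samples[right]
    importances[tree.feature[node]] += (N_t / N) * (
        tree.impurity[node]
        - (N_t_R / N_t) * tree.impurity[right]
        - (N_t_L / N_t) * tree.impurity[left])

print(importances)                                             # ~[0.25, 0.0833, 0.0417] for the tree above
print(clf.tree_.compute_feature_importances(normalize=False))  # same values
print(clf.feature_importances_)                                # normalized: the values above divided by their sum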