How to handle missing values in datasets before applying a machine learning algorithm?
I have noticed that simply dropping rows with missing NaN values is not a smart thing to do. I usually fill them in with the column mean using pandas, which sort of works and improves classification accuracy, but it may not be the best thing to do.
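For reference, this is roughly what I do today (the file name is just a placeholder; I assume the data loads into a pandas DataFrame):

```python
import pandas as pd

# Hypothetical file name for the dataset described below.
df = pd.read_csv("training.csv")

# Replace every NaN in the numeric columns with that column's mean.
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())
```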
So here is the important question: what is the best way to handle missing values in a dataset?
For example, in the dataset below, many columns have values for only about 30% of the rows:
```
Int64Index: 7049 entries, 0 to 7048
Data columns (total 31 columns):
left_eye_center_x            7039 non-null float64
left_eye_center_y            7039 non-null float64
right_eye_center_x           7036 non-null float64
right_eye_center_y           7036 non-null float64
left_eye_inner_corner_x      2271 non-null float64
left_eye_inner_corner_y      2271 non-null float64
left_eye_outer_corner_x      2267 non-null float64
left_eye_outer_corner_y      2267 non-null float64
right_eye_inner_corner_x     2268 non-null float64
right_eye_inner_corner_y     2268 non-null float64
right_eye_outer_corner_x     2268 non-null float64
right_eye_outer_corner_y     2268 non-null float64
left_eyebrow_inner_end_x     2270 non-null float64
left_eyebrow_inner_end_y     2270 non-null float64
left_eyebrow_outer_end_x     2225 non-null float64
left_eyebrow_outer_end_y     2225 non-null float64
right_eyebrow_inner_end_x    2270 non-null float64
right_eyebrow_inner_end_y    2270 non-null float64
right_eyebrow_outer_end_x    2236 non-null float64
right_eyebrow_outer_end_y    2236 non-null float64
nose_tip_x                   7049 non-null float64
nose_tip_y                   7049 non-null float64
mouth_left_corner_x          2269 non-null float64
mouth_left_corner_y          2269 non-null float64
mouth_right_corner_x         2270 non-null float64
mouth_right_corner_y         2270 non-null float64
mouth_center_top_lip_x       2275 non-null float64
mouth_center_top_lip_y       2275 non-null float64
mouth_center_bottom_lip_x    7016 non-null float64
mouth_center_bottom_lip_y    7016 non-null float64
Image                        7049 non-null object
```
What is the best way to handle missing values in a data set?
There is NO single best way; each solution/algorithm has its own pros and cons. You can even mix several of them together to create your own strategy, tuning the related parameters until you find the one that best satisfies your data (there is a lot of research and many papers on this topic).
For example, mean imputation is quick and simple, but it underestimates the variance and distorts the distribution shape, because every NaN is replaced by the same mean value. KNN imputation, on the other hand, might not be ideal for a large dataset in terms of time complexity, since it iterates over all the data points and performs a calculation for each NaN value; it also assumes that the attribute with missing values is correlated with the other attributes.
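To see the variance shrinkage concretely, here is a quick sketch on made-up data (synthetic numbers, purely for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
s = pd.Series(rng.normal(loc=100.0, scale=15.0, size=1000))
s[s.sample(frac=0.4, random_state=0).index] = np.nan  # knock out 40% of the values

print(s.var())                   # variance of the observed values only
print(s.fillna(s.mean()).var())  # noticeably smaller after mean imputation
```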
How to handle missing values in datasets before applying a machine learning algorithm?
In addition to the mean imputation you mention, you could also take a look at K-Nearest Neighbors imputation and regression imputation, and refer to the Imputer class in scikit-learn (renamed SimpleImputer in recent versions) to check the existing APIs.
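A minimal sketch of that API, using the current SimpleImputer name (the older Imputer class had a very similar interface):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# strategy="median" or strategy="most_frequent" are drop-in alternatives
imputer = SimpleImputer(strategy="mean")
print(imputer.fit_transform(X))  # NaNs replaced by the column means
```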
KNN Imputation
Replace each NaN with the mean of the k nearest neighbors of that point.
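Recent scikit-learn versions ship a KNNImputer that works exactly this way; a small sketch on a toy array:

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, 4.0, 3.0],
              [np.nan, 6.0, 5.0],
              [8.0, 8.0, 7.0]])

# Each NaN becomes the mean of that feature over the 2 nearest rows;
# distances are computed on the coordinates both rows have observed.
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
```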
Regression Imputation
A regression model is estimated to predict observed values of a variable based on other variables, and that model is then used to impute values in cases where that variable is missing.
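One scikit-learn realization of this idea is IterativeImputer, which fits a regressor per feature using the other features as predictors (it is still flagged experimental, hence the extra import). A sketch on toy data:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0],
              [2.0, 4.1],
              [3.0, np.nan],
              [4.0, 8.0]])

# By default each missing entry is predicted with a BayesianRidge
# regression on the observed columns, iterated until the fills stabilize.
imputer = IterativeImputer(random_state=0)
print(imputer.fit_transform(X))
```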
Here is a link to scikit-learn's 'Imputation of missing values' documentation section. I have also heard of the Orange library for imputation, but haven't had a chance to use it yet.