Python: LightGBM cross validation. How to use lightgbm.cv for regression?

Marius · Apr 11, 2018

I want to do cross validation for a LightGBM model with lgb.Dataset and use early_stopping_rounds. The following approach works without a problem with XGBoost's xgboost.cv. I prefer not to use scikit-learn's GridSearchCV, because it doesn't support early stopping or lgb.Dataset.

import lightgbm as lgb
from sklearn.metrics import mean_absolute_error

# dftrain is a pandas DataFrame of features, ytrain the continuous target;
# list(dftrain) yields the column names to use as feature names
dftrainLGB = lgb.Dataset(data=dftrain, label=ytrain, feature_name=list(dftrain))

params = {'objective': 'regression'}

cv_results = lgb.cv(
        params,
        dftrainLGB,
        num_boost_round=100,
        nfold=3,
        metrics='mae',
        early_stopping_rounds=10
        )

The task is regression, but the code above throws an error: Supported target types are: ('binary', 'multiclass'). Got 'continuous' instead.

Does LightGBM support regression, or did I supply wrong parameters?

Answer

Vivek Kumar · Apr 11, 2018

By default, the stratified parameter of lightgbm.cv is True. According to the documentation:

stratified (bool, optional (default=True)) – Whether to perform stratified sampling.

But stratified sampling only makes sense for classification targets, so for regression you need to set it to False.

cv_results = lgb.cv(
        params,
        dftrainLGB,
        num_boost_round=100,
        nfold=3,
        metrics='mae',
        early_stopping_rounds=10,

        # This is what I added
        stratified=False
        )

Now it's working.
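
lgb.cv returns a dict of per-iteration mean and standard deviation scores, and when early stopping triggers the lists are truncated at the best iteration, so their length gives the number of boosting rounds to use afterwards. A minimal sketch of reading the results (the exact metric key, e.g. 'l1-mean' or 'mae-mean', varies between LightGBM versions, so it is looked up here rather than hard-coded):

# pick the '<metric>-mean' key regardless of how this LightGBM version names it
mean_key = next(k for k in cv_results if k.endswith('-mean'))
best_rounds = len(cv_results[mean_key])   # rounds kept after early stopping
best_mae = cv_results[mean_key][-1]       # cross-validated MAE at that iteration
print('best number of rounds:', best_rounds)
print('CV MAE:', best_mae)

best_rounds can then be passed as num_boost_round when training the final model with lgb.train on the full dataset.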