When I run LGBM with early stopping, it reports the score corresponding to its best iteration.
When I try to reproduce those scores myself, I get different numbers.
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold

data = load_breast_cancer()
X = pd.DataFrame(data.data)
y = pd.Series(data.target)
lgb_params = {'boosting_type': 'dart', 'random_state': 42}
folds = KFold(5)
for train_idx, val_idx in folds.split(X):
    X_train, X_valid = X.iloc[train_idx], X.iloc[val_idx]
    y_train, y_valid = y.iloc[train_idx], y.iloc[val_idx]
    model = lgb.LGBMRegressor(**lgb_params, n_estimators=10000, n_jobs=-1)
    model.fit(X_train, y_train,
              eval_set=[(X_valid, y_valid)],
              eval_metric='mae', verbose=-1, early_stopping_rounds=200)
    y_pred_valid = model.predict(X_valid)
    print(mean_absolute_error(y_valid, y_pred_valid))
I expected
valid_0's l1: 0.123608
to match my own mean_absolute_error computation, but it does not. In fact, this is the top of my output:
Training until validation scores don't improve for 200 rounds.
Early stopping, best iteration is:
[631] valid_0's l2: 0.0515033 valid_0's l1: 0.123608
0.16287265537021847
I am using lightgbm version 2.2.1.
If you update your LGBM version, you will get
"UserWarning: Early stopping is not available in dart mode"
See this issue for details. What you can do instead is retrain the model with the best number of boosting rounds.
results = model.evals_result_['valid_0']['l1']
best_perf = min(results)
num_boost = results.index(best_perf)  # 0-based index of the best iteration
print('with boost', num_boost, 'perf', best_perf)
# Retrain with exactly that many trees (index + 1, since iterations are 0-indexed)
model = lgb.LGBMRegressor(**lgb_params, n_estimators=num_boost + 1, n_jobs=-1)
model.fit(X_train, y_train, verbose=-1)