
Configure HistGradientBoostingRegressor "n_iter_no_change" Parameter

The n_iter_no_change parameter in scikit-learn’s HistGradientBoostingRegressor controls early stopping during training.

Early stopping is a technique to prevent overfitting by halting training when the model's performance on a held-out validation set stops improving. The n_iter_no_change parameter specifies the number of consecutive iterations with no improvement after which training stops. It only takes effect when early stopping is active, which means setting early_stopping=True or relying on the default early_stopping='auto' on datasets with more than 10,000 samples.

This parameter helps balance model performance against training time. A smaller value leads to earlier stopping, which can reduce overfitting and training time, while a larger value gives the model more iterations in which to improve.

The default value for n_iter_no_change is 10. In practice, values between 5 and 50 are commonly used, depending on the dataset’s size and complexity.
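
Before the full example, here is a minimal sketch of how n_iter_no_change fits alongside the other early-stopping parameters it interacts with; early_stopping is set to True so that early stopping is active regardless of dataset size, and the other values shown are the scikit-learn defaults:

from sklearn.ensemble import HistGradientBoostingRegressor

# Minimal sketch: n_iter_no_change alongside the other early-stopping settings
model = HistGradientBoostingRegressor(
    early_stopping=True,      # activate early stopping regardless of dataset size
    n_iter_no_change=10,      # stop after 10 iterations with no improvement (default)
    validation_fraction=0.1,  # share of training data held out for validation (default)
    scoring="loss",           # monitor the loss on the held-out data (default)
    tol=1e-7,                 # minimum improvement that counts as progress (default)
)

The complete example below trains models with several n_iter_no_change values on a synthetic dataset and compares their test error: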

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.metrics import mean_squared_error

# Generate synthetic dataset
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train with different n_iter_no_change values
n_iter_no_change_values = [5, 10, 20, 50]
mse_scores = []
learning_curves = []

for n in n_iter_no_change_values:
    model = HistGradientBoostingRegressor(n_iter_no_change=n, random_state=42, max_iter=1000)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    mse = mean_squared_error(y_test, y_pred)
    mse_scores.append(mse)
    learning_curves.append(model.train_score_)  # per-iteration scores on the training data
    print(f"n_iter_no_change={n}, MSE: {mse:.3f}")

Running the example gives an output like:

n_iter_no_change=5, MSE: 2953.254
n_iter_no_change=10, MSE: 2953.254
n_iter_no_change=20, MSE: 2953.254
n_iter_no_change=50, MSE: 2953.254
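
The MSE is identical for every setting because this dataset has fewer than 10,000 samples: with the default early_stopping='auto', early stopping is never activated, n_iter_no_change is ignored, and each model simply runs for the full max_iter=1000 iterations. Continuing from the example above, the sketch below sets early_stopping=True so the parameter actually takes effect; the n_iter_ attribute then shows how many iterations each model performed before stopping:

# Continuing from the example above: force early stopping on so that
# n_iter_no_change influences training on this small dataset
for n in n_iter_no_change_values:
    model = HistGradientBoostingRegressor(early_stopping=True, n_iter_no_change=n,
                                          max_iter=1000, random_state=42)
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    # n_iter_ reports the number of boosting iterations actually performed
    print(f"n_iter_no_change={n}, iterations used: {model.n_iter_}, MSE: {mse:.3f}")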

The key steps in this example are:

  1. Generate a synthetic regression dataset
  2. Split the data into train and test sets
  3. Train HistGradientBoostingRegressor models with different n_iter_no_change values
  4. Evaluate the mean squared error of each model on the test set

Some tips and heuristics for setting n_iter_no_change:

  1. Start with the default of 10 and adjust based on the results
  2. Use smaller values (around 5) when training time matters or when the model starts to overfit after only a few iterations
  3. Use larger values (20 to 50) when the validation score improves slowly or noisily, so that training is not cut off prematurely
  4. Tune it together with tol, which sets how large an improvement must be to count as progress (see the sketch after these lists)

Issues to consider:

  1. On datasets with fewer than 10,000 samples, set early_stopping=True explicitly, otherwise n_iter_no_change is silently ignored (as shown above)
  2. Stopping too early can halt training before the model has converged, hurting accuracy
  3. Early stopping holds out part of the training data as a validation set (validation_fraction, 10% by default), slightly reducing the data used to fit the model
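
As a rough illustration of the last tip, tol and n_iter_no_change work together: a score change must exceed tol to count as an improvement, so a larger tol makes early stopping trigger sooner. A minimal sketch with an unusually large tol (the default is 1e-7):

# Minimal sketch: a larger tol makes improvements harder to count, so training stops earlier
model = HistGradientBoostingRegressor(
    early_stopping=True,
    n_iter_no_change=10,
    tol=1e-2,          # improvements smaller than 0.01 do not count as progress
    max_iter=1000,
    random_state=42,
)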



See Also