
Configure AdaBoostClassifier "learning_rate" Parameter

The learning_rate parameter in scikit-learn’s AdaBoostClassifier controls the contribution of each classifier in the ensemble.

AdaBoost (Adaptive Boosting) is an ensemble learning method that fits weak learners sequentially, giving more weight to the samples misclassified at each iteration. The learning_rate parameter scales the weight assigned to each weak learner, shrinking its contribution to the final prediction.
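
To make this concrete, here is a simplified, binary-only sketch of one discrete (SAMME-style) boosting round, showing where learning_rate enters the update. It is an illustration of the idea, not scikit-learn's internal code, and the data and variable names are made up for the sketch.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative single boosting round (binary, discrete SAMME-style update).
# Not scikit-learn's internal implementation.
rng = np.random.RandomState(42)
X = rng.randn(200, 5)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

learning_rate = 0.5
sample_weight = np.full(len(y), 1 / len(y))

# Fit one weak learner (a decision stump) on the weighted samples
stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=sample_weight)
incorrect = stump.predict(X) != y

# Weighted error of this weak learner
err = np.average(incorrect, weights=sample_weight)

# learning_rate scales the weight (contribution) of this estimator
estimator_weight = learning_rate * np.log((1 - err) / err)

# Misclassified samples get more weight for the next round
sample_weight *= np.exp(estimator_weight * incorrect)
sample_weight /= sample_weight.sum()

print(f"error={err:.3f}, estimator weight={estimator_weight:.3f}")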

A smaller learning rate shrinks each classifier's contribution, so more estimators are typically needed to reach the same level of performance. This pairing often generalizes better, but it also increases computational cost and training time.
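
As a quick illustration of this trade-off, the sketch below compares the default setting against a smaller learning rate paired with more estimators. The values are illustrative, not tuned recommendations.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Illustrative comparison: a smaller learning_rate usually needs more estimators
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

fast = AdaBoostClassifier(learning_rate=1.0, n_estimators=50, random_state=42)
slow = AdaBoostClassifier(learning_rate=0.1, n_estimators=500, random_state=42)

print("lr=1.0, 50 estimators :", cross_val_score(fast, X, y, cv=5).mean().round(3))
print("lr=0.1, 500 estimators:", cross_val_score(slow, X, y, cv=5).mean().round(3))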

The default value for learning_rate is 1.0.

In practice, values between 0.01 and 1.0 are commonly used; smaller values often lead to better performance when paired with a larger n_estimators, at the cost of increased training time.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score

# Generate synthetic dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=0, random_state=42)

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train with different learning_rate values
learning_rates = [0.01, 0.1, 0.5, 1.0]
accuracies = []

for lr in learning_rates:
    # n_estimators is left at its default of 50 weak learners
    ada = AdaBoostClassifier(learning_rate=lr, random_state=42)
    ada.fit(X_train, y_train)
    y_pred = ada.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    accuracies.append(accuracy)
    print(f"learning_rate={lr}, Accuracy: {accuracy:.3f}")

Running the example gives an output like:

learning_rate=0.01, Accuracy: 0.685
learning_rate=0.1, Accuracy: 0.755
learning_rate=0.5, Accuracy: 0.760
learning_rate=1.0, Accuracy: 0.765

The key steps in this example are:

  1. Generate a synthetic binary classification dataset with informative features
  2. Split the data into train and test sets
  3. Train AdaBoostClassifier models with different learning_rate values
  4. Evaluate the accuracy of each model on the test set
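
To visualize the trend, you can plot the accuracies collected in the loop against the learning rates. The sketch below continues from the learning_rates and accuracies lists defined in the example above and assumes matplotlib is installed.

import matplotlib.pyplot as plt

# Plot test accuracy against learning_rate (log scale on the x-axis)
plt.plot(learning_rates, accuracies, marker="o")
plt.xscale("log")
plt.xlabel("learning_rate")
plt.ylabel("Test accuracy")
plt.title("AdaBoostClassifier accuracy vs learning_rate")
plt.show()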

Some tips and heuristics for setting learning_rate:

  - Start with the default of 1.0 and only lower it once you have a working baseline.
  - Smaller learning_rate values generally need a larger n_estimators to reach the same accuracy, so treat the two parameters as a pair.
  - Tune learning_rate and n_estimators jointly with cross-validation rather than fixing one and tuning the other (see the sketch after this list).
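
A minimal sketch of tuning both parameters together with GridSearchCV; the grid values are illustrative starting points, not recommendations.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV

# Search over learning_rate and n_estimators jointly with 5-fold cross-validation
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=0, random_state=42)

param_grid = {
    "learning_rate": [0.01, 0.1, 0.5, 1.0],
    "n_estimators": [50, 200, 500],
}
grid = GridSearchCV(AdaBoostClassifier(random_state=42), param_grid, cv=5, n_jobs=-1)
grid.fit(X, y)

print("Best parameters:", grid.best_params_)
print("Best CV accuracy:", round(grid.best_score_, 3))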

Issues to consider:

  - A very small learning_rate combined with too few estimators will underfit; check how accuracy evolves as estimators are added (see the sketch after this list).
  - Larger learning_rate values let each weak learner pull the ensemble harder, which can overfit noisy datasets.
  - Training time scales with n_estimators, so the larger ensembles that small learning rates require can be expensive to fit.
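
One way to check whether a small learning_rate has enough estimators is staged_predict, which yields predictions after each boosting round. A short sketch, reusing the synthetic dataset from the example above:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score

# Track test accuracy as estimators are added for a small learning_rate
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

ada = AdaBoostClassifier(learning_rate=0.1, n_estimators=300, random_state=42)
ada.fit(X_train, y_train)

for i, y_pred in enumerate(ada.staged_predict(X_test), start=1):
    if i % 100 == 0:
        print(f"{i} estimators: accuracy={accuracy_score(y_test, y_pred):.3f}")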


