
Configure GradientBoostingClassifier "ccp_alpha" Parameter

The ccp_alpha parameter in scikit-learn’s GradientBoostingClassifier controls the complexity of the individual decision trees by post-pruning them, which helps prevent overfitting and improve generalization.

Gradient Boosting is an ensemble learning method that builds multiple decision trees sequentially, with each tree correcting the errors of the previous ones. This method improves model performance by focusing on hard-to-predict cases.
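This sequential error-correction can be observed directly with staged_predict, which yields the ensemble’s predictions after each boosting stage. A minimal sketch on a synthetic dataset (the dataset and sizes here are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative synthetic dataset
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

gbc = GradientBoostingClassifier(n_estimators=100, random_state=42)
gbc.fit(X_train, y_train)

# staged_predict yields test-set predictions after each added tree,
# showing how accuracy typically improves as earlier errors are corrected
for i, y_pred in enumerate(gbc.staged_predict(X_test), start=1):
    if i % 25 == 0:
        print(f"After {i} trees: accuracy={accuracy_score(y_test, y_pred):.3f}")
```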

The ccp_alpha parameter is a regularization parameter used for minimal cost-complexity post-pruning: after each tree is grown, subtrees whose cost complexity falls below ccp_alpha are pruned away, so larger values yield smaller, simpler trees.

The default value for ccp_alpha is 0.0, meaning no pruning is performed. Commonly used values are small positive numbers like 0.01, 0.05, or 0.1, depending on the dataset’s complexity; because the trees in gradient boosting are already shallow (max_depth=3 by default), even these small values can prune aggressively.
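ccp_alpha uses the same minimal cost-complexity pruning mechanism as DecisionTreeClassifier. One way to get a feel for candidate values on your data is to inspect the pruning path of a single tree; a sketch (the alphas found this way are only a rough guide for the boosted ensemble):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Illustrative synthetic dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# cost_complexity_pruning_path returns the effective alphas at which
# subtrees of the fitted tree would be pruned away
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
path = tree.cost_complexity_pruning_path(X, y)
print(path.ccp_alphas)  # candidate values, from no pruning up to the root
```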

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# Generate synthetic dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_redundant=0, random_state=42)

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train with different ccp_alpha values
ccp_alpha_values = [0.0, 0.01, 0.05, 0.1]
accuracies = []

for alpha in ccp_alpha_values:
    gbc = GradientBoostingClassifier(ccp_alpha=alpha, random_state=42)
    gbc.fit(X_train, y_train)
    y_pred = gbc.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    accuracies.append(accuracy)
    print(f"ccp_alpha={alpha}, Accuracy: {accuracy:.3f}")

Running the example gives an output like:

ccp_alpha=0.0, Accuracy: 0.945
ccp_alpha=0.01, Accuracy: 0.790
ccp_alpha=0.05, Accuracy: 0.505
ccp_alpha=0.1, Accuracy: 0.505

The key steps in this example are:

  1. Generate a synthetic classification dataset with informative and noise features.
  2. Split the data into train and test sets.
  3. Train GradientBoostingClassifier models with different ccp_alpha values.
  4. Evaluate the accuracy of each model on the test set.
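In practice, rather than looping manually, ccp_alpha can be tuned with cross-validation; a minimal sketch using GridSearchCV (the grid of values here is an assumption, not a recommendation):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_redundant=0, random_state=42)

# Search a small grid of ccp_alpha values with 3-fold cross-validation
param_grid = {"ccp_alpha": [0.0, 0.001, 0.01, 0.05]}
search = GridSearchCV(GradientBoostingClassifier(random_state=42),
                      param_grid, cv=3, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
```

Cross-validated selection avoids tuning ccp_alpha to the quirks of a single held-out split.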

Some tips and heuristics for setting ccp_alpha:

  - Start with the default of 0.0 and increase it only if the model overfits.
  - Because the individual trees are shallow (max_depth=3 by default), small values such as 0.001 or 0.01 are often enough; larger values can prune trees back to almost nothing.
  - Tune ccp_alpha with cross-validation rather than a single train/test split, and consider tuning it jointly with related parameters like max_depth, min_samples_leaf, and learning_rate.

Issues to consider:

  - As the output above shows, aggressive pruning can collapse accuracy to near chance (about 0.5 on this balanced binary problem), because heavily pruned trees carry too little signal for boosting to correct.
  - Pruning adds some training-time cost, since each tree’s pruning path must be computed after it is grown.

