Gradient Boosting is an ensemble method that combines the predictions of several base estimators (typically decision trees) to improve robustness and accuracy. It works by building trees sequentially, each one correcting the errors of its predecessor.
The key hyperparameters of GradientBoostingClassifier include n_estimators (the number of boosting stages to run), learning_rate (which shrinks the contribution of each tree by this factor), and max_depth (the maximum depth of the individual regression estimators). The algorithm is appropriate for both binary and multi-class classification problems.
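To see how these hyperparameters interact, the following sketch fits the model at two different learning rates on an illustrative synthetic dataset (the dataset parameters here are chosen for demonstration only). A smaller learning_rate generally needs more boosting stages to reach the same accuracy:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier

# illustrative dataset; parameters are for this sketch only
X, y = make_classification(n_samples=500, n_features=5, n_classes=2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# compare two learning rates with the same number of trees
for lr in (1.0, 0.1):
    model = GradientBoostingClassifier(n_estimators=100, learning_rate=lr, max_depth=3)
    model.fit(X_train, y_train)
    print('learning_rate=%.1f -> test accuracy: %.3f' % (lr, model.score(X_test, y_test)))
```

In practice, learning_rate and n_estimators are tuned together, since lowering one typically requires raising the other.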
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
# generate binary classification dataset
X, y = make_classification(n_samples=100, n_features=5, n_classes=2, random_state=1)
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
# create model
model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
# fit model
model.fit(X_train, y_train)
# evaluate model
yhat = model.predict(X_test)
acc = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % acc)
# make a prediction
row = [[-1.10325445, -0.49821356, -0.05962247, -0.89224592, -0.70158632]]
yhat = model.predict(row)
print('Predicted: %d' % yhat[0])
Running the example gives an output like:
Accuracy: 0.900
Predicted: 0
The steps are as follows:

First, a synthetic binary classification dataset is generated using the make_classification() function. This creates a dataset with a specified number of samples (n_samples), features (n_features), classes (n_classes), and a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().

Next, a GradientBoostingClassifier model is instantiated with n_estimators=100, learning_rate=0.1, and max_depth=3. The model is then fit on the training data using the fit() method.

The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the accuracy score metric.

Finally, a single prediction can be made by passing a new data sample to the predict() method.
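Because boosting is sequential, the fitted model can also be inspected stage by stage. As a sketch (reusing the same dataset setup as the example above), staged_predict() yields the test-set predictions after each boosting stage, which helps gauge whether n_estimators is large enough:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=100, n_features=5, n_classes=2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
model.fit(X_train, y_train)

# test accuracy after each of the 100 boosting stages
stage_scores = [accuracy_score(y_test, yhat) for yhat in model.staged_predict(X_test)]
print('best stage: %d, accuracy: %.3f'
      % (stage_scores.index(max(stage_scores)) + 1, max(stage_scores)))
```

If accuracy plateaus well before the final stage, n_estimators can be reduced (or early stopping enabled via the n_iter_no_change parameter) without hurting performance.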
This example demonstrates how to quickly set up and use a GradientBoostingClassifier model for binary classification tasks, showcasing the flexibility of this algorithm in scikit-learn. Once fit on the training data, the model can make predictions on new data, enabling its use in real-world classification problems.
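In real-world use, class probabilities are often more useful than hard labels. As a minimal sketch (reusing the same dataset and model settings as above), predict_proba() returns the per-class probability for a new sample:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=100, n_features=5, n_classes=2, random_state=1)
model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
model.fit(X, y)

# probability of each class for a new sample
row = [[-1.10325445, -0.49821356, -0.05962247, -0.89224592, -0.70158632]]
probs = model.predict_proba(row)
print('P(class 0)=%.3f, P(class 1)=%.3f' % (probs[0][0], probs[0][1]))
```

The two probabilities sum to 1, and predict() simply returns the class with the higher probability; thresholding the probabilities differently can be useful when misclassification costs are asymmetric.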