
Scikit-Learn HistGradientBoostingClassifier Model

HistGradientBoostingClassifier is scikit-learn's histogram-based implementation of gradient boosting for classification. It handles large datasets efficiently by binning continuous features into discrete intervals before growing each tree, which greatly reduces the cost of finding split points.

The key hyperparameters of HistGradientBoostingClassifier include max_iter (the number of boosting iterations, i.e. the number of trees), learning_rate (the shrinkage factor applied to each tree's contribution), and max_leaf_nodes (the maximum number of leaf nodes in each tree).

The algorithm is appropriate for both binary and multi-class classification problems.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import accuracy_score

# generate binary classification dataset
X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=1)

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# create model
model = HistGradientBoostingClassifier(max_iter=100, learning_rate=0.1, max_leaf_nodes=31)

# fit model
model.fit(X_train, y_train)

# evaluate model
yhat = model.predict(X_test)
acc = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % acc)

# make a prediction
row = [X_test[0]]
yhat = model.predict(row)
print('Predicted: %d' % yhat[0])

Running the example gives an output like:

Accuracy: 0.870
Predicted: 0

The steps are as follows:

  1. First, a synthetic binary classification dataset is generated using the make_classification() function. This creates a dataset with a specified number of samples (n_samples), features (n_features), classes (n_classes), and a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().

  2. Next, a HistGradientBoostingClassifier model is instantiated with 100 boosting iterations (max_iter), a learning rate of 0.1 (learning_rate), and a maximum of 31 leaf nodes per tree (max_leaf_nodes). The model is then fit on the training data using the fit() method.

  3. The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the accuracy score metric.

  4. A single prediction can be made by passing a new data sample to the predict() method.

This example demonstrates how to set up and use a HistGradientBoostingClassifier for binary classification tasks, showcasing the efficiency and effectiveness of this algorithm in scikit-learn.

The model can handle large datasets efficiently and can be used to make accurate predictions on new data, making it suitable for real-world classification problems.



See Also