
Scikit-Learn HistGradientBoostingRegressor Model

HistGradientBoostingRegressor is a powerful gradient boosting algorithm optimized for large datasets. It builds an ensemble of decision trees sequentially, each tree correcting the errors of those before it to minimize a loss function, and it gains much of its speed by binning continuous features into histograms before finding splits.

The key hyperparameters of HistGradientBoostingRegressor include the learning_rate (controls the contribution of each tree), max_iter (number of boosting iterations), and max_leaf_nodes (maximum number of leaves per tree).

This algorithm is appropriate for regression tasks where predictive accuracy is paramount and computational efficiency is desired.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.metrics import mean_squared_error

# generate regression dataset
X, y = make_regression(n_samples=100, n_features=10, noise=0.1, random_state=1)

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# create model
model = HistGradientBoostingRegressor()

# fit model
model.fit(X_train, y_train)

# evaluate model
yhat = model.predict(X_test)
mse = mean_squared_error(y_test, yhat)
print('Mean Squared Error: %.3f' % mse)

# make a prediction
row = [[0.5, -1.2, 0.3, 1.5, -0.7, 0.6, -0.8, 1.0, -1.5, 0.2]]
yhat = model.predict(row)
print('Predicted: %.3f' % yhat[0])

Running the example gives an output like:

Mean Squared Error: 11073.718
Predicted: 147.028

The steps are as follows:

  1. First, a synthetic regression dataset is generated using the make_regression() function. This creates a dataset with a specified number of samples (n_samples), features (n_features), and added noise (noise). The dataset is split into training and test sets using train_test_split().

  2. Next, a HistGradientBoostingRegressor model is instantiated with default hyperparameters. The model is then fit on the training data using the fit() method.

  3. The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the mean squared error metric.

  4. A single prediction can be made by passing a new data sample to the predict() method.

This example demonstrates how quickly a HistGradientBoostingRegressor can be set up and applied to a regression task using scikit-learn's standard estimator API.

Because features are binned into histograms, training scales well to large datasets; once fit, the model can make accurate predictions on new data, making it practical for real-world regression problems.



See Also