
Scikit-Learn AdaBoostRegressor Model

AdaBoost, short for Adaptive Boosting, is an ensemble learning method that boosts the performance of weak learners. AdaBoostRegressor builds a strong regression model by fitting a sequence of weak learners, with each new learner focusing more on the training examples that earlier learners predicted poorly.
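
As a rough sketch of what this means in practice, the default weak learner behind AdaBoostRegressor is a shallow decision tree, and it can be supplied explicitly via the estimator argument (named base_estimator in scikit-learn versions before 1.2); the values below are illustrative only:

from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

# make the default weak learner explicit: a shallow decision tree
weak_learner = DecisionTreeRegressor(max_depth=3)
model = AdaBoostRegressor(estimator=weak_learner, n_estimators=50, random_state=1)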

The key hyperparameters of AdaBoostRegressor include n_estimators (the number of boosting rounds) and learning_rate (the weight that scales each weak learner's contribution).
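
For context, one minimal way to explore these two hyperparameters is a small grid search; the value ranges below are arbitrary and should be adapted to the problem at hand:

from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import GridSearchCV

# illustrative grid over the two key hyperparameters
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)
param_grid = {'n_estimators': [10, 50, 100], 'learning_rate': [0.1, 0.5, 1.0]}
search = GridSearchCV(AdaBoostRegressor(random_state=1), param_grid,
                      scoring='neg_mean_squared_error', cv=3)
search.fit(X, y)
print(search.best_params_)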

This algorithm is appropriate for regression problems where the goal is to predict a continuous target variable.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostRegressor
from sklearn.metrics import mean_squared_error

# generate regression dataset
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# create model
model = AdaBoostRegressor(n_estimators=50, learning_rate=1.0, random_state=1)

# fit model
model.fit(X_train, y_train)

# evaluate model
yhat = model.predict(X_test)
mse = mean_squared_error(y_test, yhat)
print('MSE: %.3f' % mse)

# make a prediction
row = [[0.5, -0.2, 0.3, -1.5, 1.2]]
yhat = model.predict(row)
print('Predicted: %.3f' % yhat[0])

Running the example gives an output like:

MSE: 1326.777
Predicted: 0.965

The steps are as follows:

  1. First, a synthetic regression dataset is generated using the make_regression() function. This creates a dataset with a specified number of samples (n_samples), features (n_features), a controlled amount of noise (noise), and a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().

  2. Next, an AdaBoostRegressor model is instantiated with specific hyperparameters such as n_estimators, learning_rate, and a random seed for reproducibility. The model is then fit on the training data using the fit() method.

  3. The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the mean squared error (MSE) metric; other regression metrics can be swapped in, as shown in the sketch after this list.

  4. A single prediction can be made by passing a new data sample to the predict() method.
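
As referenced in step 3, MSE is not the only option. A short sketch, assuming model, X_test, and y_test from the listing above are still in scope, swaps in mean absolute error and R^2:

from sklearn.metrics import mean_absolute_error, r2_score

# evaluate the same predictions with two alternative regression metrics
yhat = model.predict(X_test)
print('MAE: %.3f' % mean_absolute_error(y_test, yhat))
print('R^2: %.3f' % r2_score(y_test, yhat))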

This example demonstrates how to set up and use an AdaBoostRegressor model for regression tasks, showcasing the ability of this algorithm to boost the performance of weak learners in scikit-learn.
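
One rough way to check that boosting claim is to cross-validate a single shallow tree (the default weak learner) against the boosted ensemble on the same synthetic data; the exact numbers will vary, but the ensemble typically achieves a lower MSE:

from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# compare a single shallow tree with the boosted ensemble using 5-fold CV
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)
models = [('single tree', DecisionTreeRegressor(max_depth=3, random_state=1)),
          ('AdaBoost', AdaBoostRegressor(n_estimators=50, random_state=1))]
for name, model in models:
    scores = cross_val_score(model, X, y, scoring='neg_mean_squared_error', cv=5)
    print('%s MSE: %.3f' % (name, -scores.mean()))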



See Also