
Scikit-Learn VotingRegressor Model

VotingRegressor is an ensemble regression algorithm that combines the predictions from multiple other regression models to improve overall performance.

The key hyperparameters of VotingRegressor include estimators (the list of named base models to combine) and weights (optional per-model weights that control how much each model contributes to the averaged prediction).
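
For illustration, here is a minimal sketch of the weights hyperparameter; the two base models and the weight values are arbitrary choices, not part of the example below:

from sklearn.ensemble import VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# give the linear model twice the influence of the tree in the averaged prediction
weighted = VotingRegressor(
    estimators=[('lr', LinearRegression()), ('dt', DecisionTreeRegressor())],
    weights=[2, 1]
)

The fitted ensemble then predicts with a weighted average of the base models' predictions, so 'lr' counts twice as much as 'dt' here.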

The algorithm is appropriate for regression problems where leveraging multiple models’ strengths can yield better predictions.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# generate synthetic regression dataset
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# create base models
model1 = LinearRegression()
model2 = DecisionTreeRegressor()
model3 = SVR()

# create ensemble model
ensemble = VotingRegressor(estimators=[('lr', model1), ('dt', model2), ('svr', model3)])

# fit ensemble model
ensemble.fit(X_train, y_train)

# evaluate model
yhat = ensemble.predict(X_test)
mse = mean_squared_error(y_test, yhat)
print('Mean Squared Error: %.3f' % mse)

# make a prediction
row = [[-0.6, -0.1, 0.8, 0.3, 1.2]]
yhat = ensemble.predict(row)
print('Predicted: %.3f' % yhat[0])

Running the example gives an output like:

Mean Squared Error: 1424.325
Predicted: 22.844

The steps are as follows:

  1. First, a synthetic regression dataset is generated using the make_regression() function. This creates a dataset with a specified number of samples (n_samples) and features (n_features), adds Gaussian noise (noise), and uses a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().

  2. Next, three base models are instantiated: LinearRegression, DecisionTreeRegressor, and SVR.

  3. A VotingRegressor ensemble model is created, combining the three base models. The ensemble model is then fit on the training data using the fit() method, which fits a clone of each base model (see the sketch after this list).

  4. The performance of the ensemble model is evaluated by predicting on the test set and computing the mean squared error (MSE) with mean_squared_error().

  5. A single prediction is made by passing a new data sample to the predict() method of the ensemble model.
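
As a quick check (not part of the original example, and assuming the variables from the code above are still in scope), the fitted base models are available via the named_estimators_ attribute, and with no weights the ensemble prediction is simply the mean of their individual predictions:

import numpy as np

# predictions of each fitted base model on the test set
per_model = np.column_stack([est.predict(X_test)
                             for est in ensemble.named_estimators_.values()])

# with no weights, VotingRegressor averages the base predictions
manual = per_model.mean(axis=1)
print(np.allclose(manual, ensemble.predict(X_test)))  # expected: True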

This example demonstrates how to set up and use a VotingRegressor for regression tasks, highlighting how combining different models can improve predictive performance. The ensemble model leverages the strengths of individual models to make more accurate predictions on new data.
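
As an optional extension beyond the original example (reusing X and y from above; the random_state on the tree is an arbitrary choice for repeatability), comparing cross-validated errors of the individual models against the ensemble shows whether combining them actually helps on this dataset:

from sklearn.model_selection import cross_val_score

models = [
    ('lr', LinearRegression()),
    ('dt', DecisionTreeRegressor(random_state=1)),
    ('svr', SVR()),
    ('voting', VotingRegressor(estimators=[
        ('lr', LinearRegression()),
        ('dt', DecisionTreeRegressor(random_state=1)),
        ('svr', SVR())])),
]
for name, model in models:
    scores = cross_val_score(model, X, y, scoring='neg_mean_squared_error', cv=5)
    print('%s mean MSE: %.3f' % (name, -scores.mean()))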


