
Scikit-Learn ExtraTreesRegressor Model

Extra Trees Regressor (Extremely Randomized Trees) is an ensemble learning method for regression that builds many decision trees and averages their predictions. Unlike a random forest, it chooses split thresholds at random rather than searching for the best one for each feature, which further reduces variance and speeds up training at the cost of a small increase in bias.

The key hyperparameters of ExtraTreesRegressor include n_estimators (number of trees), max_features (number of features to consider when looking for the best split), and min_samples_split (minimum number of samples required to split an internal node).
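These hyperparameters can be tuned systematically with a grid search. A minimal sketch using GridSearchCV (the grid values here are illustrative, not recommended defaults):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import GridSearchCV

# small synthetic dataset for illustration
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)

# candidate values for the key hyperparameters
param_grid = {
    'n_estimators': [50, 100, 200],
    'max_features': ['sqrt', 1.0],
    'min_samples_split': [2, 5],
}

# 3-fold cross-validated search, scored by (negated) mean absolute error
search = GridSearchCV(ExtraTreesRegressor(random_state=1), param_grid,
                      scoring='neg_mean_absolute_error', cv=3)
search.fit(X, y)
print(search.best_params_)
```

Note that scikit-learn scorers follow a "higher is better" convention, so MAE is exposed as neg_mean_absolute_error.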

The algorithm is appropriate for regression problems where the goal is to predict a continuous output variable.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import mean_absolute_error

# generate regression dataset
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# create model
model = ExtraTreesRegressor(n_estimators=100, random_state=1)

# fit model
model.fit(X_train, y_train)

# evaluate model
yhat = model.predict(X_test)
mae = mean_absolute_error(y_test, yhat)
print('Mean Absolute Error: %.3f' % mae)

# make a prediction
row = [[-1.10325445, -0.49821356, -0.05962247, -0.89224592, -0.70158632]]
yhat = model.predict(row)
print('Predicted: %.3f' % yhat[0])

Running the example gives an output like:

Mean Absolute Error: 20.598
Predicted: -57.706

The steps are as follows:

  1. First, a synthetic regression dataset is generated using the make_regression() function. This creates a dataset with a specified number of samples (n_samples), features (n_features), noise level (noise), and a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().

  2. Next, an ExtraTreesRegressor model is instantiated with n_estimators set to 100 and a fixed random seed (random_state). The model is then fit on the training data using the fit() method.

  3. The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the mean absolute error (MAE) metric.

  4. A single prediction can be made by passing a new data sample to the predict() method.
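The evaluation in step 3 relies on a single train/test split, which can be noisy with only 100 samples. A sketch of a more stable estimate using 5-fold cross-validation:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score

# same synthetic dataset as the example above
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)
model = ExtraTreesRegressor(n_estimators=100, random_state=1)

# neg_mean_absolute_error is "higher is better"; negate to recover MAE
scores = -cross_val_score(model, X, y, scoring='neg_mean_absolute_error', cv=5)
print('MAE: %.3f (std %.3f)' % (scores.mean(), scores.std()))
```

Reporting the mean and standard deviation across folds gives a clearer picture of how the model will generalize than a single held-out MAE.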

This example demonstrates how to quickly set up and use an ExtraTreesRegressor model for regression tasks, showcasing the simplicity and effectiveness of this ensemble method in scikit-learn.

The model can be fit directly on the training data without the need for scaling or normalization. Once fit, the model can be used to make predictions on new data, enabling its use in real-world regression problems.



See Also