Hyperparameter tuning is a crucial step in optimizing machine learning models for best performance. In this example, we'll demonstrate how to use scikit-learn's GridSearchCV to perform hyperparameter tuning for ExtraTreesRegressor, a powerful ensemble learning algorithm for regression tasks.
Grid search is a method for evaluating different combinations of model hyperparameters to find the best performing configuration. It exhaustively searches through a specified parameter grid, trains and evaluates the model for each combination using cross-validation, and selects the hyperparameters that yield the best performance metric.
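To make this concrete, here is a rough sketch of the idea behind grid search, written out by hand with scikit-learn's ParameterGrid and cross_val_score. The dataset, the deliberately tiny grid, and the R^2 scoring choice are assumptions made for illustration; GridSearchCV below does the same work (plus bookkeeping and parallelism) for you.
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import ParameterGrid, cross_val_score
# Illustrative data and a small example grid
X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=42)
param_grid = {'n_estimators': [50, 100], 'min_samples_split': [2, 10]}
best_score, best_params = float('-inf'), None
for params in ParameterGrid(param_grid):  # every combination in the grid
    model = ExtraTreesRegressor(random_state=42, **params)
    score = cross_val_score(model, X, y, cv=5, scoring='r2').mean()  # CV score
    if score > best_score:
        best_score, best_params = score, params
print(best_params, round(best_score, 3))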
ExtraTreesRegressor is an ensemble learning method that fits many randomized decision trees and averages their predictions to improve predictive accuracy and control over-fitting. It is similar to Random Forest, but by default each tree is trained on the whole dataset rather than a bootstrap sample, and split thresholds are drawn at random rather than searched exhaustively, which usually makes training faster and further reduces variance.
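As a quick illustration of the family resemblance, the snippet below scores both estimators with their default settings on the same synthetic data; the exact numbers will depend on the data, but the two ensembles are drop-in replacements for one another in scikit-learn's API.
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score
# Synthetic data for illustration only
X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=42)
for Model in (ExtraTreesRegressor, RandomForestRegressor):
    scores = cross_val_score(Model(random_state=42), X, y, cv=5, scoring='r2')
    print(Model.__name__, round(scores.mean(), 3))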
The key hyperparameters for ExtraTreesRegressor include the number of trees in the forest (n_estimators), the number of features considered when looking for the best split (max_features), and the minimum number of samples required to split an internal node (min_samples_split).
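These hyperparameters map directly to constructor arguments; the values below are arbitrary examples for illustration, not tuned recommendations.
from sklearn.ensemble import ExtraTreesRegressor
# Example settings only -- the grid search below chooses the actual values
model = ExtraTreesRegressor(
    n_estimators=100,       # number of trees in the forest
    max_features='sqrt',    # features considered when looking for a split
    min_samples_split=5,    # minimum samples required to split a node
    random_state=42,
)
The complete tuning example below searches over these same three hyperparameters.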
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import ExtraTreesRegressor
# Generate synthetic regression dataset
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Define parameter grid
param_grid = {
    'n_estimators': [50, 100, 200],
    'max_features': [1.0, 'sqrt', 'log2'],  # 1.0 means all features ('auto' was removed in recent scikit-learn)
    'min_samples_split': [2, 5, 10]
}
# Perform grid search
grid_search = GridSearchCV(estimator=ExtraTreesRegressor(random_state=42),
                           param_grid=param_grid,
                           cv=5,
                           scoring='r2')
grid_search.fit(X_train, y_train)
# Report best score and parameters
print(f"Best score: {grid_search.best_score_:.3f}")
print(f"Best parameters: {grid_search.best_params_}")
# Evaluate on test set
best_model = grid_search.best_estimator_
r2_score = best_model.score(X_test, y_test)
print(f"Test set R^2 score: {r2_score:.3f}")
Running the example gives an output like:
Best score: 0.843
Best parameters: {'max_features': 'sqrt', 'min_samples_split': 2, 'n_estimators': 100}
Test set R^2 score: 0.854
The steps are as follows:
- Generate a synthetic regression dataset using scikit-learn's make_regression function.
- Split the dataset into train and test sets using train_test_split.
- Define the parameter grid with different values for the n_estimators, max_features, and min_samples_split hyperparameters.
- Perform grid search using GridSearchCV, specifying the ExtraTreesRegressor model, the parameter grid, 5-fold cross-validation, and the R^2 scoring metric.
- Report the best cross-validation score and the best set of hyperparameters found by grid search (the full per-combination results can also be inspected, as shown in the sketch after this list).
- Evaluate the best model on the hold-out test set and report the R^2 score.
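Beyond the best score and parameters, GridSearchCV also stores the per-combination cross-validation results in its cv_results_ attribute. Here is a minimal sketch of inspecting them, assuming the grid_search object from the example above and that pandas is available:
import pandas as pd
# Rank every hyperparameter combination by its mean cross-validated R^2
results = pd.DataFrame(grid_search.cv_results_)
columns = ['params', 'mean_test_score', 'std_test_score', 'rank_test_score']
print(results[columns].sort_values('rank_test_score').head())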
By using GridSearchCV, we can efficiently explore different hyperparameter settings and find the optimal configuration for our ExtraTreesRegressor model, enhancing its performance on regression tasks.