Ridge regression is linear regression with an L2 regularization penalty, which shrinks coefficients and is particularly useful for dealing with multicollinearity.
In scikit-learn, the `Ridge` class implements this algorithm. Its key hyperparameters include `alpha` (regularization strength), `solver` (the algorithm used to solve the optimization problem), and `max_iter` (the maximum number of iterations for the solver). Tuning these manually requires domain knowledge and can be time-consuming.
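For example, a manual configuration might look like the following sketch; the specific values for `alpha`, `solver`, and `max_iter` are illustrative assumptions, not recommendations:

from sklearn.linear_model import Ridge

# Manually chosen hyperparameters (illustrative values only);
# random_state matters here because the "saga" solver shuffles data
model = Ridge(alpha=0.5, solver="saga", max_iter=5000, random_state=42)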
`RidgeCV` extends `Ridge` by providing built-in cross-validation for automated hyperparameter tuning. Its key hyperparameters include `alphas` (the list of alpha values to try), `cv` (the number of cross-validation folds), and `scoring` (the metric to optimize).
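In practice, the `alphas` grid is often generated programmatically rather than hand-picked. A minimal sketch, assuming a log-spaced grid from 1e-3 to 1e3 (the bounds are an assumption):

import numpy as np
from sklearn.linear_model import RidgeCV

# Log-spaced grid of candidate alphas (bounds chosen for illustration)
alphas = np.logspace(-3, 3, 13)
search = RidgeCV(alphas=alphas, cv=5, scoring="neg_mean_squared_error")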
The main difference is that `RidgeCV` automates hyperparameter tuning using cross-validation, while `Ridge` requires manual tuning. This automation comes at a cost: the cross-validation process makes fitting more computationally expensive.
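One mitigating detail: when `cv` is left at its default of `None`, `RidgeCV` uses an efficient leave-one-out scheme rather than refitting the model once per fold, which keeps the tuning cost close to that of a single `Ridge` fit:

from sklearn.linear_model import RidgeCV

# Default cv=None triggers efficient leave-one-out cross-validation,
# avoiding a separate model fit per fold
fast_cv = RidgeCV(alphas=[0.1, 1.0, 10.0])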
`Ridge` is suitable for quick prototyping when good hyperparameter values are already known. `RidgeCV` is preferred when hyperparameters need to be tuned and model selection is necessary. The example below compares the two on a synthetic dataset:
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge, RidgeCV
from sklearn.metrics import mean_squared_error, r2_score
# Generate synthetic regression dataset
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Fit and evaluate Ridge regression with default hyperparameters
ridge = Ridge(random_state=42)
ridge.fit(X_train, y_train)
y_pred_ridge = ridge.predict(X_test)
print(f"Ridge MSE: {mean_squared_error(y_test, y_pred_ridge):.3f}")
print(f"Ridge R2 score: {r2_score(y_test, y_pred_ridge):.3f}")
# Fit and evaluate RidgeCV with cross-validation
ridge_cv = RidgeCV(cv=5, scoring='neg_mean_squared_error', alphas=[0.1, 1.0, 10.0])
ridge_cv.fit(X_train, y_train)
y_pred_ridge_cv = ridge_cv.predict(X_test)
print(f"\nRidgeCV MSE: {mean_squared_error(y_test, y_pred_ridge_cv):.3f}")
print(f"RidgeCV R2 score: {r2_score(y_test, y_pred_ridge_cv):.3f}")
print(f"Best alpha: {ridge_cv.alpha_}")
Running the example gives an output like:
Ridge MSE: 0.076
Ridge R2 score: 1.000
RidgeCV MSE: 0.012
RidgeCV R2 score: 1.000
Best alpha: 0.1
The steps are as follows:
- Generate a synthetic regression dataset using `make_regression`.
- Split the data into training and test sets using `train_test_split`.
- Instantiate `Ridge` with default hyperparameters, fit it on the training data, and evaluate its performance on the test set.
- Instantiate `RidgeCV` with 5-fold cross-validation and a list of alpha values to try, fit it on the training data, and evaluate its performance on the test set.
- Compare the test set performance (MSE and R2 score) of both models and print the best alpha value found by `RidgeCV`, which can then be reused as sketched below.
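As a follow-up, the selected alpha can be reused in a plain `Ridge` model, for example to retrain without the cross-validation overhead. A sketch, assuming the `ridge_cv`, `X_train`, and `y_train` variables from the example above:

from sklearn.linear_model import Ridge

# Reuse the alpha chosen by RidgeCV in a plain Ridge model
final_model = Ridge(alpha=ridge_cv.alpha_)
final_model.fit(X_train, y_train)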