Bayesian Ridge Regression is a linear regression algorithm that models uncertainty in the model parameters. It introduces regularization parameters that are estimated from the data during fitting, reducing the risk of overfitting.
The key hyperparameters of BayesianRidge include alpha_1, alpha_2, lambda_1, and lambda_2. These are the shape and rate parameters of the Gamma priors placed over alpha (the precision of the noise) and lambda (the precision of the weights), and they control how strongly the model is regularized. The algorithm is appropriate for regression problems.
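The hyperparameters above can be set explicitly when the model is constructed. A minimal sketch (the 1e-6 values are scikit-learn's documented defaults, shown here only to make them visible):

```python
from sklearn.linear_model import BayesianRidge

# All four Gamma-prior hyperparameters default to 1e-6, which gives
# broad, weakly informative priors over the noise and weight precisions.
model = BayesianRidge(alpha_1=1e-6, alpha_2=1e-6,
                      lambda_1=1e-6, lambda_2=1e-6)
print(model.get_params()['alpha_1'])
```

In practice these rarely need tuning; the defaults let the data dominate the posterior.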
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import BayesianRidge
from sklearn.metrics import mean_squared_error
# generate regression dataset
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
# create model
model = BayesianRidge()
# fit model
model.fit(X_train, y_train)
# evaluate model
yhat = model.predict(X_test)
mse = mean_squared_error(y_test, yhat)
print('MSE: %.3f' % mse)
# make a prediction
row = [[1.91783117, -0.45885096, -0.32742464, 0.36951487, 1.23456858]]
yhat = model.predict(row)
print('Predicted: %.3f' % yhat[0])
Running the example gives an output like:
MSE: 0.010
Predicted: 14.805
The steps are as follows:

First, a synthetic regression dataset is generated using the make_regression() function. This creates a dataset with a specified number of samples (n_samples), features (n_features), noise level (noise), and a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().

Next, a BayesianRidge model is instantiated with default hyperparameters. The model is then fit on the training data using the fit() method.

The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the mean squared error metric. A single prediction can be made by passing a new data sample to the predict() method.
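Because the model is Bayesian, predict() can also return a per-sample standard deviation from the predictive distribution via return_std=True. A short sketch, reusing the same synthetic dataset as above:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import BayesianRidge

# same synthetic dataset as in the main example
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)
model = BayesianRidge()
model.fit(X, y)

# return_std=True yields one uncertainty estimate per prediction
yhat, yhat_std = model.predict(X[:3], return_std=True)
print(yhat)
print(yhat_std)
```

The standard deviations quantify how confident the model is in each prediction, something a plain Ridge regression cannot provide.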
This example demonstrates how to quickly set up and use a BayesianRidge model for regression tasks, showcasing the simplicity and effectiveness of this algorithm in scikit-learn.
The model automatically handles the regularization and can be fit directly on the training data. Once fit, the model can be used to make predictions on new data, enabling its use in real-world regression problems.
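The learned regularization can be inspected after fitting: the fitted attributes alpha_ and lambda_ hold the estimated noise and weight precisions, both inferred from the data rather than tuned by hand. A brief sketch:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import BayesianRidge

X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)
model = BayesianRidge().fit(X, y)

# alpha_ is the estimated precision of the noise,
# lambda_ the estimated precision of the weights
print('noise precision alpha_: %.3f' % model.alpha_)
print('weight precision lambda_: %.3f' % model.lambda_)
```

A large alpha_ indicates the model believes the observation noise is small; lambda_ plays the role of the penalty strength that ordinary Ridge would require as a manually chosen hyperparameter.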