
Scikit-Learn KernelRidge Regression Model

Kernel Ridge Regression combines ridge regression (linear least squares with L2 regularization) with the kernel trick, allowing it to model complex, non-linear relationships in regression tasks.

The key hyperparameters of KernelRidge include alpha (regularization strength), kernel (type of kernel function such as linear, polynomial, or RBF), and gamma (kernel coefficient used by kernels such as RBF and polynomial).
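
Because these hyperparameters strongly influence the fit, they are usually tuned rather than set by hand. The snippet below is a minimal sketch of one way to do this with GridSearchCV; the grid values are illustrative assumptions, not tuned recommendations.

# a minimal sketch of tuning KernelRidge hyperparameters with GridSearchCV;
# the parameter grid values are illustrative assumptions, not recommendations
from sklearn.datasets import make_regression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

# small synthetic dataset for the search
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)

# search over regularization strength, kernel type, and kernel coefficient
# (gamma is ignored for the linear kernel)
param_grid = {
    'alpha': [0.1, 1.0, 10.0],
    'kernel': ['linear', 'rbf'],
    'gamma': [0.01, 0.1, 1.0],
}
search = GridSearchCV(KernelRidge(), param_grid, scoring='neg_mean_squared_error', cv=5)
search.fit(X, y)
print(search.best_params_)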

The algorithm is appropriate for regression problems where the relationship between the features and the target variable is non-linear.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error

# generate synthetic regression dataset
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# create model
model = KernelRidge(alpha=1.0, kernel='rbf', gamma=0.1)

# fit model
model.fit(X_train, y_train)

# evaluate model
yhat = model.predict(X_test)
mse = mean_squared_error(y_test, yhat)
print('Mean Squared Error: %.3f' % mse)

# make a prediction
row = [[-1.10325445, -0.49821356, -0.05962247, -0.89224592, -0.70158632]]
yhat = model.predict(row)
print('Predicted: %.3f' % yhat[0])

Running the example gives an output like:

Mean Squared Error: 567.353
Predicted: -77.482

The steps are as follows:

  1. First, a synthetic regression dataset is generated using the make_regression() function. This creates a dataset with a specified number of samples (n_samples), features (n_features), noise level (noise), and a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().

  2. Next, a KernelRidge model is instantiated with specified hyperparameters (alpha, kernel, and gamma). The model is then fit on the training data using the fit() method.

  3. The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the mean squared error metric.

  4. A single prediction can be made by passing a new data sample to the predict() method.

This example demonstrates how to set up and use a KernelRidge model for regression tasks, highlighting the ability of this algorithm to handle non-linear relationships between features and the target variable.
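
To make the non-linear point concrete, the following is a minimal sketch on an assumed sine-shaped, one-dimensional dataset, comparing KernelRidge with an RBF kernel against a plain linear Ridge model; the kernel model is expected to reach a noticeably lower test error on this kind of data.

# a minimal sketch contrasting KernelRidge (RBF kernel) with plain Ridge
# on a non-linear target; the sine-shaped data is an assumed illustration
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# one-dimensional input with a sine-shaped, non-linear relationship
rng = np.random.RandomState(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# a linear model cannot capture the curvature; the RBF kernel can
linear = Ridge(alpha=1.0).fit(X_train, y_train)
kernel = KernelRidge(alpha=1.0, kernel='rbf', gamma=1.0).fit(X_train, y_train)

print('Ridge MSE: %.3f' % mean_squared_error(y_test, linear.predict(X_test)))
print('KernelRidge MSE: %.3f' % mean_squared_error(y_test, kernel.predict(X_test)))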



See Also