
Scikit-Learn Gaussian Process with "RBF" Kernel

A Gaussian Process (GP) is a powerful probabilistic model used for regression and classification tasks. It is particularly useful when working with small datasets or when a measure of uncertainty is required for predictions.

The RBF (Radial Basis Function) kernel, also known as the Gaussian kernel, is a covariance function used in GP that measures the similarity between input points based on their distance. This kernel is suitable for problems where the relationship between inputs and outputs is non-linear, making it a good choice for complex regression problems.
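As a quick illustration of this similarity measure, the kernel value for a pair of points can be evaluated directly by calling an RBF instance on an input array. The manual formula in the comment below is a sketch of what the kernel computes; the example inputs are chosen purely for illustration.

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

# Two 1-D input points at distance 2 apart
X = np.array([[0.0], [2.0]])

# RBF kernel with unit length scale: k(x, x') = exp(-||x - x'||^2 / (2 * l^2))
kernel = RBF(length_scale=1.0)
K = kernel(X)  # 2x2 covariance matrix

print(K[0, 1])               # similarity between the two points: exp(-2) ~ 0.1353
print(np.exp(-2.0**2 / 2))   # the same value computed from the formula
```

Note that the diagonal entries of K are 1.0: each point is maximally similar to itself, and similarity decays toward zero as the distance between points grows.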

The key hyperparameters for the RBF kernel are the length_scale and length_scale_bounds. The length_scale determines how far apart two points need to be for their covariance to decrease significantly, with typical values ranging from 0.1 to 10. The length_scale_bounds define the range within which the length_scale is allowed to vary when it is optimized during fitting (scikit-learn tunes it by maximizing the log-marginal likelihood).
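To see how length_scale controls the decay of covariance, one can evaluate the kernel for a fixed pair of points at several length scales. This is a small sketch, separate from the main example below:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

# Two points a fixed distance of 1 apart
X = np.array([[0.0], [1.0]])

for length_scale in [0.1, 1.0, 10.0]:
    K = RBF(length_scale=length_scale)(X)
    # Larger length scales keep distant points strongly correlated
    print(f"length_scale={length_scale}: k(x, x') = {K[0, 1]:.4f}")
```

With length_scale=0.1 the covariance between the two points is essentially zero, while with length_scale=10 it stays close to 1, which is why this hyperparameter governs how smooth the fitted function is.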

The RBF kernel is appropriate for regression problems where the relationship between inputs and outputs is non-linear.

from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import numpy as np

# Prepare a synthetic dataset
X = np.random.uniform(low=-5, high=5, size=(100, 3))
y = np.sin(X[:, 0]) + np.cos(X[:, 1]) + 0.5 * X[:, 2] + np.random.normal(loc=0, scale=0.1, size=(100,))

# Create an instance of GaussianProcessRegressor with RBF kernel
kernel = RBF(length_scale=1.0, length_scale_bounds=(1e-1, 10.0))
gp = GaussianProcessRegressor(kernel=kernel, random_state=0)

# Split the dataset into train and test portions
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit the model on the training data
gp.fit(X_train, y_train)

# Evaluate the model's performance using mean squared error
y_pred = gp.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error: {mse:.2f}")

# Make a prediction using the fitted model on a test sample
test_sample = np.array([[1, -2, 3]])
pred = gp.predict(test_sample)
print(f"Predicted value for test sample: {pred[0]:.2f}")

Running the example gives an output like:

Mean Squared Error: 0.11
Predicted value for test sample: 2.13

The key steps in this code example are:

  1. Dataset preparation: A synthetic dataset is generated where the target variable has a non-linear relationship with the input features, plus some random noise.

  2. Model instantiation and configuration: An instance of GaussianProcessRegressor is created with the RBF kernel, and relevant hyperparameters are set.

  3. Model training: The dataset is split into train and test portions, and the model is fitted on the training data.

  4. Model evaluation: The model’s performance is evaluated using mean squared error on the test set.

  5. Inference on test sample(s): A prediction is made using the fitted model on one test sample, demonstrating how the model can be used for inference on new data.
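Since the introduction highlights uncertainty estimates as a key strength of GPs, it is worth noting that predict also accepts return_std=True, which returns the standard deviation of the predictive distribution alongside the mean. A minimal sketch, reusing the same synthetic-data recipe as the example above (the specific inputs are illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Same synthetic data recipe as the main example, with a seeded RNG
rng = np.random.RandomState(0)
X = rng.uniform(low=-5, high=5, size=(100, 3))
y = np.sin(X[:, 0]) + np.cos(X[:, 1]) + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=100)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), random_state=0)
gp.fit(X, y)

# return_std=True yields the predictive standard deviation for each sample
mean, std = gp.predict(np.array([[1, -2, 3]]), return_std=True)
print(f"mean={mean[0]:.2f}, std={std[0]:.2f}")
```

The standard deviation is small near training points and grows in regions the model has not seen, which is exactly the kind of uncertainty information that distinguishes GPs from point-estimate regressors.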



See Also