
Scikit-Learn GaussianProcessRegressor Model

Gaussian Process Regression (GPR) is a non-parametric kernel-based probabilistic model. It is useful for regression problems, providing not only predictions but also uncertainty estimates.

Key hyperparameters include the kernel (determines the covariance function of the process), alpha (a value added to the diagonal of the kernel matrix during fitting, acting as regularization and accounting for noise in the targets), and n_restarts_optimizer (number of restarts of the optimizer when searching for the kernel hyperparameters that maximize the log-marginal likelihood).
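Kernels can also be combined to express richer covariance structures. The sketch below (an illustrative configuration, not part of the example that follows) composes a scaled RBF kernel with a WhiteKernel noise term and sets the hyperparameters named above:

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

# composite kernel: signal variance * RBF covariance + independent noise term
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)

model = GaussianProcessRegressor(
    kernel=kernel,           # covariance function of the process
    alpha=1e-2,              # added to the kernel matrix diagonal during fitting
    n_restarts_optimizer=5,  # extra optimizer restarts to escape local optima
)
print(model.get_params()['n_restarts_optimizer'])
```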

The algorithm is appropriate for regression problems where a probabilistic approach and uncertainty estimation are beneficial.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import mean_squared_error

# generate regression dataset
X, y = make_regression(n_samples=100, n_features=1, noise=0.1, random_state=1)

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# define kernel
kernel = RBF(length_scale=1.0)

# create model
model = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, n_restarts_optimizer=10)

# fit model
model.fit(X_train, y_train)

# evaluate model
yhat = model.predict(X_test)
mse = mean_squared_error(y_test, yhat)
print('Mean Squared Error: %.3f' % mse)

# make a prediction
row = [[0.5]]
yhat = model.predict(row)
print('Predicted: %.3f' % yhat[0])

Running the example gives an output like:

Mean Squared Error: 0.455
Predicted: 40.512

The steps are as follows:

  1. First, a synthetic regression dataset is generated using the make_regression() function. This creates a dataset with a specified number of samples (n_samples), features (n_features), and a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().

  2. Next, a GaussianProcessRegressor model is instantiated with an RBF kernel, a small regularization term (alpha), and multiple restarts for the optimizer (n_restarts_optimizer). The model is then fit on the training data using the fit() method.

  3. The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the mean squared error metric.

  4. A single prediction can be made by passing a new data sample to the predict() method.
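The uncertainty estimates mentioned earlier are available directly from predict() by passing return_std=True, which returns a per-sample standard deviation alongside the predicted mean. A brief sketch, reusing the same synthetic dataset:

```python
from sklearn.datasets import make_regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# same synthetic dataset as in the main example
X, y = make_regression(n_samples=100, n_features=1, noise=0.1, random_state=1)

model = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
model.fit(X, y)

# return_std=True yields the predictive standard deviation as well as the mean
mean, std = model.predict([[0.5]], return_std=True)
print('Predicted: %.3f +/- %.3f' % (mean[0], std[0]))
```

The standard deviation is small near the training data and grows for inputs far from it, which is what makes GPR useful when you need to know how much to trust a prediction.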

This example demonstrates the use of GaussianProcessRegressor for regression tasks, highlighting its ability to provide both predictions and uncertainty estimates, which is particularly valuable in applications where knowing the confidence of a prediction matters.



See Also