
Scikit-Learn RidgeClassifierCV Model

Ridge Classifier with built-in cross-validation (RidgeClassifierCV) classifies data with an L2-regularized linear model and automatically selects the best regularization strength (alpha) by cross-validating over a grid of candidate values. This example demonstrates how to use RidgeClassifierCV to classify data efficiently.

RidgeClassifierCV is suitable for both binary and multi-class classification problems. Key hyperparameters include alphas (the grid of regularization strengths to try), cv (an integer number of folds or a cross-validation generator; the default uses an efficient leave-one-out scheme), and scoring (the metric used to pick the best alpha).
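
As a quick illustration of these parameters, the snippet below sets all three explicitly. The alpha grid and the choice of accuracy as the scoring metric are illustrative, not recommendations:

from sklearn.linear_model import RidgeClassifierCV

# illustrative configuration: wider alpha grid, 5-fold CV, accuracy as the selection metric
model = RidgeClassifierCV(alphas=[0.01, 0.1, 1.0, 10.0, 100.0], cv=5, scoring='accuracy')

The complete worked example below sticks to a smaller alpha grid and the default scoring.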

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import RidgeClassifierCV
from sklearn.metrics import accuracy_score

# generate binary classification dataset
X, y = make_classification(n_samples=100, n_features=5, n_classes=2, random_state=1)

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# create model
model = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0], cv=5)

# fit model
model.fit(X_train, y_train)

# evaluate model
yhat = model.predict(X_test)
acc = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % acc)

# make a prediction
row = [[-1.10325445, -0.49821356, -0.05962247, -0.89224592, -0.70158632]]
yhat = model.predict(row)
print('Predicted: %d' % yhat[0])

Running the example gives an output like:

Accuracy: 0.950
Predicted: 0

The steps are as follows:

  1. First, a synthetic binary classification dataset is generated using the make_classification() function. This creates a dataset with a specified number of samples (n_samples), input features (n_features), and classes (n_classes), plus a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().

  2. Next, a RidgeClassifierCV model is instantiated with a list of alpha values for regularization strength and the number of cross-validation folds (cv). A short sketch after this list shows how to inspect which alpha the cross-validation selected.

  3. The model is then fit on the training data using the fit() method.

  4. The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the accuracy score metric.

  5. A single prediction can be made by passing a new data sample to the predict() method.
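
As referenced in step 2, once the model is fit you can inspect the regularization strength chosen by cross-validation. A minimal sketch, reusing the fitted model from the example above:

# alpha value selected by cross-validation
print('Selected alpha: %.3f' % model.alpha_)

# in recent scikit-learn versions, the corresponding cross-validation score is also stored
print('Best CV score: %.3f' % model.best_score_)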

This example demonstrates how to quickly set up and use a RidgeClassifierCV model for classification tasks, showcasing the simplicity and effectiveness of this algorithm in scikit-learn.
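
Since RidgeClassifierCV also handles multi-class problems directly, the same workflow carries over unchanged. A minimal sketch with a synthetic three-class dataset (the dataset parameters here are illustrative):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import RidgeClassifierCV
from sklearn.metrics import accuracy_score

# three-class dataset; n_informative is raised so three classes can be generated
X, y = make_classification(n_samples=200, n_features=5, n_informative=3, n_classes=3, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# same estimator; no changes are needed for multi-class targets
model = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0], cv=5)
model.fit(X_train, y_train)
print('Accuracy: %.3f' % accuracy_score(y_test, model.predict(X_test)))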


