
Scikit-Learn OneVsOneClassifier Model

OneVsOneClassifier is a multi-class strategy that extends binary classifiers to multi-class problems. It fits one binary classifier per pair of classes and predicts by combining their votes. This approach is useful for algorithms that perform well on binary tasks but do not natively support multi-class classification.
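
To make the pairwise scheme concrete, the following is a minimal sketch (the dataset parameters are arbitrary and chosen only for illustration) that fits the strategy on a 3-class problem and counts the fitted pairwise classifiers via the estimators_ attribute, which holds one binary model per pair of classes:

from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

# small synthetic 3-class dataset (illustration only)
X, y = make_classification(n_samples=100, n_features=5, n_classes=3, n_informative=3, n_clusters_per_class=1, random_state=1)

# one binary SVC is fit for each pair of classes: 3 * (3 - 1) / 2 = 3
ovo = OneVsOneClassifier(SVC()).fit(X, y)
print(len(ovo.estimators_))  # expected: 3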

Key hyperparameters include the base estimator (estimator, the binary classifier fit for each pair of classes) and the number of jobs to run in parallel (n_jobs).
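
As a minimal sketch of these hyperparameters (LogisticRegression is used here purely as an example base estimator; any binary classifier would do):

from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier

# estimator: the binary classifier fit on each pair of classes
# n_jobs: number of pairwise fits to run in parallel (-1 uses all available cores)
model = OneVsOneClassifier(estimator=LogisticRegression(max_iter=1000), n_jobs=-1)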

OneVsOneClassifier is appropriate for multi-class classification problems.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# generate multi-class classification dataset
X, y = make_classification(n_samples=100, n_features=5, n_classes=3, n_informative=3, n_clusters_per_class=1, random_state=1)

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# create OneVsOneClassifier with SVC as base estimator
model = OneVsOneClassifier(SVC())

# fit model
model.fit(X_train, y_train)

# evaluate model
yhat = model.predict(X_test)
acc = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % acc)

# make a prediction
row = [[-1.10325445, -0.49821356, -0.05962247, -0.89224592, -0.70158632]]
yhat = model.predict(row)
print('Predicted: %d' % yhat[0])

Running the example gives an output like:

Accuracy: 0.900
Predicted: 2

The steps are as follows:

  1. First, a synthetic multi-class classification dataset is generated using the make_classification() function. This creates a dataset with a specified number of samples (n_samples), classes (n_classes), and a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().

  2. Next, a OneVsOneClassifier is instantiated with SVC as the base estimator. The model is then fit on the training data using the fit() method.

  3. The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the accuracy score metric (a cross-validated alternative is sketched after this list).

  4. A single prediction can be made by passing a new data sample to the predict() method.
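
For a more robust estimate than the single train/test split used above, the model can also be scored with cross-validation. The following is a minimal sketch that reuses the same synthetic dataset; the 5-fold setup is an arbitrary choice:

from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

# same synthetic multi-class dataset as the example above
X, y = make_classification(n_samples=100, n_features=5, n_classes=3, n_informative=3, n_clusters_per_class=1, random_state=1)

# 5-fold cross-validated accuracy of the one-vs-one model
model = OneVsOneClassifier(SVC())
scores = cross_val_score(model, X, y, scoring='accuracy', cv=5)
print('Mean accuracy: %.3f' % mean(scores))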

This example demonstrates how to set up and use a OneVsOneClassifier with a support vector classifier (SVC) for multi-class classification tasks. It highlights how this pairwise strategy lets a binary classifier handle multi-class problems.


