The d2_log_loss_score() metric measures the fractional reduction in log loss (negative log-likelihood) achieved by the model compared to a no-skill model. It is calculated by comparing the log loss of the model to the log loss of a baseline model that always predicts the observed frequency of each class, regardless of the input features. A score of 1.0 is perfect, a score of 0.0 means the model does no better than the baseline, and negative scores indicate a model that is worse than the baseline.
This metric is useful for classification problems where the accuracy of the probability estimates is important. However, it may be less suitable for imbalanced datasets or for cases where the absolute value of the log loss, rather than the relative improvement, is what matters.
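As a quick illustration of the baseline behaviour (a minimal sketch with made-up probabilities, separate from the worked example below), predicting the observed class frequencies for every sample gives a score of 0.0, while more informative probabilities push the score toward 1.0:

import numpy as np
from sklearn.metrics import d2_log_loss_score  # available in recent scikit-learn releases

# Five labels: 60% class 0, 40% class 1
y_true = np.array([0, 0, 0, 1, 1])
# No-skill predictions: always the observed class frequencies
no_skill = np.tile([0.6, 0.4], (len(y_true), 1))
# More confident predictions that favour the correct class
better = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.2, 0.8], [0.1, 0.9]])
print(d2_log_loss_score(y_true, no_skill))  # 0.0, no better than the baseline
print(d2_log_loss_score(y_true, better))    # about 0.7, closer to a perfect 1.0

The complete example below applies the metric to a logistic regression model trained on a synthetic dataset: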
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import d2_log_loss_score
# Generate synthetic dataset
X, y = make_classification(n_samples=1000, n_classes=2, random_state=42)
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train a Logistic Regression classifier
clf = LogisticRegression(random_state=42)
clf.fit(X_train, y_train)
# Predict probabilities on the test set
y_pred_proba = clf.predict_proba(X_test)
# Calculate d2 log loss score
d2_ll_score = d2_log_loss_score(y_test, y_pred_proba)
print(f"d2 Log Loss Score: {d2_ll_score:.2f}")
Running the example gives an output like:
d2 Log Loss Score: 0.47
- Generate a synthetic binary classification dataset using make_classification().
- Split the dataset into training and test sets using train_test_split().
- Train a LogisticRegression classifier on the training set.
- Use the trained classifier to predict probabilities on the test set with predict_proba().
- Calculate the d2 log loss score using d2_log_loss_score() by comparing the predicted probabilities to the true labels.
First, generate a synthetic binary classification dataset using the make_classification() function from scikit-learn. This function creates a dataset with 1000 samples and 2 classes.
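If you want to inspect the generated data (an optional aside, reusing X and y from the example above), X is the feature matrix and y holds the binary labels; with the default settings, make_classification() produces 20 features and roughly balanced classes:

import numpy as np

print(X.shape)         # (1000, 20): 1000 samples, 20 features by default
print(np.bincount(y))  # roughly 500 samples in each class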
Next, split the dataset into training and test sets using the train_test_split() function, reserving 20% of the data for testing.
Train a classifier using the LogisticRegression class from scikit-learn. The fit() method is called on the classifier object with the training features (X_train) and labels (y_train).
After training, predict probabilities on the test set by calling the predict_proba() method with X_test. This generates predicted probabilities for each class.
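Each row of the returned array corresponds to one test sample and each column to one class, in the order given by clf.classes_, with the probabilities in each row summing to 1. A quick optional check, reusing the variables from the example:

print(y_pred_proba.shape)            # (200, 2): 200 test samples, 2 classes
print(clf.classes_)                  # column order of the probabilities
print(y_pred_proba[:3].sum(axis=1))  # each row sums to 1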
Finally, evaluate the model using the d2_log_loss_score() function. This function takes the true labels (y_test) and the predicted probabilities (y_pred_proba) as input and calculates the fractional reduction in log loss compared to a no-skill model. The resulting score is printed to quantify the model's performance.
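To see where this number comes from (a rough sketch of the definition, reusing y_test and y_pred_proba from the example; the helper variables are illustrative), the score is one minus the ratio of the model's log loss to the log loss of a baseline that always predicts the class frequencies observed in y_test:

import numpy as np
from sklearn.metrics import log_loss

# Log loss of the fitted classifier on the test set
ll_model = log_loss(y_test, y_pred_proba)
# Log loss of a no-skill baseline that always predicts the class frequencies of y_test
class_freq = np.bincount(y_test) / len(y_test)
ll_baseline = log_loss(y_test, np.tile(class_freq, (len(y_test), 1)))
# D2 log loss score: 1 - (model log loss / baseline log loss)
print(1 - ll_model / ll_baseline)  # should closely match d2_log_loss_score(y_test, y_pred_proba)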