
Configure StackingClassifier "verbose" Parameter

The verbose parameter in scikit-learn’s StackingClassifier controls the verbosity of output during training and prediction.

StackingClassifier is an ensemble method that combines multiple base classifiers by training a meta-classifier on their outputs. The verbose parameter determines how much information is printed during the fitting process.
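Conceptually, stacking can be sketched by hand: fit the base classifiers, collect their predictions, and train the meta-classifier on those predictions. (StackingClassifier actually uses cross-validated predictions to build the meta-features and avoid leakage; this simplified sketch skips that step.)

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=42)

# Fit each base classifier on the training data
base_models = [RandomForestClassifier(n_estimators=10, random_state=42),
               SVC(kernel='rbf', random_state=42)]
for model in base_models:
    model.fit(X, y)

# Stack base-model predictions column-wise to form meta-features
meta_features = np.column_stack([model.predict(X) for model in base_models])

# Train the meta-classifier on the base models' outputs
meta_clf = LogisticRegression(random_state=42).fit(meta_features, y)
print(meta_features.shape)  # → (200, 2), one column per base model
```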

When verbose is set to 0, no output is produced. Higher values increase the level of detail in the progress messages, with 1 showing basic progress and 2 or greater showing more detailed information.

The default value for verbose is 0, meaning no output is produced.

Common values for verbose are 0 (silent), 1 (basic progress), and 2 (detailed progress).

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Generate synthetic dataset
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=0, random_state=42)

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define base estimators and final estimator
estimators = [
    ('rf', RandomForestClassifier(n_estimators=10, random_state=42)),
    ('svm', SVC(kernel='rbf', random_state=42))
]
final_estimator = LogisticRegression(random_state=42)

# Train with different verbose values
verbose_values = [0, 1, 2]

for verbose in verbose_values:
    clf = StackingClassifier(estimators=estimators, final_estimator=final_estimator, verbose=verbose)
    print(f"\nTraining StackingClassifier with verbose={verbose}")
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    print(f"Accuracy: {accuracy:.3f}")

Running the example gives output like the following. With verbose=1 and 2, additional progress messages from the internal parallel fitting and cross-validation also appear between these lines:


Training StackingClassifier with verbose=0
Accuracy: 0.915

Training StackingClassifier with verbose=1
Accuracy: 0.915

Training StackingClassifier with verbose=2
Accuracy: 0.915
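As the identical accuracies suggest, verbose only changes what is logged, not the fitted model. A quick check (a small sketch reusing the same dataset and estimator setup as the example above):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, n_informative=5,
                           n_redundant=0, random_state=42)

def make_clf(verbose):
    return StackingClassifier(
        estimators=[('rf', RandomForestClassifier(n_estimators=10, random_state=42)),
                    ('svm', SVC(kernel='rbf', random_state=42))],
        final_estimator=LogisticRegression(random_state=42),
        verbose=verbose)

# Same data, same random_state: predictions should match exactly
pred_silent = make_clf(0).fit(X, y).predict(X)
pred_verbose = make_clf(2).fit(X, y).predict(X)
print(np.array_equal(pred_silent, pred_verbose))  # True
```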

The key steps in this example are:

  1. Generate a synthetic binary classification dataset
  2. Split the data into train and test sets
  3. Define base estimators (Random Forest and SVM) and final estimator (Logistic Regression)
  4. Create StackingClassifier instances with different verbose values
  5. Train each model and observe the output produced
  6. Evaluate the accuracy of each model on the test set
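Because verbose is an ordinary estimator parameter, it can also be changed on an existing model with set_params, for example to silence a classifier you tuned interactively before deploying it (a minimal sketch using the same estimator setup as above):

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

estimators = [('rf', RandomForestClassifier(n_estimators=10, random_state=42)),
              ('svm', SVC(kernel='rbf', random_state=42))]

# Start with detailed progress output during experimentation
clf = StackingClassifier(estimators=estimators,
                         final_estimator=LogisticRegression(random_state=42),
                         verbose=2)

# Silence it before deployment; set_params modifies the estimator in place
clf.set_params(verbose=0)
print(clf.verbose)  # 0
```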

Some tips for using verbose:

  1. Use verbose=1 during development to confirm that a long-running fit is making progress, and verbose=2 or greater when you want more detail
  2. Keep the default verbose=0 in production or automated pipelines so logs stay clean; verbose has no effect on the fitted model or its predictions

Issues to consider:

  1. The progress messages come from the internal parallel fitting and cross-validation (via joblib), so their exact format depends on the joblib version and the n_jobs setting
  2. With n_jobs greater than 1, verbose output can interleave unpredictably with your own print statements


See Also