The verbose parameter in scikit-learn's StackingClassifier controls how much progress information is printed while the ensemble is fitted.
StackingClassifier is an ensemble method that combines multiple base classifiers by training a meta-classifier on their outputs. The verbose parameter determines how much information is printed during the fitting process, which includes cross-validating each base estimator to build the inputs for the final estimator.
When verbose is set to 0, no output is produced. Higher values increase the level of detail in the progress messages, with 1 showing basic progress and 2 or greater showing more detailed information.
The default value for verbose is 0, meaning no output is produced.
Common values for verbose are 0 (silent), 1 (basic progress), and 2 (detailed progress).
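As a quick, minimal sketch (using a throwaway LogisticRegression base estimator just to satisfy the constructor), the silent default can be confirmed on a bare instance:
import os  # not needed here; shown later for configuration-driven verbosity
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression

# Minimal sketch: confirm that StackingClassifier is silent by default.
clf = StackingClassifier(estimators=[('lr', LogisticRegression())])
print(clf.verbose)  # prints 0
The complete example below then fits the ensemble with each of the common verbosity levels in turn.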
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import StackingClassifier
from sklearn.metrics import accuracy_score
# Generate synthetic dataset
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=0, random_state=42)
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Define base estimators and final estimator
estimators = [
    ('rf', RandomForestClassifier(n_estimators=10, random_state=42)),
    ('svm', SVC(kernel='rbf', random_state=42))
]
final_estimator = LogisticRegression(random_state=42)
# Train with different verbose values
verbose_values = [0, 1, 2]
for verbose in verbose_values:
    clf = StackingClassifier(estimators=estimators,
                             final_estimator=final_estimator,
                             verbose=verbose)
    print(f"\nTraining StackingClassifier with verbose={verbose}")
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    print(f"Accuracy: {accuracy:.3f}")
Running the example gives an output like:
Training StackingClassifier with verbose=0
Accuracy: 0.915
Training StackingClassifier with verbose=1
Accuracy: 0.915
Training StackingClassifier with verbose=2
Accuracy: 0.915
The key steps in this example are:
- Generate a synthetic binary classification dataset
- Split the data into train and test sets
- Define base estimators (Random Forest and SVM) and final estimator (Logistic Regression)
- Create StackingClassifier instances with different verbose values
- Train each model and observe the output produced
- Evaluate the accuracy of each model on the test set
Some tips for using verbose:
- Use verbose=0 for silent operation in production environments
- Set verbose=1 for basic progress information during development (a sketch after this list shows one way to switch between these settings)
- Use verbose=2 or higher for detailed debugging information
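One way to apply the first two tips is to drive the verbosity from configuration rather than hard-coding it. This is a minimal sketch, not an established convention: the STACKING_VERBOSE environment variable is an illustrative assumption, and the estimators, final_estimator, X_train and y_train names are reused from the example above.
import os

# Illustrative assumption: STACKING_VERBOSE is a project-specific environment
# variable, not something scikit-learn reads itself. Defaults to 0 (silent).
verbosity = int(os.environ.get("STACKING_VERBOSE", "0"))

clf = StackingClassifier(estimators=estimators,
                         final_estimator=final_estimator,
                         verbose=verbosity)
clf.fit(X_train, y_train)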
Issues to consider:
- Higher verbose values produce extra console output, which can clutter logs and add some overhead during long training runs on large datasets
- In a production environment, consider redirecting verbose output to log files instead of stdout (see the sketch after this list)
- The verbosity level doesn’t affect the model’s performance, only the amount of information displayed during training
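As a sketch of the log-file suggestion above, the training output can be captured in a file rather than printed to the console. Depending on the joblib version and the verbosity level, progress messages may be written to stdout or to stderr, so both streams are redirected here; output from worker processes (when n_jobs is set) may still bypass this redirection. The estimator and data names are again reused from the main example.
import contextlib

# Sketch: capture verbose training output in a log file instead of the console.
# Both stdout and stderr are redirected because joblib may use either stream.
clf = StackingClassifier(estimators=estimators,
                         final_estimator=final_estimator,
                         verbose=2)
with open("stacking_training.log", "w") as log_file, \
        contextlib.redirect_stdout(log_file), \
        contextlib.redirect_stderr(log_file):
    clf.fit(X_train, y_train)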