The verbose parameter in scikit-learn's VotingClassifier controls the verbosity of output during fitting.

VotingClassifier is an ensemble method that combines the predictions of multiple base classifiers. It can use either hard voting (majority class) or soft voting (averaged class probabilities) to make the final prediction.

The verbose parameter determines how much information is printed while the ensemble is fitted: when enabled, the time taken to fit each base estimator is printed as it completes. It does not affect prediction.

The default value for verbose is 0, which means no output. Any nonzero value, such as 1, turns the progress messages on.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score
# Generate synthetic dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=5, random_state=42)
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Create base classifiers
clf1 = LogisticRegression(random_state=42)
clf2 = RandomForestClassifier(random_state=42)
clf3 = SVC(random_state=42)
# Create VotingClassifier instances with different verbose levels
verbose_levels = [0, 1, 2]
for verbose in verbose_levels:
    voting_clf = VotingClassifier(
        estimators=[('lr', clf1), ('rf', clf2), ('svc', clf3)],
        voting='hard',
        verbose=verbose
    )
    print(f"\nFitting VotingClassifier with verbose={verbose}")
    voting_clf.fit(X_train, y_train)
    print(f"\nPredicting with VotingClassifier (verbose={verbose})")
    y_pred = voting_clf.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    print(f"Accuracy: {accuracy:.3f}")
Running the example gives an output like:
Fitting VotingClassifier with verbose=0
Predicting with VotingClassifier (verbose=0)
Accuracy: 0.910
Fitting VotingClassifier with verbose=1
[Voting] ....................... (1 of 3) Processing lr, total= 0.0s
[Voting] ....................... (2 of 3) Processing rf, total= 0.3s
[Voting] ...................... (3 of 3) Processing svc, total= 0.0s
Predicting with VotingClassifier (verbose=1)
Accuracy: 0.910
Fitting VotingClassifier with verbose=2
[Voting] ....................... (1 of 3) Processing lr, total= 0.0s
[Voting] ....................... (2 of 3) Processing rf, total= 0.3s
[Voting] ...................... (3 of 3) Processing svc, total= 0.0s
Predicting with VotingClassifier (verbose=2)
Accuracy: 0.910
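The example above uses hard voting. Soft voting instead averages predicted class probabilities, which requires every base estimator to expose predict_proba; for SVC this means passing probability=True. A minimal sketch, reusing the same synthetic dataset:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# Soft voting averages class probabilities, so SVC needs probability=True
soft_clf = VotingClassifier(
    estimators=[('lr', LogisticRegression(random_state=42)),
                ('rf', RandomForestClassifier(random_state=42)),
                ('svc', SVC(probability=True, random_state=42))],
    voting='soft',
    verbose=1,
)
soft_clf.fit(X_train, y_train)
acc = accuracy_score(y_test, soft_clf.predict(X_test))
print(f"Soft voting accuracy: {acc:.3f}")
```

Fitting probability=True for SVC is slower because it runs an internal cross-validation to calibrate the probabilities, but the verbose messages during fitting look the same as in the hard-voting run.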
The key steps in this example are:
- Generate a synthetic classification dataset
- Split the data into train and test sets
- Create base classifiers (LogisticRegression, RandomForestClassifier, SVC)
- Create VotingClassifier instances with different verbose levels
- Fit the models and make predictions
- Compare the output and accuracy for each verbose level
Some tips for using the verbose parameter:
- Use verbose=0 for silent operation in production environments
- Set verbose=1 to see per-estimator fitting progress during development
- Note that VotingClassifier documents verbose as a boolean, so verbose=2 prints the same messages as verbose=1 (as the output above shows)
Issues to consider:
- Verbose messages go to standard output, which can be noisy in automated pipelines
- In large-scale applications, consider using logging instead of verbose output
- The amount of output can become overwhelming with many estimators