The `verbose` parameter in scikit-learn's `MLPClassifier` controls the logging output produced during model training. `MLPClassifier` (Multi-layer Perceptron classifier) is a neural network model used for classification tasks; it learns a non-linear function approximator by training on a dataset.
The `verbose` parameter determines whether progress messages are printed during the training process, which is useful for monitoring training progress and debugging. The default value is `verbose=False`, which means no output is produced during training. Note that, unlike some scikit-learn estimators, `MLPClassifier` treats `verbose` as a boolean: any truthy value (`True` or a positive integer) prints the loss at each iteration, while `False` (or `0`) prints nothing. Higher integer values do not produce more detailed output.
```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Generate a synthetic multi-class dataset
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=10, random_state=42)

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# Train with different verbose values
verbose_values = [False, True, 10]
for verbose in verbose_values:
    print(f"\nTraining with verbose={verbose}")
    mlp = MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=50,
                        random_state=42, verbose=verbose)
    mlp.fit(X_train, y_train)
    y_pred = mlp.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    print(f"Accuracy: {accuracy:.3f}")
```
Running the example gives output like the following (because `max_iter=50` stops the optimizer before it converges, scikit-learn may also emit a `ConvergenceWarning`):
```
Training with verbose=False
Accuracy: 0.865

Training with verbose=True
Iteration 1, loss = 1.45031949
Iteration 2, loss = 1.20889502
Iteration 3, loss = 1.05425711
Iteration 4, loss = 0.92952438
Iteration 5, loss = 0.83394302
...
Iteration 49, loss = 0.16540149
Iteration 50, loss = 0.16031085
Accuracy: 0.865

Training with verbose=10
Iteration 1, loss = 1.45031949
Iteration 2, loss = 1.20889502
Iteration 3, loss = 1.05425711
Iteration 4, loss = 0.92952438
Iteration 5, loss = 0.83394302
...
Iteration 49, loss = 0.16540149
Iteration 50, loss = 0.16031085
Accuracy: 0.865
```
Notice that `verbose=True` and `verbose=10` produce identical logs, confirming that any truthy value enables the same per-iteration output.

The key steps in this example are:

- Generate a synthetic multi-class classification dataset
- Split the data into train and test sets
- Train `MLPClassifier` models with different `verbose` values
- Evaluate the accuracy of each model on the test set
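If you want the loss history as data rather than printed text, the fitted model also records it. Below is a minimal sketch, assuming the default `adam` solver (scikit-learn populates the `loss_curve_` attribute for the `sgd` and `adam` solvers); it retrieves the same per-iteration losses without any verbose printing:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=10, random_state=42)

# Train silently; loss_curve_ is still recorded for sgd/adam solvers
mlp = MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=50,
                    random_state=42, verbose=False)
mlp.fit(X, y)

# loss_curve_ holds one loss value per iteration, newest last
print(f"Iterations run: {mlp.n_iter_}")
print(f"Final loss: {mlp.loss_curve_[-1]:.6f}")
```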
Some tips for setting the `verbose` parameter:

- Use `verbose=False` (or `0`) for no output during training
- Set `verbose=True` for per-iteration loss output
- Positive integers (e.g., `10` or `50`) are simply truthy and behave exactly like `True`; they do not provide more detailed output
Issues to consider:

- Printing a line per iteration adds some I/O overhead, which can add up when training many models or when iterations are very fast
- Dozens of loss values per model can be overwhelming to scan when you train many models
- In production environments, it's usually best to keep verbose off
- For debugging or monitoring long-running tasks, verbose output can be invaluable; the sketch below shows one way to capture it in a log file
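`MLPClassifier` writes its progress messages to standard output rather than through the `logging` module, so one way to keep them without cluttering the console is to redirect stdout around the `fit` call. This is a minimal sketch using Python's standard `contextlib.redirect_stdout`; the `mlp_training.log` filename is just an example:

```python
import contextlib
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=10, random_state=42)

mlp = MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=50,
                    random_state=42, verbose=True)

# Capture the per-iteration loss messages in a file instead of the console
with open("mlp_training.log", "w") as log_file:
    with contextlib.redirect_stdout(log_file):
        mlp.fit(X, y)

print(f"Training finished after {mlp.n_iter_} iterations; see mlp_training.log")
```

Because the redirect only covers the `fit` call, output from the rest of the program is unaffected.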