
Configure LogisticRegression "verbose" Parameter

The verbose parameter in scikit-learn’s LogisticRegression controls the verbosity of the solver’s output.

LogisticRegression is a linear model for binary classification that estimates the probability of the positive class from one or more predictor variables.

The verbose parameter controls how much information about the training process is displayed. For solvers that support it, such as liblinear and lbfgs (the default), setting it to 1 or higher prints progress messages to the console.

The default value for verbose is 0, meaning no messages are output.

In practice, values of 1 or 2 are commonly used to provide insights into the model training process without overwhelming the console with too much information.
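
As a minimal sketch of the simplest usage (the dataset here is synthetic and purely illustrative), enabling solver messages takes a single argument:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, random_state=42)

# verbose=1 asks the default lbfgs solver to print progress messages during fit()
LogisticRegression(verbose=1).fit(X, y)

The full example below compares verbose=0, 1, and 2 on the same dataset and checks that the setting does not change the fitted model: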

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Generate synthetic dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_redundant=5, random_state=42)

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train with different verbose values
verbose_values = [0, 1, 2]
accuracies = []

for v in verbose_values:
    print(f"Training with verbose={v}")
    lr = LogisticRegression(verbose=v, random_state=42, max_iter=100)
    lr.fit(X_train, y_train)
    y_pred = lr.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    accuracies.append(accuracy)
    print(f"verbose={v}, Accuracy: {accuracy:.3f}\n")

Running the example produces output like the following:

Training with verbose=0
verbose=0, Accuracy: 0.795

Training with verbose=1
RUNNING THE L-BFGS-B CODE

           * * *

Machine precision = 2.220D-16
 N =           21     M =           10
 This problem is unconstrained.

At X0         0 variables are exactly at the bounds

At iterate    0    f=  6.93147D-01    |proj g|=  6.67579D-01

           * * *

Tit   = total number of iterations
Tnf   = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip  = number of BFGS updates skipped
Nact  = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F     = final function value

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
   21     21     23      1     0     0   8.860D-05   3.530D-01
  F =  0.35297858554898159

CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
verbose=1, Accuracy: 0.795

Training with verbose=2
RUNNING THE L-BFGS-B CODE

           * * *

Machine precision = 2.220D-16
 N =           21     M =           10
 This problem is unconstrained.

At X0         0 variables are exactly at the bounds

At iterate    0    f=  6.93147D-01    |proj g|=  6.67579D-01

At iterate    1    f=  5.38472D-01    |proj g|=  2.95589D-01

At iterate    2    f=  4.52195D-01    |proj g|=  1.78593D-01

At iterate    3    f=  3.88540D-01    |proj g|=  7.68124D-02

At iterate    4    f=  3.74904D-01    |proj g|=  6.49299D-02

At iterate    5    f=  3.66697D-01    |proj g|=  7.19245D-02

At iterate    6    f=  3.62072D-01    |proj g|=  3.03246D-02

At iterate    7    f=  3.60724D-01    |proj g|=  2.67718D-02

At iterate    8    f=  3.58316D-01    |proj g|=  3.04631D-02

At iterate    9    f=  3.55699D-01    |proj g|=  2.69311D-02

At iterate   10    f=  3.53809D-01    |proj g|=  3.48229D-02

At iterate   11    f=  3.53589D-01    |proj g|=  2.73864D-02

At iterate   12    f=  3.53084D-01    |proj g|=  3.75213D-03

At iterate   13    f=  3.53061D-01    |proj g|=  3.12438D-03

At iterate   14    f=  3.53032D-01    |proj g|=  2.58135D-03

At iterate   15    f=  3.53017D-01    |proj g|=  2.91533D-03

At iterate   16    f=  3.52991D-01    |proj g|=  3.61941D-03

At iterate   17    f=  3.52986D-01    |proj g|=  3.12513D-03

At iterate   18    f=  3.52979D-01    |proj g|=  5.55471D-04

At iterate   19    f=  3.52979D-01    |proj g|=  2.06446D-04

At iterate   20    f=  3.52979D-01    |proj g|=  1.41434D-04

At iterate   21    f=  3.52979D-01    |proj g|=  8.85970D-05

           * * *

Tit   = total number of iterations
Tnf   = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip  = number of BFGS updates skipped
Nact  = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F     = final function value

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
   21     21     23      1     0     0   8.860D-05   3.530D-01
  F =  0.35297858554898159

CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
verbose=2, Accuracy: 0.795

The key steps in this example are:

1. Generate a synthetic binary classification dataset with make_classification.
2. Split the data into training and test sets with train_test_split.
3. Fit LogisticRegression models with verbose set to 0, 1, and 2.
4. Compare the solver output printed for each setting and confirm the test accuracy is unchanged.

Some tips and heuristics for setting verbose:

- Keep verbose=0 (the default) in production code and automated pipelines, where solver output only clutters logs.
- Use verbose=1 during development to confirm that the solver converges.
- Use verbose=2 when debugging convergence problems, to see the solver's per-iteration progress, as in the output above.
- The format of the messages depends on the solver; see the sketch below.
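
As a quick sketch (the dataset is again illustrative), the liblinear solver also honors verbose but prints a much shorter trace than lbfgs:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# liblinear honors verbose too; its output format differs from lbfgs
lr = LogisticRegression(solver="liblinear", verbose=1, random_state=42)
lr.fit(X, y)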

Issues to consider:

- verbose only controls logging; it does not change the fitted model, which is why the accuracy is identical (0.795) across all three runs above.
- Only some solvers honor verbose (the scikit-learn docs call out liblinear and lbfgs); with other solvers the setting may have no visible effect.
- Verbose output can be voluminous on large datasets or long runs, and the lbfgs messages are printed by the underlying optimizer, so they may bypass Python-level output redirection.
- Pairing verbose with a small max_iter budget is a quick way to investigate non-convergence; see the sketch below.
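
One practical pattern, sketched here with an artificially small iteration budget: combine a verbose trace with a low max_iter to see where the solver stops when it fails to converge.

import warnings

from sklearn.datasets import make_classification
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", ConvergenceWarning)
    # Too few iterations on purpose; the verbose trace shows the last iterate reached
    LogisticRegression(verbose=1, max_iter=5, random_state=42).fit(X, y)

if any(issubclass(w.category, ConvergenceWarning) for w in caught):
    print("Solver stopped early; inspect the verbose trace above.")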


