The verbose parameter in RandomizedSearchCV controls how much output is printed during the search process. Random search is a hyperparameter optimization method that evaluates random combinations of parameters to find the best-performing model.

The verbose parameter accepts an integer value, with higher values producing more detailed output. The default value is 0, which means nothing is printed during the search. Common values for verbose are 1 (minimal output), 2 (more detailed output), or 10 (even more detailed output, useful for debugging).

As a heuristic, set verbose to 0 for no output, 1 for basic progress updates, and higher values for more detailed information about each fit.
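As a quick illustration before the full example, here is a minimal sketch of where the parameter goes; the estimator and distribution here are placeholder choices, not part of the example below:

from scipy.stats import uniform
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

# Minimal sketch: verbose is simply a constructor argument of the search object.
search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),          # placeholder estimator
    param_distributions={'C': uniform(0.01, 10)},  # sample C uniformly at random
    n_iter=5,
    cv=3,
    verbose=2,      # 0 = silent, 1 = summary line, 2 = one line per fit
    random_state=0,
)

Note that when n_jobs is set greater than 1, the per-fit messages come from parallel joblib workers, so they can arrive interleaved, and joblib may print its own [Parallel(...)] progress lines as well. The complete example below compares several verbose settings on the same search: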
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint
# Generate a synthetic binary classification dataset
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=5, random_state=42)
# Define a parameter distribution for RandomForestClassifier hyperparameters
param_dist = {'n_estimators': randint(10, 100),
              'max_depth': [None, 5, 10],
              'min_samples_split': randint(2, 10)}
# Create a base RandomForestClassifier model
rf = RandomForestClassifier(random_state=42)
# List of verbose values to test
verbose_values = [0, 1, 2]
for verbose in verbose_values:
    print(f"Running RandomizedSearchCV with verbose={verbose}")
    # Run RandomizedSearchCV with the current verbose value
    search = RandomizedSearchCV(rf, param_dist, n_iter=10, cv=5, verbose=verbose, random_state=42)
    search.fit(X, y)
    print(f"Best score for verbose={verbose}: {search.best_score_:.3f}")
    print()
Running the example gives output like the listing below. With verbose=0 nothing is printed during the search, verbose=1 prints a single summary line, and verbose=2 adds a [CV] END line for every individual fit:
Running RandomizedSearchCV with verbose=0
Best score for verbose=0: 0.939
Running RandomizedSearchCV with verbose=1
Fitting 5 folds for each of 10 candidates, totalling 50 fits
Best score for verbose=1: 0.939
Running RandomizedSearchCV with verbose=2
Fitting 5 folds for each of 10 candidates, totalling 50 fits
[CV] END .max_depth=10, min_samples_split=5, n_estimators=24; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=5, n_estimators=24; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=5, n_estimators=24; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=5, n_estimators=24; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=5, n_estimators=24; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=9, n_estimators=70; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=9, n_estimators=70; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=9, n_estimators=70; total time= 0.2s
[CV] END .max_depth=10, min_samples_split=9, n_estimators=70; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=9, n_estimators=70; total time= 0.2s
[CV] END max_depth=None, min_samples_split=8, n_estimators=92; total time= 0.2s
[CV] END max_depth=None, min_samples_split=8, n_estimators=92; total time= 0.2s
[CV] END max_depth=None, min_samples_split=8, n_estimators=92; total time= 0.2s
[CV] END max_depth=None, min_samples_split=8, n_estimators=92; total time= 0.2s
[CV] END max_depth=None, min_samples_split=8, n_estimators=92; total time= 0.2s
[CV] END .max_depth=10, min_samples_split=4, n_estimators=84; total time= 0.2s
[CV] END .max_depth=10, min_samples_split=4, n_estimators=84; total time= 0.2s
[CV] END .max_depth=10, min_samples_split=4, n_estimators=84; total time= 0.2s
[CV] END .max_depth=10, min_samples_split=4, n_estimators=84; total time= 0.2s
[CV] END .max_depth=10, min_samples_split=4, n_estimators=84; total time= 0.2s
[CV] END max_depth=None, min_samples_split=5, n_estimators=33; total time= 0.1s
[CV] END max_depth=None, min_samples_split=5, n_estimators=33; total time= 0.1s
[CV] END max_depth=None, min_samples_split=5, n_estimators=33; total time= 0.1s
[CV] END max_depth=None, min_samples_split=5, n_estimators=33; total time= 0.1s
[CV] END max_depth=None, min_samples_split=5, n_estimators=33; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=7, n_estimators=62; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=7, n_estimators=62; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=7, n_estimators=62; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=7, n_estimators=62; total time= 0.1s
[CV] END .max_depth=10, min_samples_split=7, n_estimators=62; total time= 0.1s
[CV] END ..max_depth=5, min_samples_split=9, n_estimators=39; total time= 0.1s
[CV] END ..max_depth=5, min_samples_split=9, n_estimators=39; total time= 0.1s
[CV] END ..max_depth=5, min_samples_split=9, n_estimators=39; total time= 0.1s
[CV] END ..max_depth=5, min_samples_split=9, n_estimators=39; total time= 0.1s
[CV] END ..max_depth=5, min_samples_split=9, n_estimators=39; total time= 0.1s
[CV] END ..max_depth=5, min_samples_split=3, n_estimators=73; total time= 0.1s
[CV] END ..max_depth=5, min_samples_split=3, n_estimators=73; total time= 0.1s
[CV] END ..max_depth=5, min_samples_split=3, n_estimators=73; total time= 0.1s
[CV] END ..max_depth=5, min_samples_split=3, n_estimators=73; total time= 0.2s
[CV] END ..max_depth=5, min_samples_split=3, n_estimators=73; total time= 0.1s
[CV] END max_depth=None, min_samples_split=2, n_estimators=85; total time= 0.2s
[CV] END max_depth=None, min_samples_split=2, n_estimators=85; total time= 0.2s
[CV] END max_depth=None, min_samples_split=2, n_estimators=85; total time= 0.2s
[CV] END max_depth=None, min_samples_split=2, n_estimators=85; total time= 0.2s
[CV] END max_depth=None, min_samples_split=2, n_estimators=85; total time= 0.2s
[CV] END ..max_depth=5, min_samples_split=7, n_estimators=98; total time= 0.3s
[CV] END ..max_depth=5, min_samples_split=7, n_estimators=98; total time= 0.5s
[CV] END ..max_depth=5, min_samples_split=7, n_estimators=98; total time= 0.2s
[CV] END ..max_depth=5, min_samples_split=7, n_estimators=98; total time= 0.2s
[CV] END ..max_depth=5, min_samples_split=7, n_estimators=98; total time= 0.2s
Best score for verbose=2: 0.939
The steps are as follows:
- Generate a synthetic binary classification dataset using make_classification() from scikit-learn.
- Define a parameter distribution dictionary param_dist for the RandomForestClassifier hyperparameters, using randint for random integer sampling.
- Create a base RandomForestClassifier model rf.
- Iterate over the verbose values (0, 1, 2). For each verbose value:
  - Print a message indicating the current verbose value.
  - Run RandomizedSearchCV with 10 iterations and 5-fold cross-validation, using the current verbose value.
  - Print the best score for the current verbose value.
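A closing tip, as a sketch rather than part of the original example: the Fitting ... and [CV] messages are written via ordinary print() calls to standard output, so a long-running search can be logged to a file with contextlib.redirect_stdout. The snippet below assumes the rf, param_dist, X, and y objects defined earlier; joblib's own [Parallel(...)] progress lines (which appear when n_jobs is set) may go to stderr instead and would not be captured this way.

import contextlib

# Sketch: capture the verbose search log in a file instead of the console.
# Assumes rf, param_dist, X, y from the example above.
search = RandomizedSearchCV(rf, param_dist, n_iter=10, cv=5, verbose=2, random_state=42)
with open('search_log.txt', 'w') as log, contextlib.redirect_stdout(log):
    search.fit(X, y)  # '[CV] END ...' lines are written to search_log.txt
print(f"Best score: {search.best_score_:.3f}")  # printed to the console as usual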