
Configure RandomForestClassifier "warm_start" Parameter

The warm_start parameter in scikit-learn’s RandomForestClassifier controls whether successive calls to .fit() reuse the trees learned so far and add new ones, or rebuild the forest from scratch.

When warm_start is set to True, calling .fit() again keeps the previously learned trees and fits additional trees (up to the current n_estimators) on the new data, rather than discarding what was learned before. This can be useful when data arrives in batches or when the ensemble needs to be grown without a full retrain.

The default value for warm_start is False, meaning the model is trained from scratch each time .fit() is called.
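
As a quick illustration of that default behavior, here is a minimal sketch (separate from the worked example below) showing that refitting with warm_start=False replaces the previously built trees:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# With the default warm_start=False, each call to .fit() builds a brand-new forest
rf = RandomForestClassifier(n_estimators=10, random_state=0)
rf.fit(X, y)
first_tree = rf.estimators_[0]

rf.fit(X, y)  # trained from scratch again
print(first_tree is rf.estimators_[0])  # False: the old trees were discarded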

Setting warm_start to True on a model that is already fit, and increasing n_estimators before the next call to .fit(), enables this incremental learning.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
import numpy as np

# Generate synthetic dataset
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Split into initial train set and additional batch
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.2, random_state=42)

# Train with warm_start=False
rf = RandomForestClassifier(n_estimators=100, warm_start=False, random_state=42)
rf.fit(X_train, y_train)
y_pred_false = rf.predict(X_new)
accuracy_false = accuracy_score(y_new, y_pred_false)
print(f"Accuracy with warm_start=False: {accuracy_false:.3f}")

# Continue training with warm_start=True on the combined data
X_combined = np.concatenate((X_train, X_new))
y_combined = np.concatenate((y_train, y_new))

# Keep the existing 100 trees and fit 20 additional trees on the combined data
rf.set_params(n_estimators=120, warm_start=True)
rf.fit(X_combined, y_combined)
y_pred_true = rf.predict(X_new)
accuracy_true = accuracy_score(y_new, y_pred_true)
print(f"Accuracy with warm_start=True: {accuracy_true:.3f}")

The output from running this code would look like:

Accuracy with warm_start=False: 0.880
Accuracy with warm_start=True: 0.895

The key steps in this example are:

  1. Generate a synthetic classification dataset
  2. Split the data into an initial train set and an additional batch
  3. Train a RandomForestClassifier with warm_start=False on the initial train set
  4. Evaluate the accuracy of the model on the additional batch
  5. Continue training the already-fitted RandomForestClassifier with warm_start=True and a larger n_estimators on the combined initial and additional data
  6. Evaluate the accuracy of the warm-started model on the additional batch
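
When data arrives in more than one batch, the same pattern can be repeated in a loop. The sketch below is illustrative rather than part of the example above: it assumes the batches come from np.array_split, and the trees_per_batch value is an arbitrary choice for demonstration.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import numpy as np

# Illustrative setup: split one synthetic dataset into three batches
X, y = make_classification(n_samples=1500, n_features=10, random_state=42)
X_batches = np.array_split(X, 3)
y_batches = np.array_split(y, 3)

# Grow the same forest by a fixed number of trees per batch
trees_per_batch = 50
rf = RandomForestClassifier(warm_start=True, random_state=42)
for i, (X_batch, y_batch) in enumerate(zip(X_batches, y_batches), start=1):
    rf.set_params(n_estimators=i * trees_per_batch)
    rf.fit(X_batch, y_batch)
    print(f"Forest now has {len(rf.estimators_)} trees")

Each call to .fit() trains only the newly added trees on that batch; the trees from earlier batches are left unchanged.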

Some tips and heuristics for using warm_start:

  - Increase n_estimators (for example via set_params()) before each additional call to .fit(); if n_estimators is unchanged, no new trees are fit and scikit-learn emits a warning.
  - Set warm_start back to False, or create a fresh estimator, when you want a clean retrain from scratch.
  - Treat warm_start as a way to grow an ensemble cheaply, not as a general-purpose online learning mechanism.

Issues to consider:

  - Existing trees are never updated; warm_start only appends new trees, as the sketch after this list demonstrates.
  - Trees fit on different batches see different data, so the ensemble can be biased if the batches are not representative of the overall distribution.
  - Memory use and prediction time grow with each additional fit, because the forest only ever gets larger.
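
A quick way to see that warm_start only appends trees is to inspect the estimators_ attribute between fits. This is a standalone sketch, separate from the example above:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

rf = RandomForestClassifier(n_estimators=50, warm_start=True, random_state=0)
rf.fit(X, y)
print(len(rf.estimators_))  # 50 trees

# Calling .fit() again without raising n_estimators fits no new trees
# (scikit-learn emits a UserWarning about this)
rf.fit(X, y)
print(len(rf.estimators_))  # still 50

# Raising n_estimators appends 25 new trees; the original 50 are untouched
rf.set_params(n_estimators=75)
rf.fit(X, y)
print(len(rf.estimators_))  # 75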



See Also