
Configure BaggingClassifier "max_samples" Parameter

The max_samples parameter in scikit-learn’s BaggingClassifier controls the number of samples drawn from the training set to train each base estimator.
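
max_samples can be given either as a float, interpreted as a fraction of the training set, or as an int, interpreted as an absolute number of samples. A minimal sketch of both forms (the specific values are only illustrative):

from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Float: each base estimator is trained on 50% of the training samples
bagging_frac = BaggingClassifier(estimator=DecisionTreeClassifier(),
                                 max_samples=0.5, random_state=42)

# Int: each base estimator is trained on exactly 200 samples
bagging_abs = BaggingClassifier(estimator=DecisionTreeClassifier(),
                                max_samples=200, random_state=42)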

Bagging (Bootstrap Aggregating) is an ensemble method that combines predictions from multiple base estimators trained on different subsets of the data. The max_samples parameter influences the size of these subsets.
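
To make the idea concrete, here is a rough sketch of what bagging does by hand: train several trees on different bootstrap samples and combine their predictions by majority vote. BaggingClassifier handles all of this internally; the sketch only illustrates the mechanism.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.RandomState(0)

# Train 10 trees, each on its own bootstrap sample of the data
trees = []
for _ in range(10):
    idx = rng.choice(len(X), size=len(X), replace=True)
    trees.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

# Combine the individual predictions by majority vote
votes = np.array([tree.predict(X) for tree in trees])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)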

Setting max_samples affects the diversity of the base estimators. Smaller values increase diversity but give each estimator less data, which can lead to underfitting; larger values reduce diversity, which can leave the ensemble more prone to overfitting.

The default value for max_samples is 1.0, meaning each base estimator is trained on a sample the same size as the training set (drawn with replacement when bootstrap=True, the default). Common values range from 0.5 to 1.0, depending on the dataset size and the desired trade-off between bias and variance.
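
One way to see what max_samples changes is to inspect the fitted ensemble's estimators_samples_ attribute, which lists the sample indices drawn for each base estimator. A short sketch (the dataset is synthetic and the printed sizes simply reflect the chosen fractions):

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=42)

for fraction in (1.0, 0.5):
    bagging = BaggingClassifier(estimator=DecisionTreeClassifier(),
                                max_samples=fraction, n_estimators=10,
                                random_state=42).fit(X, y)
    # estimators_samples_ holds one array of (possibly repeated) indices per estimator
    sizes = [len(indices) for indices in bagging.estimators_samples_]
    print(f"max_samples={fraction}: each estimator drew {sizes[0]} samples")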

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Generate synthetic dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_redundant=5, n_classes=2, random_state=42)

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train with different max_samples values
max_samples_values = [0.1, 0.5, 0.8, 1.0]
accuracies = []

for samples in max_samples_values:
    bagging = BaggingClassifier(estimator=DecisionTreeClassifier(),
                                max_samples=samples,
                                n_estimators=100,
                                random_state=42)
    bagging.fit(X_train, y_train)
    y_pred = bagging.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    accuracies.append(accuracy)
    print(f"max_samples={samples}, Accuracy: {accuracy:.3f}")

Running the example gives an output like:

max_samples=0.1, Accuracy: 0.865
max_samples=0.5, Accuracy: 0.890
max_samples=0.8, Accuracy: 0.900
max_samples=1.0, Accuracy: 0.895

The key steps in this example are:

  1. Generate a synthetic binary classification dataset with informative, redundant, and noise features
  2. Split the data into train and test sets
  3. Create BaggingClassifier instances with different max_samples values
  4. Train models and evaluate accuracy on the test set
  5. Compare performance across different max_samples values

Some tips for setting max_samples:

  1. Start with the default of 1.0 and reduce it only if the ensemble overfits or training is too slow.
  2. Values between 0.5 and 1.0 are a reasonable range to search for most datasets.
  3. Remember that smaller values increase the diversity of the base estimators but give each one less data to learn from.
  4. Tune max_samples together with n_estimators rather than in isolation, for example with a small grid search as sketched below.

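A minimal sketch of such a search, using GridSearchCV over an illustrative grid (the parameter values to try are assumptions, not recommendations):

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

param_grid = {"max_samples": [0.5, 0.7, 0.9, 1.0],
              "n_estimators": [50, 100]}

grid = GridSearchCV(BaggingClassifier(estimator=DecisionTreeClassifier(),
                                      random_state=42),
                    param_grid, cv=5, scoring="accuracy")
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)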
Issues to consider:

  1. With the default bootstrap=True, samples are drawn with replacement, so even max_samples=1.0 leaves roughly a third of the training set out of each bootstrap sample.
  2. Very small values (such as 0.1) can leave each base estimator with too little data, particularly on small or imbalanced datasets.
  3. The best value interacts with n_estimators and with the complexity of the base estimator, so compare combinations, for example with the out-of-bag check sketched below.

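Because bootstrapping leaves some samples out of every subset, the out-of-bag score gives a quick estimate of generalization without a separate validation split. A short sketch using the oob_score option:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

for fraction in (0.5, 1.0):
    bagging = BaggingClassifier(estimator=DecisionTreeClassifier(),
                                max_samples=fraction, n_estimators=100,
                                oob_score=True, random_state=42).fit(X, y)
    # oob_score_ is the accuracy on samples each estimator did not see during training
    print(f"max_samples={fraction}, OOB score: {bagging.oob_score_:.3f}")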

