The max_samples parameter in scikit-learn's BaggingClassifier controls the number of samples drawn from the training set to train each base estimator.

Bagging (Bootstrap Aggregating) is an ensemble method that combines the predictions of multiple base estimators, each trained on a different subset of the data. The max_samples parameter sets the size of these subsets.
Setting max_samples affects the diversity of the base estimators: smaller values increase diversity but may cause the individual estimators to underfit, while larger values reduce diversity, leaving the estimators more correlated and the ensemble more prone to overfitting.
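To see the effect on subset size directly, you can inspect a fitted ensemble's estimators_samples_ attribute, which records the training indices drawn for each estimator. A minimal sketch (assuming scikit-learn 1.2 or later, where the base model argument is named estimator):

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Small synthetic problem just to demonstrate the sampling
X, y = make_classification(n_samples=1000, random_state=42)

# With max_samples=0.5, each base estimator is fit on 500 of the 1000 rows
bagging = BaggingClassifier(estimator=DecisionTreeClassifier(),
                            max_samples=0.5, n_estimators=10,
                            random_state=42).fit(X, y)

# estimators_samples_ holds the index array drawn for each estimator
print(len(bagging.estimators_samples_[0]))  # 500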
The default value for max_samples is 1.0, which means each base estimator draws as many samples as the training set (with replacement by default, so each draw contains roughly 63% of the unique rows). Common values range from 0.5 to 1.0, depending on the dataset size and the desired trade-off between bias and variance.
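Note that max_samples accepts either a float, interpreted as a fraction of the training set, or an int, interpreted as an absolute number of samples:

from sklearn.ensemble import BaggingClassifier

# Float: each base estimator draws 80% of the training rows
BaggingClassifier(max_samples=0.8)

# Int: each base estimator draws exactly 256 rows
BaggingClassifier(max_samples=256)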
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Generate synthetic dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_redundant=5, n_classes=2, random_state=42)

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# Train and evaluate with different max_samples values
max_samples_values = [0.1, 0.5, 0.8, 1.0]
accuracies = []
for samples in max_samples_values:
    bagging = BaggingClassifier(estimator=DecisionTreeClassifier(),
                                max_samples=samples,
                                n_estimators=100,
                                random_state=42)
    bagging.fit(X_train, y_train)
    y_pred = bagging.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    accuracies.append(accuracy)
    print(f"max_samples={samples}, Accuracy: {accuracy:.3f}")
Running the example gives an output like:
max_samples=0.1, Accuracy: 0.865
max_samples=0.5, Accuracy: 0.890
max_samples=0.8, Accuracy: 0.900
max_samples=1.0, Accuracy: 0.895
The key steps in this example are:
- Generate a synthetic binary classification dataset with informative and noise features
- Split the data into train and test sets
- Create BaggingClassifier instances with different max_samples values
- Train the models and evaluate accuracy on the test set
- Compare performance across the different max_samples values
Some tips for setting max_samples:
- Start with the default value of 1.0 and decrease it to find a balance between diversity and performance (see the grid-search sketch after this list)
- For large datasets, smaller values (e.g., 0.5-0.8) often work well
- For smaller datasets, using higher values or even 1.0 may be necessary to ensure sufficient training data for each base estimator
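One way to apply these tips systematically is to cross-validate a few candidate values. The sketch below is illustrative rather than definitive; it reuses the synthetic dataset from the example above and assumes scikit-learn 1.2 or later:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Same synthetic data as in the example above
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_redundant=5, n_classes=2, random_state=42)

# Cross-validate a handful of candidate fractions
param_grid = {"max_samples": [0.3, 0.5, 0.8, 1.0]}
grid = GridSearchCV(
    BaggingClassifier(estimator=DecisionTreeClassifier(),
                      n_estimators=100, random_state=42),
    param_grid, cv=5, scoring="accuracy")
grid.fit(X, y)

print(grid.best_params_)  # the fraction with the best cross-validated accuracy
print(f"Best CV accuracy: {grid.best_score_:.3f}")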
Issues to consider:
- The optimal max_samples value depends on the dataset size and complexity
- Smaller values increase the variance of the individual estimators, and can lead to underfitting if set too small
- Larger values reduce diversity among the base estimators, potentially limiting the ensemble's ability to generalize; the out-of-bag sketch below shows one way to measure this trade-off
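One way to probe this trade-off without holding out a test set is bagging's out-of-bag (OOB) estimate, which scores each sample using only the estimators that never saw it during training. A minimal sketch, again reusing the synthetic dataset from above:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_redundant=5, n_classes=2, random_state=42)

for samples in [0.5, 0.8, 1.0]:
    bagging = BaggingClassifier(estimator=DecisionTreeClassifier(),
                                max_samples=samples, n_estimators=100,
                                oob_score=True,  # requires bootstrap=True (the default)
                                random_state=42)
    bagging.fit(X, y)
    # oob_score_ is the accuracy on samples each estimator never saw
    print(f"max_samples={samples}, OOB score: {bagging.oob_score_:.3f}")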