
Configure ElasticNet "tol" Parameter

The tol parameter in scikit-learn's ElasticNet sets the tolerance for the optimization: once the coordinate descent updates become smaller than tol, the solver checks the dual gap for optimality and stops when the gap falls below tol.

ElasticNet is a linear regression model that combines both L1 and L2 regularization techniques. It is useful for datasets with correlated features.
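
For reference, the strength and mix of the two penalties are controlled by the alpha and l1_ratio parameters rather than by tol; a minimal sketch with illustrative, untuned values:

from sklearn.linear_model import ElasticNet

# alpha scales the overall penalty strength; l1_ratio mixes L1 and L2
# (l1_ratio=1.0 is a pure L1 penalty, l1_ratio=0.0 a pure L2 penalty).
# These values are illustrative only, not tuned for any particular dataset.
model = ElasticNet(alpha=0.5, l1_ratio=0.3)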

Adjusting the tol parameter can impact the convergence speed and the precision of the solution. Smaller values of tol lead to higher precision but may require more iterations.

The default value for tol is 1e-4. Common values range from 1e-5 to 1e-2, depending on the precision required and the computational budget available.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error

# Generate synthetic regression dataset
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train with different tol values
tol_values = [1e-2, 1e-4, 1e-6]
errors = []

for tol in tol_values:
    enet = ElasticNet(tol=tol, random_state=42)
    enet.fit(X_train, y_train)
    y_pred = enet.predict(X_test)
    mse = mean_squared_error(y_test, y_pred)
    errors.append(mse)
    print(f"tol={tol}, Mean Squared Error: {mse:.3f}")

Running the example gives an output like:

tol=0.01, Mean Squared Error: 4638.354
tol=0.0001, Mean Squared Error: 4638.839
tol=1e-06, Mean Squared Error: 4638.839

The key steps in this example are:

  1. Generate a synthetic regression dataset with noise to simulate real-world conditions
  2. Split the data into train and test sets
  3. Train ElasticNet models with different tol values
  4. Evaluate the Mean Squared Error (MSE) of each model on the test set
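
The MSE values barely change as tol gets tighter, but the cost of a tighter tolerance shows up in the number of coordinate descent iterations. A minimal sketch of inspecting this via the fitted estimator's n_iter_ attribute, reusing X_train and y_train from the example above:

# Check how many iterations each tolerance requires to converge
for tol in [1e-2, 1e-4, 1e-6]:
    enet = ElasticNet(tol=tol, random_state=42)
    enet.fit(X_train, y_train)
    print(f"tol={tol}: converged after {enet.n_iter_} iterations")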

Some tips and heuristics for setting tol:

  - Start with the default of 1e-4; it is adequate for most problems.
  - Loosen tol (for example 1e-2 or 1e-3) when fitting many models, such as during a large hyperparameter search, and fitting speed matters more than the last digits of precision.
  - Tighten tol (for example 1e-5 or 1e-6) only when a more precise solution is required, and expect the fit to take more iterations.

Issues to consider:

  - A very small tol may prevent the solver from converging within max_iter iterations, in which case scikit-learn raises a ConvergenceWarning; increase max_iter or relax tol.
  - As the results above show, tightening tol below the default often changes the test error very little, so the extra iterations may not be worth the cost.
  - Standardizing the features generally helps the coordinate descent solver reach the tolerance in fewer iterations.
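
To make the convergence issue concrete, here is a small sketch that deliberately combines an extreme tol with a very low max_iter (both values chosen only to provoke the warning) and checks for the resulting ConvergenceWarning:

import warnings

from sklearn.datasets import make_regression
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)

# Overly tight tolerance with too few iterations to reach it
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", ConvergenceWarning)
    ElasticNet(tol=1e-10, max_iter=5, random_state=42).fit(X, y)

if any(issubclass(w.category, ConvergenceWarning) for w in caught):
    print("Solver did not converge: increase max_iter or relax tol")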
