
Configure SVR "C" Parameter

Support Vector Regression (SVR) is a powerful algorithm for regression tasks, particularly when dealing with non-linear relationships between features and the target variable. The C parameter in SVR controls the trade-off between model complexity and the amount of error allowed in training.

A smaller C value applies stronger regularization: the model tolerates more training error and produces a flatter, smoother regression function. Conversely, a larger C value penalizes training errors more heavily, so the model fits the training data closely and can become more complex, at the risk of overfitting noisy data.
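To make this trade-off concrete, here is a minimal sketch (separate from the main example below) that fits SVR with a very small and a very large C on a noisy one-dimensional sine wave; the data-generation settings are arbitrary illustrative choices. The small-C model typically underfits, with a visibly higher training error, while the large-C model tracks the training data much more closely.

import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# Noisy one-dimensional sine wave (illustrative data, not the tutorial's dataset)
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 6, size=200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

# Use alternating points for training and testing
X_train, y_train = X[::2], y[::2]
X_test, y_test = X[1::2], y[1::2]

# Compare a heavily regularized model (small C) with a loosely regularized one (large C)
for C in (0.01, 100):
    model = SVR(C=C).fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"C={C}: train MSE={train_mse:.3f}, test MSE={test_mse:.3f}")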

The default value of C in scikit-learn's SVR is 1.0. In practice, useful values typically range from about 0.1 to 100, depending on the problem and dataset characteristics. The example below trains SVR models with several C values on a synthetic dataset and compares their mean squared error on a held-out test set.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# Generate synthetic dataset
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train with different C values
C_values = [0.1, 1, 10, 100]
mse_scores = []

for C in C_values:
    svr = SVR(C=C)
    svr.fit(X_train, y_train)
    y_pred = svr.predict(X_test)
    mse = mean_squared_error(y_test, y_pred)
    mse_scores.append(mse)
    print(f"C={C}, MSE: {mse:.3f}")

Running the example gives an output like:

C=0.1, MSE: 16593.806
C=1, MSE: 12758.146
C=10, MSE: 2190.753
C=100, MSE: 521.694

The key steps in this example are:

  1. Generate a synthetic regression dataset with make_regression
  2. Split the data into train and test sets
  3. Train SVR models with different C values
  4. Evaluate the models using mean squared error (MSE) on the test set
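In this run the error keeps dropping as C grows because the make_regression data contains very little noise, so fitting the training data closely is rewarded; on noisier real-world data a large C can instead overfit. If matplotlib is available, the mse_scores collected in the loop above can be plotted against C on a logarithmic axis to see the trend at a glance (an illustrative continuation of the example, not part of the original code):

import matplotlib.pyplot as plt

# Continues from the loop above: plot test MSE against C on a log-scaled x-axis
plt.plot(C_values, mse_scores, marker="o")
plt.xscale("log")
plt.xlabel("C (log scale)")
plt.ylabel("Test MSE")
plt.title("Effect of C on SVR test error")
plt.show()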

Some tips and heuristics for setting the C parameter:

  1. Search C on a logarithmic scale (for example 0.01, 0.1, 1, 10, 100) with cross-validation rather than committing to a single guess, as sketched below
  2. Start from the default C=1.0, increasing it if the model underfits and decreasing it if it overfits
  3. Standardize the features before fitting, since the best C depends on the scale of the inputs
  4. Tune C together with the kernel parameters (such as gamma and epsilon), because their effects interact
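Rather than looping over a handful of values manually, C is usually tuned with a cross-validated grid search. Below is a minimal sketch using GridSearchCV on the same synthetic data; the parameter grid, cv=5, and the scoring choice are illustrative assumptions rather than recommendations from the original example.

from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Same synthetic data as the main example
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)

# Search C over a logarithmic grid with 5-fold cross-validation
param_grid = {"C": [0.01, 0.1, 1, 10, 100]}
search = GridSearchCV(SVR(), param_grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)

print("Best C:", search.best_params_["C"])
print("Best cross-validated MSE:", -search.best_score_)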

Issues to consider when tuning C:

  1. Very large C values can lead to long training times and overfitting, especially on noisy data
  2. Very small C values can underfit, producing predictions that barely vary with the inputs
  3. SVR is sensitive to feature scale, so the same C can behave very differently on raw versus standardized inputs; see the pipeline sketch below
  4. The best C is dataset dependent, so validate on held-out data rather than relying on training error
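Feature scaling deserves special attention: because SVR works with distances between samples, a value of C that works well on standardized inputs may behave very differently on raw inputs. Here is a minimal sketch of pairing StandardScaler with SVR in a pipeline; the choice of C=10 is an arbitrary illustrative value.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# Same synthetic data and split as the main example
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize features before the SVR step so that C behaves consistently
model = make_pipeline(StandardScaler(), SVR(C=10))
model.fit(X_train, y_train)
print("Test MSE with scaling:", mean_squared_error(y_test, model.predict(X_test)))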


