
Scikit-Learn LarsCV Regression Model

LarsCV (Least Angle Regression with Cross-Validation) fits a linear regression model with the LARS algorithm and uses cross-validation to choose the best point along the regularization path. It is particularly useful for high-dimensional data where the number of features exceeds the number of samples.

The key hyperparameters of LarsCV include max_iter (maximum number of iterations), cv (cross-validation generator or integer for the number of folds), and n_jobs (number of CPU cores used during cross-validation).

The algorithm is primarily used for regression problems, especially in scenarios involving feature selection and high-dimensional datasets.
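As a quick sketch, these hyperparameters can be passed directly to the constructor; the values below are illustrative choices for this walkthrough, not recommended defaults:

from sklearn.linear_model import LarsCV

# illustrative settings: cap the number of iterations, use 10-fold
# cross-validation, and run the folds on all available CPU cores
model = LarsCV(max_iter=500, cv=10, n_jobs=-1)

The full worked example below uses only cv=5 and leaves the other hyperparameters at their defaults.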

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LarsCV
from sklearn.metrics import mean_squared_error

# generate regression dataset
X, y = make_regression(n_samples=100, n_features=10, noise=0.1, random_state=1)

# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# create model
model = LarsCV(cv=5)

# fit model
model.fit(X_train, y_train)

# evaluate model
yhat = model.predict(X_test)
mse = mean_squared_error(y_test, yhat)
print('Mean Squared Error: %.3f' % mse)

# make a prediction
row = [[-0.80609165, 1.55344634, 1.59930016, -1.70157052, 0.07734007,
        0.75041164, -1.5118818, -0.32477353, -0.71915392, 1.31341878]]
yhat = model.predict(row)
print('Predicted: %.3f' % yhat[0])

Running the example gives an output like:

Mean Squared Error: 0.013
Predicted: 16.368

The steps are as follows:

  1. First, a synthetic regression dataset is generated using the make_regression() function. This creates a dataset with a specified number of samples (n_samples), features (n_features), noise level (noise), and a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().

  2. Next, a LarsCV model is instantiated with cross-validation set to 5 folds (cv=5). The model is then fit on the training data using the fit() method; the values selected during cross-validation can be inspected on the fitted model, as sketched after this list.

  3. The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the mean squared error metric.

  4. A single prediction can be made by passing a new data sample to the predict() method.
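
Continuing from the model fitted above, the fitted LarsCV estimator exposes attributes such as alpha_ (the regularization value chosen by cross-validation), intercept_, and coef_, which can be inspected after training. A minimal sketch:

# inspect the fitted model (assumes model.fit() from the example above has run)
print('Chosen alpha: %.6f' % model.alpha_)
print('Intercept: %.3f' % model.intercept_)
print('Non-zero coefficients: %d' % (model.coef_ != 0).sum())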

This example demonstrates how to set up and use LarsCV for regression, walking through training the model, evaluating its performance, and making predictions on new data. The code is kept deliberately simple, highlighting the key steps needed to apply LarsCV in scikit-learn.



See Also