Decision Tree Regressor is a non-linear regression algorithm that models data using a tree-like structure of decision rules.
The key hyperparameters of DecisionTreeRegressor include max_depth (the maximum depth of the tree), min_samples_split (the minimum number of samples required to split an internal node), and min_samples_leaf (the minimum number of samples required to be at a leaf node).
The algorithm is appropriate for regression problems, particularly when the relationship between the features and the target is non-linear.
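These hyperparameters can be set when the model is created. The values below are a minimal sketch for illustration only, not tuned recommendations:
from sklearn.tree import DecisionTreeRegressor
# constrain tree growth (illustrative values, not recommendations)
model = DecisionTreeRegressor(max_depth=5, min_samples_split=10, min_samples_leaf=4, random_state=1)
The complete example below uses the default hyperparameters.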
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error
# generate regression dataset
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
# create model
model = DecisionTreeRegressor()
# fit model
model.fit(X_train, y_train)
# evaluate model
yhat = model.predict(X_test)
mse = mean_squared_error(y_test, yhat)
print('Mean Squared Error: %.3f' % mse)
# make a prediction
row = [[-0.56175409, 0.56096297, -1.01336191, -0.65477558, -0.32444283]]
yhat = model.predict(row)
print('Predicted: %.3f' % yhat[0])
Running the example gives an output like:
Mean Squared Error: 2104.554
Predicted: -2.791
The steps are as follows:
First, a synthetic regression dataset is generated using the make_regression() function. This creates a dataset with a specified number of samples (n_samples), features (n_features), and a fixed random seed (random_state) for reproducibility. The dataset is split into training and test sets using train_test_split().
Next, a DecisionTreeRegressor model is instantiated with default hyperparameters. The model is then fit on the training data using the fit() method.
The performance of the model is evaluated by comparing the predictions (yhat) to the actual values (y_test) using the mean squared error metric.
A single prediction can be made by passing a new data sample to the predict() method.
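Once fit, the structure of the learned tree can also be inspected. As a brief sketch, assuming the model variable fit in the example above, the following reports the tree depth, the number of leaves, and the feature importances:
# inspect the fitted tree (assumes `model` was fit as in the example above)
print('Tree depth:', model.get_depth())
print('Number of leaves:', model.get_n_leaves())
print('Feature importances:', model.feature_importances_)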
This example demonstrates how to quickly set up and use a DecisionTreeRegressor
model for regression tasks, showcasing the flexibility and power of this algorithm in scikit-learn.
The model can be fit directly on the training data without the need for scaling or normalization. Once fit, the model can be used to make predictions on new data, enabling its use in real-world regression problems.
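If the default tree overfits, the hyperparameters described earlier can be tuned with a grid search over cross-validated mean squared error. The sketch below is illustrative: the grid values are assumptions rather than recommendations, and it reuses the same synthetic dataset.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor
# generate regression dataset
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=1)
# illustrative hyperparameter grid (values are assumptions)
param_grid = {
    'max_depth': [3, 5, None],
    'min_samples_split': [2, 10],
    'min_samples_leaf': [1, 4],
}
# search the grid with 5-fold cross-validation
search = GridSearchCV(DecisionTreeRegressor(random_state=1), param_grid, scoring='neg_mean_squared_error', cv=5)
search.fit(X, y)
print('Best parameters:', search.best_params_)
print('Best MSE: %.3f' % -search.best_score_)
Constraining max_depth and the minimum sample counts limits how finely the tree can partition the training data, which typically trades a little training accuracy for better generalization.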