In one of our previous articles, we discussed the Support Vector Regressor (SVR). Linear SVR is very similar: SVR uses the "rbf" kernel by default, while Linear SVR uses a linear kernel. Linear SVR is also implemented on top of liblinear rather than libsvm, and it offers more choices of penalties and loss functions. As a result, it scales better to larger numbers of samples.
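To illustrate the difference, here is a small sketch (on a synthetic dataset of our own choosing) that fits both SVR with a linear kernel and LinearSVR on the same data. The loss value "squared_epsilon_insensitive" and the max_iter setting are illustrative choices, not required by the article:

```python
from sklearn.svm import SVR, LinearSVR
from sklearn.datasets import make_regression

# A small synthetic regression dataset for illustration
X, y = make_regression(n_samples=200, n_features=5, random_state=1)

# SVR with an explicit linear kernel (libsvm-based)
svr = SVR(kernel="linear").fit(X, y)

# LinearSVR (liblinear-based); loss can be "epsilon_insensitive"
# (the default) or "squared_epsilon_insensitive"
lsvr = LinearSVR(loss="squared_epsilon_insensitive", max_iter=10000).fit(X, y)

print(svr.score(X, y), lsvr.score(X, y))
```

Both models fit a linear function here, but LinearSVR's liblinear solver typically trains faster as the sample size grows.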
We can use the following Python code to implement linear SVR using sklearn in Python.
from sklearn.svm import LinearSVR
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

# Create a synthetic dataset with 5 features and 1 target
X, y = make_regression(n_samples=200, n_features=5, n_targets=1, shuffle=True, random_state=1)

# Linear SVR model with default parameters
model = LinearSVR()

# Evaluate with 10-fold cross-validation using the R-squared score
kfold = KFold(n_splits=10, shuffle=True, random_state=1)
scores = cross_val_score(model, X, y, cv=kfold, scoring="r2")
print("R2: ", scores.mean())
Here, we first use the make_regression() function to create two ndarrays, X and y. X contains 5 features, and y contains one target. (How to create datasets using make_regression() in sklearn?)
X, y = make_regression(n_samples=200, n_features=5, n_targets=1, shuffle=True, random_state=1)
The argument shuffle=True indicates that we are shuffling the features and the samples. And random_state is used to initialize the pseudo-random number generator that is used for randomization. …