In one of our previous articles, we discussed the Support Vector Machine Classifier (SVC). The Linear Support Vector Machine Classifier, or LinearSVC, is very similar to SVC. SVC uses the RBF kernel by default, whereas LinearSVC uses a linear kernel. LinearSVC is also implemented on top of the liblinear solver rather than libsvm, and it offers more options for the choice of loss functions and penalties. As a result, LinearSVC is more suitable for larger datasets.
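To make the contrast concrete, here is a minimal sketch (assuming scikit-learn is installed; the kernel, loss, and penalty values shown are the library defaults, written out explicitly):

```python
from sklearn.svm import SVC, LinearSVC

# SVC defaults to the RBF kernel and is backed by libsvm
svc = SVC(kernel="rbf")

# LinearSVC is backed by liblinear; it always fits a linear decision
# function and exposes extra choices of loss and penalty
linear_svc = LinearSVC(loss="squared_hinge", penalty="l2", C=1.0, max_iter=20000)
```

Because liblinear scales roughly linearly with the number of samples, the second model is usually the better choice when the dataset is large and a linear decision boundary is acceptable.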
We can use the following Python code to implement linear SVC using sklearn.
from sklearn.svm import LinearSVC
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=5, n_informative=4,
                           n_redundant=1, n_repeated=0, n_classes=3,
                           shuffle=True, random_state=1)

model = LinearSVC(max_iter=20000)
kfold = KFold(n_splits=10, shuffle=True, random_state=1)
scores = cross_val_score(model, X, y, cv=kfold, scoring="accuracy")
print("Accuracy: ", scores.mean())
Here we create two ndarrays, X and y. X contains a total of five features, of which four are informative and one is redundant (How to create datasets using make_classification()?).
X, y = make_classification(n_samples=200, n_features=5, n_informative=4,
                           n_redundant=1, n_repeated=0, n_classes=3,
                           shuffle=True, random_state=1)
We are also shuffling the samples and the features. The random_state argument initializes the pseudo-random number generator, so the same dataset is generated every time the code runs.
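As a quick sanity check on the generated data, we can inspect the shapes and class labels (a minimal sketch; the values follow directly from the arguments passed above):

```python
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=5, n_informative=4,
                           n_redundant=1, n_repeated=0, n_classes=3,
                           shuffle=True, random_state=1)

print(X.shape)       # (200, 5): 200 samples, each with 5 features
print(np.unique(y))  # [0 1 2]: the three class labels
```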