- Using SVC with One-Vs-One (OVO) or One-Vs-Rest (OVR) strategy by specifying the decision_function_shape argument of the SVC() constructor.
- Using the One-Vs-One (OVO) classifier with SVC
- Using the One-Vs-Rest (OVR) classifier with SVC
We have already discussed the last two methods in our previous articles. Interested readers can follow the above links to learn more. In this article, we will discuss the first method.
We can use the following Python code to apply the OVO or the OVR strategy with SVC by specifying the decision_function_shape argument in the SVC() constructor.
import seaborn
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Load the iris dataset and split it into features (X) and target (y)
dataset = seaborn.load_dataset("iris")
D = dataset.values
X = D[:, :-1]
y = D[:, -1]

# 10-fold cross-validation with shuffling
kfold = KFold(n_splits=10, shuffle=True, random_state=1)

# SVC using the one-vs-one (OVO) decision function shape
model = SVC(decision_function_shape="ovo")

# Estimate the accuracy on each fold and report the mean
scores = cross_val_score(model, X, y, cv=kfold, scoring="accuracy")
print("Accuracy: ", scores.mean())
Here, we are first reading the iris dataset and then splitting the columns of the dataset into features and the target variable.
dataset = seaborn.load_dataset("iris")
D = dataset.values
X = D[:, :-1]
y = D[:, -1]
The last column of the dataset contains the target variable. So, X here contains all the features and y contains the target variable.
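If you want to double-check the split, an optional sanity check like the following prints the shape of the feature matrix and the class labels (the comments show the expected output for the iris dataset):
print(X.shape)   # (150, 4) - 150 rows, 4 numeric feature columns
print(set(y))    # {'setosa', 'versicolor', 'virginica'}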
kfold = KFold(n_splits=10, shuffle=True, random_state=1)
Now, we are initializing the k-fold cross-validation with 10 splits. We are shuffling the data before splitting, and the random_state argument seeds the pseudo-random number generator used for the shuffling, which makes the splits reproducible.
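To see what KFold actually produces, you can iterate over the splits. With the 150 iris samples and 10 splits, each fold keeps 15 samples for testing and uses the remaining 135 for training:
for fold, (train_index, test_index) in enumerate(kfold.split(X), start=1):
    print("Fold %d: train=%d, test=%d" % (fold, len(train_index), len(test_index)))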
model = SVC(decision_function_shape="ovo")
Now, we are initializing the model using the SVC class. The argument decision_function_shape="ovo" indicates that we are using the OVO strategy here. We can also use the OVR strategy instead of OVO.
model = SVC(decision_function_shape="ovr")
Please note that the default value of decision_function_shape is "ovr". Also note that, internally, SVC always trains multi-class models using the one-vs-one strategy; the decision_function_shape argument only controls whether the output of decision_function() is returned in OVO or OVR form, not how the classifier is trained.
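The difference between the two settings is easiest to see on a problem with more than three classes, because with the three iris classes both shapes happen to have three columns (3 × 2 / 2 = 3). The following is a minimal sketch on a synthetic four-class dataset (make_classification and the variable names are used here purely for illustration):
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic 4-class dataset, used only to illustrate the decision function shapes
X4, y4 = make_classification(n_samples=200, n_features=8, n_informative=6,
                             n_classes=4, random_state=1)

ovo_model = SVC(decision_function_shape="ovo").fit(X4, y4)
ovr_model = SVC(decision_function_shape="ovr").fit(X4, y4)

# OVO shape: one column per pair of classes -> 4 * 3 / 2 = 6 columns
print(ovo_model.decision_function(X4).shape)  # (200, 6)
# OVR shape: one column per class -> 4 columns
print(ovr_model.decision_function(X4).shape)  # (200, 4)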
scores = cross_val_score(model, X, y, cv=kfold, scoring="accuracy")
print("Accuracy: ", scores.mean())
Now, we are using the cross_val_score() function to estimate the performance of the model. We are using the accuracy score here (What is the accuracy score in machine learning?). Please note that cross_val_score() returns one accuracy score for each of the 10 folds of the k-fold cross-validation; we are printing the average of these scores.
The output of the program will look like the following:
Accuracy: 0.96
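If you also want to see how the accuracy varies across the folds, you can print the individual scores returned by cross_val_score() along with their spread:
scores = cross_val_score(model, X, y, cv=kfold, scoring="accuracy")
print("Per-fold accuracy:", scores)
print("Mean: %.3f  Std: %.3f" % (scores.mean(), scores.std()))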