sepal width, petal length, and petal width. Based on these features, a machine learning model can predict the species of each flower.
dataset = seaborn.load_dataset("iris") D = dataset.values X = D[:, :-1] y = D[:, -1]
The last column of the dataset contains the target variable (the species). So X contains all the features, and y contains the target variable.
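As a quick sanity check (an illustrative addition, assuming the snippet above has already run), we can inspect the shapes of the resulting arrays:

# X should hold the four feature columns for all 150 iris samples,
# and y the corresponding species labels.
print(X.shape)  # (150, 4)
print(y.shape)  # (150,)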
from sklearn.model_selection import KFold

kfold = KFold(n_splits=10, shuffle=True, random_state=1)
Here, we initialize the k-fold cross-validation. The n_splits argument sets the number of splits (folds), shuffle=True shuffles the data before splitting, and random_state seeds the pseudo-random number generator used for shuffling, so the splits are reproducible.
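To see what the kfold object actually produces, the following sketch (an addition for illustration, reusing kfold and X from the snippets above) prints the size of each train/test split:

# Each of the 10 iterations yields index arrays for the training and test folds.
for train_index, test_index in kfold.split(X):
    # With 150 samples and 10 folds, every test fold holds 15 samples.
    print(len(train_index), len(test_index))  # prints 135 15 for each fold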
classifier = LogisticRegression(solver="liblinear") ovo = OneVsOneClassifier(classifier)
Next, we initialize the logistic regression classifier and wrap it in a One-vs-One (OVO) classifier, which handles the multi-class problem by training one binary classifier for each pair of classes.
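For the three iris species, the OVO strategy builds one binary classifier per pair of classes, i.e. 3 * (3 - 1) / 2 = 3 classifiers. A quick sketch to confirm this (an illustrative addition, reusing ovo, X, and y from the snippets above):

# Fit the OVO wrapper on the full dataset just to inspect its structure.
ovo.fit(X, y)
print(len(ovo.estimators_))  # 3 pairwise logistic regression models
print(ovo.classes_)          # ['setosa' 'versicolor' 'virginica']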
from sklearn.model_selection import cross_val_score

scores = cross_val_score(ovo, X, y, scoring="accuracy", cv=kfold)
print("Accuracy: ", scores.mean())
We can now use the cross_val_score() function to estimate the performance of the model, using accuracy as the scoring metric. Each iteration of the k-fold cross-validation yields an accuracy score, and we print the average of these scores.
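Besides the mean, it can also be useful to look at the individual fold scores and their spread. A small sketch (an illustrative addition, reusing the scores array from the snippet above):

# scores is a NumPy array with one accuracy value per fold (10 values here).
print(scores)
# The standard deviation gives a rough idea of how stable the estimate is.
print("Std: ", scores.std())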
The output of the program will look like the following:
Accuracy: 0.9733333333333334