from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
import pandas

data = pandas.read_csv("diabetes.csv")
D = data.values
X = D[:, :-1]
y = D[:, -1]
loocv = LeaveOneOut()
classifier = LogisticRegression(solver="liblinear")
results = cross_val_score(classifier, X, y, cv=loocv, scoring="accuracy")
print("Accuracy: ", results.mean())
Here, we first use pandas to read the Pima Indians Diabetes dataset. The dataset contains predictor variables such as the number of pregnancies the patient has had, BMI, insulin level, age, and so on. A machine learning model can learn from this dataset and predict whether a patient has diabetes based on these predictor variables.
D = data.values
X = D[:, :-1]
y = D[:, -1]
Next, we split the columns of the dataset into features and the target variable. The last column of the dataset holds the target variable, so X contains all the feature columns and y contains the target.
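To make the slicing concrete, here is a minimal sketch with a small toy array standing in for data.values (the toy values are made up for illustration):

```python
import numpy as np

# Toy array standing in for data.values: 3 rows, last column is the target.
D = np.array([[1, 2, 0],
              [3, 4, 1],
              [5, 6, 0]])

X = D[:, :-1]   # every column except the last -> feature matrix
y = D[:, -1]    # last column only -> target vector

print(X.shape, y.shape)  # (3, 2) (3,)
```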
loocv = LeaveOneOut()
Now, we use the LeaveOneOut class to initialize Leave-One-Out cross-validation.
classifier = LogisticRegression(solver="liblinear")
results = cross_val_score(classifier, X, y, cv=loocv, scoring="accuracy")
We now initialize the classifier using the LogisticRegression class. Note that LogisticRegression() uses the lbfgs (Limited-memory Broyden–Fletcher–Goldfarb–Shanno) solver by default. Here we pass solver="liblinear", which is a good choice for smaller datasets such as this one; on larger datasets, solvers such as "sag" or "saga" converge faster. cross_val_score then fits and evaluates the classifier once per LOOCV fold and returns the per-fold accuracies, whose mean is our LOOCV accuracy estimate.
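As a hedged sketch of the solver choice, the snippet below fits the same model with two different solvers on a synthetic dataset (generated with make_classification purely for illustration, since it does not depend on diabetes.csv being present):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the diabetes data: 200 samples, 8 features.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# liblinear: coordinate descent, well suited to smaller datasets.
small_clf = LogisticRegression(solver="liblinear").fit(X, y)

# saga scales better to large datasets (lbfgs is the default solver).
large_clf = LogisticRegression(solver="saga", max_iter=5000).fit(X, y)
```

Both fitted models expose the same predict/predict_proba interface; only the optimization routine differs.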





