| Total Population | Predicted Class = False | Predicted Class = True |
|---|---|---|
| Actual Class = False | True Negative (TN) | False Positive (FP) |
| Actual Class = True | False Negative (FN) | True Positive (TP) |
So, in our example, the actual or expected output is:
Ya = [False, False, True, True, True, False, True, False, False, True]
And the predicted output is:
Y = [True, False, True, True, False, False, True, False, False, True]
So, the total number of each outcome is:
True Negative (TN) = labels predicted as False that are actually False = 4
False Positive (FP) = labels predicted as True that are actually False = 1
False Negative (FN) = labels predicted as False that are actually True = 1
True Positive (TP) = labels predicted as True that are actually True = 4
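As a quick check, we can count these four values directly in Python by comparing the two lists element by element. This is only a minimal sketch for illustration (it reuses the Ya and Y lists from above), not how sklearn computes the matrix internally:

```python
# Manually count TN, FP, FN, TP by comparing actual vs. predicted labels
Ya = [False, False, True, True, True, False, True, False, False, True]  # actual
Y = [True, False, True, True, False, False, True, False, False, True]   # predicted

tn = sum(1 for a, p in zip(Ya, Y) if not a and not p)  # actual False, predicted False
fp = sum(1 for a, p in zip(Ya, Y) if not a and p)      # actual False, predicted True
fn = sum(1 for a, p in zip(Ya, Y) if a and not p)      # actual True, predicted False
tp = sum(1 for a, p in zip(Ya, Y) if a and p)          # actual True, predicted True

print(tn, fp, fn, tp)  # 4 1 1 4
```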
So, the confusion matrix will be like the following:
| Total Population | Predicted Class = False | Predicted Class = True |
|---|---|---|
| Actual Class = False | True Negative (TN) = 4 | False Positive (FP) = 1 |
| Actual Class = True | False Negative (FN) = 1 | True Positive (TP) = 4 |
How to calculate the Confusion Matrix using the sklearn Python library?
We can use the following Python code to compute the confusion matrix.
```python
from sklearn.metrics import confusion_matrix

Ya = [False, False, True, True, True, False, True, False, False, True]
Y = [True, False, True, True, False, False, True, False, False, True]

conf_matrix = confusion_matrix(Ya, Y)
print("Confusion Matrix: \n", conf_matrix)
```
Here, Ya is the actual or expected output and Y is the predicted output. The output of the above program will be:
```
Confusion Matrix: 
 [[4 1]
 [1 4]]
```
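Note that sklearn orders the rows and columns by sorted label value, so with boolean labels the first row and column correspond to False and the second to True, matching the table above. If you want the four counts as separate numbers, the 2x2 matrix can be flattened with ravel(), as in the short sketch below (same Ya and Y as before):

```python
from sklearn.metrics import confusion_matrix

Ya = [False, False, True, True, True, False, True, False, False, True]
Y = [True, False, True, True, False, False, True, False, False, True]

# ravel() flattens the 2x2 matrix row by row: TN, FP, FN, TP
tn, fp, fn, tp = confusion_matrix(Ya, Y).ravel()
print("TN =", tn, "FP =", fp, "FN =", fn, "TP =", tp)  # TN = 4 FP = 1 FN = 1 TP = 4
```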