In Python, the `accuracy_score` function of the `sklearn.metrics` package calculates the accuracy score for a set of predicted labels against the true labels. To use the `accuracy_score` function, we'll import it into our program, as shown below:
```python
from sklearn.metrics import accuracy_score
```
The syntax of the `accuracy_score` function is as follows:

```python
sklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None)
```
The `accuracy_score` function accepts the following parameters:

- `y_true`: These are the true labels.
- `y_pred`: These are the predicted labels.
- `normalize`: If this value is `True`, the fraction of correct predictions is returned; otherwise, the number of correct predictions is returned. By default, `normalize` is `True`.
- `sample_weight`: These are any sample weights to be used in calculating the accuracy.

This function returns either the fraction of correct predictions or the number of correct predictions, depending on the value of the `normalize` parameter.
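To make the effect of these parameters concrete, here is a small sketch (the labels and weights below are made up for illustration) showing how `normalize` switches between a fraction and a count, and how `sample_weight` scales each sample's contribution to the accuracy:

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]  # one mismatch at index 2

# Default normalize=True: fraction of correct predictions
print(accuracy_score(y_true, y_pred))                   # 0.75

# normalize=False: count of correct predictions
print(accuracy_score(y_true, y_pred, normalize=False))  # 3

# sample_weight: each sample's weight scales its contribution.
# Correct samples carry weights 1, 1, and 5 (total 7) out of 8,
# so the weighted accuracy is 7/8 = 0.875.
weights = [1, 1, 1, 5]
print(accuracy_score(y_true, y_pred, sample_weight=weights))  # 0.875
```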
The code below shows how to use the `accuracy_score` function in Python.
```python
from sklearn.metrics import accuracy_score

# define true labels
true_labels = ["a", "c", "b", "a"]

# define corresponding predicted labels
pred_labels = ["c", "c", "b", "a"]

# find accuracy scores
accuracy = accuracy_score(true_labels, pred_labels)
print("The accuracy of prediction is: ", accuracy)

# find number of accurate predictions
accurate_predictions = accuracy_score(true_labels, pred_labels, normalize=False)
print("The number of accurate predictions is: ", accurate_predictions)
```
Line 1: We import the `accuracy_score` function from the `sklearn.metrics` module.
Lines 4-7: We define the true labels and predicted labels.
Line 10: We use the `accuracy_score` function to find the fraction of correctly classified labels. As `true_labels` and `pred_labels` have only 1 value that does not match and 3 values that match, the `accuracy_score` function returns `0.75`.
Line 14: We use the `accuracy_score` function with `normalize` set to `False`, so it returns the number of correctly classified labels, `3`.
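Under the hood, accuracy with the default settings is simply the mean of the element-wise matches between the two label arrays. As a quick sanity check (assuming NumPy is available), we can reproduce both results from the example above by hand:

```python
import numpy as np
from sklearn.metrics import accuracy_score

true_labels = ["a", "c", "b", "a"]
pred_labels = ["c", "c", "b", "a"]

# Element-wise comparison: [False, True, True, True]
matches = np.array(true_labels) == np.array(pred_labels)

print(matches.mean())  # fraction of correct predictions: 0.75
print(matches.sum())   # number of correct predictions: 3

# Both agree with accuracy_score
assert matches.mean() == accuracy_score(true_labels, pred_labels)
assert matches.sum() == accuracy_score(true_labels, pred_labels, normalize=False)
```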