Trusted answers to developer questions

Related Tags

sklearn
communitycreator
python

# What is the f1_score function in Sklearn?

Talha Ashar


### Overview

In Python, the f1_score function of the sklearn.metrics package calculates the F1 score for a set of predicted labels.

The F1 score is the harmonic mean of precision and recall, as shown below:

F1_score = 2 * (precision * recall) / (precision + recall)


An F1 score ranges from 0 to 1, with 0 being the worst score and 1 being the best.
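As a quick sanity check, we can compute the harmonic mean by hand and compare it against f1_score. The labels below are made up for illustration:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# made-up binary labels for illustration
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]

p = precision_score(y_true, y_pred)  # 3 true positives out of 4 predicted positives
r = recall_score(y_true, y_pred)     # 3 true positives out of 4 actual positives
manual = 2 * (p * r) / (p + r)       # harmonic mean of precision and recall

print(manual)                    # 0.75
print(f1_score(y_true, y_pred))  # 0.75 -- matches the manual calculation
```

Both values agree, confirming that f1_score implements the formula above.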

To use the f1_score function, we’ll import it into our program, as shown below:

from sklearn.metrics import f1_score


### Syntax

sklearn.metrics.f1_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn')


### Parameters

The f1_score function accepts the following parameters:

• y_true: These are the true labels.

• y_pred: These are the predicted labels.

• labels: This parameter identifies the labels to be included when there is a multiclass problem.

• pos_label: This is the class to report in the case of a binary classification problem (used when average='binary').

• average: This is the type of averaging to be performed across classes in the case of multiclass data, such as 'micro', 'macro', or 'weighted'.

• sample_weight: These are any sample weights to be used in the calculation of the F1 score.

Note: Find a comprehensive list of parameters and their possible values in the official scikit-learn documentation.
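The labels parameter pairs naturally with average=None, which returns one F1 score per class instead of a single averaged value. A small sketch with made-up labels:

```python
from sklearn.metrics import f1_score

# made-up multiclass labels for illustration
y_true = ["a", "c", "b", "a"]
y_pred = ["c", "c", "b", "a"]

# average=None returns per-class F1 scores, ordered by the labels parameter
scores = f1_score(y_true, y_pred, labels=["a", "b", "c"], average=None)
print(scores)  # roughly [0.667, 1.0, 0.667]
```

Class "b" is predicted perfectly (F1 = 1.0), while "a" and "c" each have one mistake, so their scores are lower.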

### Return value

This function returns the F1 score of the positive class for binary classification problems. For multiclass problems, it returns a single score averaged across classes according to the average parameter, or an array of per-class scores when average=None.
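For binary problems with string labels, pos_label selects which class counts as positive. A minimal sketch with made-up labels:

```python
from sklearn.metrics import f1_score

# made-up binary labels with string class names
y_true = ["yes", "no", "yes", "yes"]
y_pred = ["yes", "yes", "no", "yes"]

# pos_label tells f1_score which class to treat as the positive class
score = f1_score(y_true, y_pred, pos_label="yes")
print(score)  # roughly 0.667
```

Here "yes" has two true positives, one false positive, and one false negative, so precision and recall are both 2/3, and the F1 score is their harmonic mean, 2/3.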

### Example

from sklearn.metrics import f1_score

# define true labels
true_labels = ["a", "c", "b", "a"]

# define corresponding predicted labels
pred_labels = ["c", "c", "b", "a"]

# find f1 scores for different weighted averages

score = f1_score(true_labels, pred_labels, average="macro")
print("Macro F1-Score: ", score)

score = f1_score(true_labels, pred_labels, average="micro")
print("Micro F1-Score: ", score)

score = f1_score(true_labels, pred_labels, average="weighted")
print("Weighted F1-Score: ", score)

### Explanation

• Line 1: We import the f1_score function from the sklearn.metrics library.

• Lines 4–7: We define the true labels and predicted labels. As there are 3 classes (a, b, c), this is a multiclass problem.

• Line 11: We calculate the macro-average F1 score of the predicted classes through the f1_score function. The calculated score is output accordingly.

• Line 14: We calculate the micro-average F1 score of the predicted classes through the f1_score function. The calculated score is output accordingly.

• Line 17: We calculate the weighted-average F1 score of the predicted classes through the f1_score function. The calculated score is output accordingly.



