# Accuracy Metrics in Model Validation: Precision, Recall, F1 Score, True Positives, False Positives, False Negatives (Interview Questions)

Most frequently asked data science interview questions on accuracy metrics: precision, recall, TP, FP, FN.

**What are true positives?**

A true positive is a positive example that the model correctly classifies as positive. The related *true positive rate* is the number of true positives divided by the total number of positive examples.

**What are false positives?**

A false positive is a negative example that the model incorrectly classifies as positive. The related *false positive rate* is the number of false positives divided by the total number of negative examples.

**What are false negatives?**

A false negative is a positive example that the model incorrectly classifies as negative, i.e., the test result indicates that a condition or attribute is not present when it actually is.
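To make the three counts concrete, they can be tallied directly from paired lists of true and predicted labels. This is a minimal sketch; the label lists are made up for illustration:

```python
# Count TP, FP, FN from true vs. predicted binary labels (1 = positive, 0 = negative).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # predicted positive, actually positive
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # predicted positive, actually negative
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # predicted negative, actually positive

print(tp, fp, fn)  # 3 1 1
```

In practice, libraries such as scikit-learn provide these counts via a confusion matrix, but counting them by hand is a common interview exercise.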

**What is Precision? And How to calculate it?**

Precision is the number of true positives divided by the total number of positive predictions, i.e., the sum of true positives and false positives. It answers: of all the examples the model predicted as positive, how many actually are positive?

Precision = TP / (TP + FP)

Where:

- TP is the number of *true positives*
- FP is the number of *false positives*
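The formula translates directly into code. A minimal sketch, using hypothetical counts:

```python
# Precision = TP / (TP + FP)
tp, fp = 3, 1  # hypothetical counts for illustration
precision = tp / (tp + fp)
print(precision)  # 0.75
```

Note that when the model makes no positive predictions (TP + FP = 0), precision is undefined and is conventionally reported as 0 or flagged separately.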

**What is Recall? And How to calculate Recall?**

Recall is the number of true positives divided by the sum of true positives and false negatives, i.e., the total number of actual positive examples. It answers: of all the actual positives, how many did the model find?

Recall = TP / (TP + FN)

Where:

- TP is the number of *true positives*
- FN is the number of *false negatives*
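As with precision, the recall formula is a one-liner. A minimal sketch, again with hypothetical counts:

```python
# Recall = TP / (TP + FN)
tp, fn = 3, 1  # hypothetical counts for illustration
recall = tp / (tp + fn)
print(recall)  # 0.75
```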

**What is F1 Score? And How to calculate F1 Score?**

The F1 score is a single measure of a classifier's performance that balances precision and recall. It is calculated as the harmonic mean of the two:

F1 = 2 * (precision * recall) / (precision + recall)
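The harmonic mean penalizes imbalance: if either precision or recall is low, F1 is pulled down toward the lower value. A minimal sketch with hypothetical precision and recall values:

```python
# F1 = 2 * (precision * recall) / (precision + recall)
precision, recall = 0.75, 0.6  # hypothetical values for illustration
f1 = 2 * (precision * recall) / (precision + recall)
print(round(f1, 4))  # 0.6667
```

Compare with the arithmetic mean of the same values, 0.675: the harmonic mean sits closer to the smaller of the two inputs.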