False negative rate difference evaluation metric
Last updated: Feb 21, 2025

The false negative rate difference metric calculates the difference between the rates at which positive transactions are incorrectly scored as negative by your model for the monitored and reference groups.

Metric details

False negative rate difference is a fairness evaluation metric that can help determine whether your asset produces biased outcomes.

Scope

The false negative rate difference metric evaluates generative AI assets and machine learning models.

  • Types of AI assets:
    • Prompt templates
    • Machine learning models
  • Generative AI tasks: Text classification
  • Machine learning problem type: Binary classification

Scores and values

The false negative rate difference metric score indicates the difference in false negative rates for the monitored and reference groups.

  • Range of values: -1.0 to 1.0
  • Best possible score: 0.0
  • Scores:
    • Under 0: Fewer false negatives in the monitored group
    • At 0: Both groups have equal odds
    • Over 0: Higher rate of false negatives in the monitored group

Evaluation process

To calculate the false negative rate difference, confusion matrices are generated for the monitored and reference groups to identify the number of false negatives and true positives for each group. The false negative and true positive values are used to calculate the false negative rate for each group. The false negative rate of the reference group is subtracted from the false negative rate of the monitored group to calculate the false negative rate difference.

Do the math

The following formula is used for calculating the false negative rate (FNR), where FN is the number of false negatives and TP is the number of true positives:

FNR = FN / (FN + TP)

The following formula is used for calculating the false negative rate difference:

FNR difference = FNR of monitored group − FNR of reference group
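The calculation described above can be sketched in a few lines of Python. This is a minimal illustration, not the product's implementation; it assumes binary labels where 1 is the positive class and a per-record group flag where 1 marks the monitored group and 0 marks the reference group:

```python
def false_negative_rate(y_true, y_pred):
    """FNR = FN / (FN + TP), computed over records whose true label is positive."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    # If the group has no positive records, treat its FNR as 0.0 (illustrative choice).
    return fn / (fn + tp) if (fn + tp) else 0.0

def fnr_difference(y_true, y_pred, group):
    """FNR of the monitored group (group == 1) minus FNR of the reference group (group == 0)."""
    monitored = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 1]
    reference = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == 0]
    fnr_mon = false_negative_rate([t for t, _ in monitored], [p for _, p in monitored])
    fnr_ref = false_negative_rate([t for t, _ in reference], [p for _, p in reference])
    return fnr_mon - fnr_ref
```

For example, if the monitored group misses 2 of 4 positives (FNR 0.5) and the reference group misses 1 of 4 (FNR 0.25), the metric returns 0.25, indicating a higher rate of false negatives in the monitored group.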

Parent topic: Evaluation metrics