Coverage evaluation metric
Last updated: May 08, 2025
The coverage metric measures the extent to which the foundation model output is generated from the model input by calculating the percentage of output text that also appears in the input.
Metric details
Coverage is a content analysis metric for generative AI quality evaluations that compares your model output against your model input or context.
Scope
The coverage metric evaluates generative AI assets only.
- Types of AI assets: Prompt templates
- Generative AI tasks:
- Retrieval Augmented Generation (RAG)
- Text summarization
- Supported languages: English
Scores and values
The coverage metric score indicates the extent to which the foundation model output is generated from the model input. Higher scores indicate that a higher percentage of output words appear in the input text.
Range of values: 0.0-1.0
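Conceptually, the score is the fraction of output words that also appear in the input. The following Python sketch illustrates that calculation under the assumption of simple case-insensitive, word-level matching; the exact tokenization that the metric uses is not documented here, and the `coverage_score` function is illustrative, not part of any product API.

```python
import re

def coverage_score(model_input: str, model_output: str) -> float:
    """Approximate coverage: fraction of output words that appear in the input.

    Assumes simple case-insensitive word tokenization; the production
    metric may tokenize differently.
    """
    tokenize = lambda text: re.findall(r"\w+", text.lower())
    input_words = set(tokenize(model_input))
    output_words = tokenize(model_output)
    if not output_words:
        return 0.0
    matched = sum(1 for word in output_words if word in input_words)
    return matched / len(output_words)

# Example: a summary that mostly reuses words from its source context
context = "The quick brown fox jumps over the lazy dog."
summary = "A brown fox jumps over a dog."
print(round(coverage_score(context, summary), 2))  # 0.71 (5 of 7 words matched)
```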
Settings
- Thresholds:
- Lower limit: 0
- Upper limit: 1
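In practice, an evaluation flags any score that falls outside the configured limits. A minimal sketch, assuming a simple range check; the `violates_threshold` helper and the 0.5 lower limit are hypothetical, not part of any product API.

```python
def violates_threshold(score: float, lower_limit: float = 0.0, upper_limit: float = 1.0) -> bool:
    # A score outside the configured limits triggers a violation.
    return not (lower_limit <= score <= upper_limit)

# Example: require at least half of the output words to come from the input
print(violates_threshold(0.71, lower_limit=0.5))  # False: the score meets the threshold
```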
Parent topic: Evaluation metrics