Readability evaluation metric
Last updated: May 08, 2025
The readability metric determines how difficult the model's output is to read by measuring characteristics such as sentence length and word complexity.
Metric details
Readability is one of the generative AI quality metrics, a group of metrics that measure how well generative AI assets perform their tasks.
Scope
The readability metric evaluates generative AI assets only.
- Types of AI assets: Prompt templates
- Generative AI tasks:
- Text summarization
- Content generation
- Supported languages: English
Scores and values
The readability score indicates how easy the model's output is to read; higher scores indicate easier-to-read output.
- Range of values: 0-100
- Best possible score: 100
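This page does not publish the exact scoring formula, but the characteristics it names (sentence length and word complexity) and the 0-100 scale with a lower limit of 60 match the classic Flesch reading-ease score. The sketch below is a minimal illustration under that assumption; the function names `readability_score` and `count_syllables` and the naive vowel-group syllable counter are illustrative, not the product's documented implementation.

```python
import re

def count_syllables(word: str) -> int:
    # Naive estimate: one syllable per run of vowels (illustrative only).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_score(text: str) -> float:
    """Flesch reading ease:
    206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word).
    Higher scores mean the text is easier to read."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Note how both measured characteristics appear in the formula: words per sentence penalizes long sentences, and syllables per word penalizes complex words.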
Settings
- Thresholds:
- Lower limit: 60 (see the threshold-check sketch after this list)
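As a usage sketch built on the `readability_score` function above, an evaluation run could flag any output that falls below the lower limit. The threshold wiring here is illustrative, not the product API:

```python
output = "The model wrote this summary. It uses short, plain sentences."
score = readability_score(output)
if score < 60:  # lower-limit threshold from the Settings above
    print(f"Readability {score:.1f} is below the 60 threshold.")
else:
    print(f"Readability {score:.1f} meets the threshold.")
```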
Parent topic: Evaluation metrics