The feature drift evaluation metric measures the change in value distribution for important features.
Metric details
Feature drift is a drift v2 evaluation metric that evaluates data distribution changes for machine learning models.
Scope
The feature drift metric evaluates machine learning models only.
Types of AI assets: Machine learning models
Scores and values
The feature drift metric score indicates the change in value distribution for important features.
- Best possible score: 0.0
- Ratios:
  - At 0: No change in value distribution
  - Over 0: Increasing change in value distribution
Evaluation process
Drift is calculated for categorical and numeric features by measuring the probability distribution of continuous and discrete values. To identify discrete values for numeric features, a binary logarithm is used to compare the number of distinct values of each feature to the total number of values of each feature.
Do the math
The following binary logarithm formula is used to identify discrete numeric features:
distinct_values_count < log₂(total_count)

If the distinct_values_count is less than the binary logarithm of the total_count, the feature is identified as discrete.
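The discrete-value test can be sketched in a few lines of Python. This is a minimal illustration of the rule above; the function name is illustrative, not part of any product API:

```python
import math

def is_discrete(values):
    """A numeric feature is treated as discrete when its distinct-value
    count is below the binary logarithm of its total value count.
    (Hypothetical helper name, for illustration only.)"""
    total_count = len(values)
    distinct_values_count = len(set(values))
    return distinct_values_count < math.log2(total_count)

# A column of repeated ratings reads as discrete: 5 distinct values
# out of 1000 total, and 5 < log2(1000) ≈ 9.97.
print(is_discrete([1, 2, 3, 4, 5] * 200))  # True
```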
Jensen Shannon Distance is the normalized form of Kullback-Leibler (KL) Divergence that measures how much one probability distribution differs from another probability distribution. Jensen Shannon Distance is a symmetric score and always has a finite value.
The following formula is used to calculate the Jensen Shannon distance for two probability distributions, baseline (B) and production (P), where M is the mixture distribution (B + P) / 2:

JS distance(B, P) = √( (KL(B ‖ M) + KL(P ‖ M)) / 2 )
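As a minimal sketch, the Jensen Shannon distance for two discrete probability distributions can be computed directly from its definition as the square root of the averaged KL divergences against the mixture. This assumes both inputs already sum to 1; base-2 logarithms keep the score in [0, 1]:

```python
import math

def js_distance(b, p):
    """Jensen Shannon distance between discrete distributions B and P.
    Assumes both sum to 1; uses log base 2 so the result is in [0, 1]."""
    m = [(bi + pi) / 2 for bi, pi in zip(b, p)]
    def kl(a, c):
        # KL divergence, with 0 * log(0) treated as 0
        return sum(ai * math.log2(ai / ci) for ai, ci in zip(a, c) if ai > 0)
    return math.sqrt((kl(b, m) + kl(p, m)) / 2)

print(js_distance([0.5, 0.5], [0.5, 0.5]))  # 0.0: identical distributions
```

Because the score is symmetric, swapping the baseline and production arguments yields the same value, and fully disjoint distributions score 1.0, the maximum.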
The overlap coefficient is calculated by measuring the total area of the intersection between two probability distributions. To measure dissimilarity between distributions, the intersection or the overlap area is subtracted from 1 to calculate the amount of drift.
The following formula is used to calculate the overlap coefficient:
drift = 1 − Σ min(p(x), b(x)) Δx

- x is a series of equidistant samples that span the domain of the feature, ranging from the combined minimum of the baseline and production data to the combined maximum of the baseline and production data.
- Δx is the difference between two consecutive x samples.
- p(x) is the value of the density function for production data at an x sample.
- b(x) is the value of the density function for baseline data at an x sample.
Total variation distance measures the maximum difference between the probabilities that two probability distributions, baseline (B) and production (P), assign to the same transaction.
If the two distributions are equal, the total variation distance between them becomes 0.
The following formula is used to calculate total variation distance:
TVD(P, B) = ( Σ |p(x) − b(x)| Δx ) / ( Σ p(x) Δx + Σ b(x) Δx )

- x is a series of equidistant samples that span the domain of the feature, ranging from the combined minimum of the baseline and production data to the combined maximum of the baseline and production data.
- Δx is the difference between two consecutive x samples.
- p(x) is the value of the density function for production data at an x sample.
- b(x) is the value of the density function for baseline data at an x sample.
The denominator represents the total area under the density function plots for production and baseline data. These summations approximate the integrations of the density functions over the domain space; each term should be 1, so the total should be 2.
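The total variation calculation can be sketched with the same sampled-density convention as the overlap coefficient. A minimal illustration, assuming both densities are sampled at the same equidistant x points, dx apart:

```python
def total_variation_drift(p_density, b_density, dx):
    """Total variation distance from sampled densities. The denominator
    is the total area under both curves, which is approximately 2 when
    each density integrates to 1 (illustrative helper)."""
    numerator = sum(abs(p - b) for p, b in zip(p_density, b_density)) * dx
    denominator = sum(p_density) * dx + sum(b_density) * dx
    return numerator / denominator

# Equal distributions give a total variation distance of 0.
print(total_variation_drift([1.0, 1.0], [1.0, 1.0], 0.5))  # 0.0
```

Completely disjoint densities score 1.0, so, like the other drift scores in this topic, the value ranges from 0 (no change) upward as the distributions diverge.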
Parent topic: Evaluation metrics