Unexplainable and untraceable actions risk for AI
Description
Explanations, lineage and trace information, and source attribution for AI agent actions might be imprecise, difficult to obtain, or entirely unavailable.
Why are unexplainable and untraceable actions a concern for foundation models?
Without clear explanations, lineage and trace information, and source attributions for AI agent actions, it is difficult for users, model validators, and auditors to understand and trust the model. Incorrect explanations might also lead to over-trust in the model.
Parent topic: AI risk atlas
We provide examples covered by the press to help explain many of the foundation model risks. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work toward mitigations. These examples are highlighted for illustrative purposes only.