Over- or under-reliance on AI agents risk for AI

Last updated: May 29, 2025
Value alignment
Agentic AI risks
Amplified by agentic AI

Description

Reliance, that is, the willingness to accept an AI agent's behavior, depends on how much a user trusts that agent and what they are using it for. Over-reliance occurs when a user puts too much trust in an AI agent, accepting its behavior even when that behavior is likely undesired. Under-reliance is the opposite: the user does not trust the AI agent when they should.

The increasing autonomy of AI agents (to take actions and to select and consult resources or tools), combined with the possibility of opaqueness and open-endedness, increases the variability and visibility of agent behavior. This makes trust harder to calibrate and can contribute to both over- and under-reliance.

Why is over- or under-reliance on AI agents a concern for foundation models?

Over- or under-reliance can lead to poor decision making by humans because their trust in the AI agent is miscalibrated, with negative consequences that escalate with the significance of the decision.

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the foundation model risks. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work toward mitigations. Highlighting these examples is for illustrative purposes only.