Redundant actions risk for AI

Last updated: May 27, 2025
Computational inefficiency
Agentic AI risks
Specific to agentic AI

Description

AI agents can execute actions that are not needed to achieve their goal.

In an extreme case, AI agents might enter a cycle of executing the same actions repeatedly without making any progress. This can happen because of unexpected conditions in the environment, the AI agent's failure to reflect on its actions, errors in the AI agent's reasoning and planning, or the AI agent's lack of knowledge about the problem.
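One way such cycles are commonly addressed is to track an agent's recent actions and halt it when the same action recurs without progress. The sketch below illustrates this idea; the class name, parameters, and threshold policy are illustrative assumptions, not the API of any specific agent framework:

```python
from collections import deque

class RedundantActionGuard:
    """Illustrative guard that flags an agent stuck repeating an action.

    Keeps a sliding window of recent actions and refuses further
    execution once one action appears more than `max_repeats` times.
    (Hypothetical example, not tied to any real agent library.)
    """

    def __init__(self, max_repeats: int = 3, window: int = 10):
        self.max_repeats = max_repeats
        self.history = deque(maxlen=window)  # recent actions only

    def allow(self, action: str) -> bool:
        """Record the action; return False once it has repeated too often."""
        self.history.append(action)
        return self.history.count(action) <= self.max_repeats

# Usage: the third identical call is rejected when max_repeats=2.
guard = RedundantActionGuard(max_repeats=2)
results = [guard.allow("search('weather')") for _ in range(3)]
```

A sliding window (rather than a full history) lets legitimately repeated actions, separated by enough other work, pass through while still catching tight loops.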

Why are redundant actions a concern for foundation models?

Executing actions that are not needed for the goal can waste computational resources, increase cost, reduce the AI agent's efficiency in achieving the goal, and lead to potentially harmful outcomes. Executing the same actions repeatedly can prevent the AI agent from achieving the goal, strain computational resources, and increase cost. As agents become more autonomous, verifying that AI agents operate efficiently becomes increasingly time-consuming.

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the foundation models' risks. Many of these events covered by the press are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work toward mitigations. These examples are highlighted for illustrative purposes only.