
Function calling hallucination risk for AI

Last updated: May 27, 2025
Robustness
Agentic AI risks
Specific to agentic AI

Description

AI agents might make mistakes when generating function calls (calls to tools that execute actions). These mistakes can result in incorrect, unnecessary, or harmful actions being taken. Examples include calling the wrong function or passing the wrong parameters to the right function, as shown in the sketch below.
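As a concrete illustration, the minimal sketch below checks a model-generated function call against a registry of allowed tools. The function names, parameters, and the registry itself are hypothetical assumptions for illustration, not part of any particular agent framework.

```python
import json

# Hypothetical tool registry: maps each function name an agent is allowed
# to call to the set of parameter names that function accepts.
TOOL_REGISTRY = {
    "get_weather": {"city"},
    "send_email": {"to", "subject", "body"},
}

def validate_call(raw_call: str) -> list[str]:
    """Return a list of problems found in a model-generated function call."""
    problems = []
    call = json.loads(raw_call)
    name = call.get("name")
    args = call.get("arguments", {})
    if name not in TOOL_REGISTRY:
        problems.append(f"hallucinated function: {name!r}")
    else:
        unknown = set(args) - TOOL_REGISTRY[name]
        if unknown:
            problems.append(f"hallucinated parameters: {sorted(unknown)}")
    return problems

# A model might emit a call to a function that was never defined ...
print(validate_call('{"name": "delete_user", "arguments": {"id": 7}}'))
# ... or pass a parameter the function does not accept.
print(validate_call('{"name": "get_weather", "arguments": {"zip": "10001"}}'))
```

The first call is rejected because no `delete_user` tool exists; the second because `get_weather` takes no `zip` parameter.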

Why is function calling hallucination a concern for foundation models?

Hallucinations when generating function calls might result in wrong or redundant actions being performed. Depending on the actions taken, AI agents can cause harm to the owners and users of those agents.
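One way to limit such harm, sketched below using `validate_call` and `TOOL_REGISTRY` from the previous example, is to reject calls that fail validation and route side-effecting actions through human confirmation. The set of destructive actions and the approval flow are assumptions for illustration, not features of any specific framework.

```python
import json

# Hypothetical policy: actions with external side effects require approval.
DESTRUCTIVE = {"send_email"}

def execute_call(raw_call: str) -> str:
    """Guard execution of a model-generated call; reuses validate_call above."""
    problems = validate_call(raw_call)
    if problems:
        # Refuse to act on a hallucinated call rather than guessing.
        return f"rejected: {'; '.join(problems)}"
    name = json.loads(raw_call)["name"]
    if name in DESTRUCTIVE:
        # Side-effecting actions go to a human for confirmation.
        return f"pending approval: {name}"
    return f"executed: {name}"

print(execute_call('{"name": "get_weather", "arguments": {"city": "Oslo"}}'))
print(execute_call('{"name": "send_email", "arguments": {"to": "a@b.c", "subject": "hi", "body": "..."}}'))
print(execute_call('{"name": "drop_tables", "arguments": {}}'))
```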

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the risks of foundation models. Many of these press events are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work toward mitigations. These examples are highlighted for illustrative purposes only.