AI risk atlas

Explore this atlas to understand some of the risks of working with agentic AI, generative AI, and machine learning models.
New and amplified risks of agentic AI

An AI agent is a software entity that employs AI techniques and has agency to act in its environment based on set goals, which means it can decide which actions to perform and has the ability to execute them. Agentic AI systems are software systems that leverage AI agents (together with other components like tools, planners, memory, and datasets), pursue goals, and can operate autonomously.
AI agents can perform three types of actions:
- Take actions that impact the world (physical or digital).
- Consult resources and use tools.
- Decide which resources, tools, or other AI agents to use, and select them.
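The three action types above can be illustrated with a minimal single-step agent sketch. The tool names, selection policy, and goals here are hypothetical placeholders for illustration only; they are not drawn from any particular agent framework, and a real agent would typically use an LLM planner rather than keyword matching.

```python
# Minimal sketch of one agent step, assuming two stubbed tools.

def search_web(query):
    # A "consult resources and use tools" action (stubbed).
    return f"results for {query!r}"

def send_email(to, body):
    # An action that impacts the (digital) world (stubbed).
    return f"email sent to {to}"

# The set of tools the agent can select from.
TOOLS = {"search": search_web, "email": send_email}

def agent_step(goal):
    """Decide which tool to use for a goal, select it, and execute it."""
    # 1. Decide: a trivial keyword policy stands in for an LLM planner.
    if "find" in goal:
        name, args = "search", (goal,)
    else:
        name, args = "email", ("user@example.com", goal)
    # 2. Select and 3. act: look up the chosen tool and invoke it.
    return name, TOOLS[name](*args)
```

For example, `agent_step("find recent risk taxonomies")` selects the `search` tool, while any other goal routes to `email`; the point is only that deciding, selecting, and acting are distinct steps.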
The risks in this section are specific to, or amplified by, agentic AI. Because most recent agents are built on large language models, the generative AI risks in the following section may also apply to agentic AI.
Risks are categorized with one of these tags:
Privacy
Value alignment
Robustness
Computational inefficiency
Governance
Societal impact
Explainability
All risks

Risks are categorized with one of these tags:
The risks below apply to generative AI models and to traditional (non-generative) AI models. They may also apply to agentic AI, especially when the agent's behavior or output is determined by a generative or traditional AI model.