As an increasing number of organizations use Agentic AI systems agents that can act on their own without human intervention the cyber-security risk associated with those agents has also changed compared to traditional software. Because of the fact that Agentic AI systems operate independently of humans, they can use tools, execute actions, and interface with other systems without any direct human input. The Open Web Application Security Project (OWASP) released the OWASP Top Ten Risks for Agentic AI in 2026 to help organizations understand the different ways that Agentic AI may be misused, manipulated and ultimately result in a potentially dangerous outcome for an organization. Each risk explanation has a very simple real world example to illustrate it.
The manipulation of agent goals refers to when an attacker influences an agent's goal to change from what it was originally intended to do. Because Agentic AI systems follow strict rules regarding goals, even minor redirects of an agent's goal could potentially have negative consequences for the organization. For example, if you were to influence an Agentic AI agent whose goal was to securely lower operational costs, one way to do so would be to use a covert approach (e.g., hidden prompts) to encourage it to "lower costs by writing off security tools." Once the agent received the prompt to "lower costs," it might automatically disable the current security monitoring systems, thereby exposing itself and the organization to future attacks.
The AI’s ability to take major actions for a person without waiting for earlier approval and/or bypassing previously established milestones in its implementation means that it could make autonomous decisions rapidly and perhaps permanently. For example, a cloud-based Resource Management AI may unintentionally delete an important Production Database because it misunderstood instructions to “delete unused backup databases”, which meant there was no human intervention needed prior to carrying out the deletion decision.
AI agents may be granted access to a number of resources and applications, such as cloud services through APIs (Application Programming Interfaces), Cloud Management Systems, Scripts, and Batch Processes. If an AI has access to the Cloud Management System(s) and the tools provided by organizations or end-users, but does not adhere to the policies or guidelines of the workplace regarding the usage of cloud resources, it can end up mismanaging its Resource(s) or performing unsafe actions against any cloud service provider’s resources. One of the most common scenarios is when a DevOps AI agent is directed to “delete unused backup files” but instead deletes all of the backup files, including files that are essential to Disaster Recovery, due to the absence of sufficient restrictions and safeguards against its extrajudicial actions.
Where an AI has access to more resources than necessary for successfully completing its assigned work or misuses its credential permissions, then the AI can be an unintentional high-privilege insider within an organization/institution. For instance, a Customer Service AI with elevated access to a database has the potential to see and modify sensitive Customer information, thereby holding/store different log files and other records of potential misuse of that Customer’s personal data, e.g., automated generic responses.
Memory poisoning is what happens when an attacker is able to change the way that an AI remembers and uses previous information causing that AI not to have a good decision-making process going forward. Since all AI agents learn from their past interactions with people and other AIs it can be particularly damaging when this happens. As an example, if an attacker tells an AI designed to sell products that company policy allows for large negotiated discounts because there is enough competition in that space, this could lead the AI to offer very low prices automatically to real customers when needed.
Many AI systems rely on components from third parties such as models, plugins, datasets, and frameworks. These components may have hidden risks associated with them. When these components are no longer safe to use or are compromised, they may make the AI system insecure prior to the deployment of that system. A specific instance of this would consist of an AI being set up to use a pre-trained model but that model will also privately share sensitive prompt data or other internal data with an external server controlled by the attacker.
Agentic AIs will automatically produce the code necessary to develop and implement solutions; they will also execute their codes if allowed. When agentic AIs are not properly monitored, this may produce a serious security risk. If an agentic AI is asked to resolve a performance issue and it produces a script that disables the checks put in place to protect the agentic AI or opens an additional network port, and then runs that script, it may create a significant vulnerability in the defense of that system.
Humans can be too reliant on AI to make decisions without verifying the information. It does not matter how confident an AI appears to be, it can make mistakes. An example of this would be an AI for security operations marking suspicious login attempts as “low risk” and ignoring them, only to discover later that it was the beginning of a slow moving insider attack.
Most companies now have many different AIs working together, communicating with each other, and making decisions to approve or reject actions. If these channels of communication between AIs are not properly authenticated, hackers can impersonate AIs. For example, a hacker could send a false message that stated “Approved by Admin AI” and another AI could take this as genuine and perform critical functions without sufficient authorization.
The potential for this type of risk is compounded in environments that use AI without established policies, recording, or auditing. Without access to records of AI activity, it will be extremely difficult to understand or investigate events that were caused by AI. For example, if a business has made a financial transaction that has been unauthorized, it will be difficult to establish accountability and means to rectify the error due to the absence of records that document the reasoning for the AI’s decision making, and what data it accessed.
In 2026, the OWASP Top 10 Agentic AI demonstrates how dramatically our entire understanding of Cybersecurity is changing; we now consider AI to be more than a piece of software, but rather an agent capable of making autonomous decisions. As such, security for Agentic AI demands stronger controls over access to the system, including Human Oversight, Continual Monitoring, and Adequate Governance, due to the real-world impact that failures & manipulations of Agentic AI can have once the autonomy has been granted to it.