Gartner: Mitigating security threats in AI agents
Agents represent a step-change in the use of artificial intelligence in the enterprise - as attendees at Salesforce's annual conference saw first-hand this month - but they do not come without risk
Artificial intelligence (AI) continues to evolve at an unprecedented pace, with AI agents emerging as a particularly powerful and transformative technology. These agents, powered by advanced models from companies like OpenAI and Microsoft, are being integrated into various enterprise products, offering significant benefits in automation and efficiency. However, AI agents bring a host of new risks and security threats that organisations must address proactively.
Understanding the unique risks of AI agents
AI agents are not just another iteration of AI models; they represent a fundamental shift in how AI interacts with digital and physical environments. These agents can act autonomously or semi-autonomously, making decisions, taking actions, and achieving goals with minimal human intervention. While this autonomy opens up new possibilities, it also expands the threat surface significantly.
Traditionally, AI-related risks have been confined to the inputs, processing, and outputs of models, along with the vulnerabilities in the software layers that orchestrate them. With AI agents, however, the risks extend far beyond these boundaries. The chain of events and interactions initiated by AI agents can be vast and complex, often invisible to human operators. This lack of visibility can lead to serious security concerns, as organisations struggle to monitor and control the agents' actions in real time.
Among the most pressing risks are data exposure and exfiltration, which can occur at any point along the chain of agent-driven events. The unbridled consumption of system resources by AI agents – benign or malicious – can lead to denial-of-service or denial-of-wallet scenarios, where system resources or usage budgets are overwhelmed. Perhaps more concerning is the potential for unauthorised or malicious activities carried out by misguided autonomous agents, including "agent hijacking" by external actors.
The risk doesn't stop there. Coding errors within AI agents can lead to unintended data breaches or other security threats, while the use of third-party libraries or code introduces supply chain risks that can compromise both AI and non-AI environments. The hard-coding of credentials within agents, a common practice in low-code or no-code development environments, further exacerbates access management issues, making it easier for attackers to exploit these agents for nefarious purposes.
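To illustrate the credential point, here is a minimal Python sketch of the safer pattern: the agent reads its key from the runtime environment (a stand-in for a proper secrets manager) and fails closed if it is missing. The variable name AGENT_API_KEY is a hypothetical placeholder, not a prescribed standard.

```python
import os

# Anti-pattern: a credential baked into the agent's source code.
# API_KEY = "sk-live-abc123"  # anyone with access to the code can abuse this

def get_agent_credentials() -> str:
    """Fetch the agent's API key from the runtime environment.

    Illustrative only: AGENT_API_KEY is an assumed variable name, and a
    real deployment would typically pull from a secrets manager instead.
    """
    api_key = os.environ.get("AGENT_API_KEY")
    if api_key is None:
        # Fail closed rather than falling back to a shared default secret
        raise RuntimeError("AGENT_API_KEY is not set; refusing to start agent")
    return api_key
```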
Read more about AI agents
- Agent technology, fuelled by generative AI, is growing fast. Vendors like Google are providing products to help enterprises create their own agents. However, there are concerns.
- CRM giant Salesforce's Agentforce lets organisations build and deploy autonomous agents to automate business processes through advanced learning and data integration.
- Zendesk customers only pay for successful AI query resolutions, easing the transition to AI-powered customer support and improving scalability during peak periods.
Three essential controls to mitigate AI agent risks
Given the multifaceted risks associated with AI agents, organisations should implement robust controls to manage these threats effectively. The first step in mitigating AI agent risks is to provide a comprehensive view and map of all agent activities, processes, connections, data exposures, and information flows. This visibility is crucial for detecting anomalies and ensuring that agent interactions align with enterprise security policies. An immutable audit trail of agent interactions should also be maintained to support accountability and traceability.
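As an illustration of what an immutable audit trail can look like, the following Python sketch hash-chains each logged agent action so that any later tampering breaks the chain. It is a simplified, in-memory example under assumed field names; a production system would persist entries to write-once storage.

```python
import hashlib
import json
import time

class AgentAuditTrail:
    """Append-only, hash-chained log of agent actions (illustrative sketch)."""

    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, agent_id: str, action: str, detail: dict) -> dict:
        # Each entry commits to the previous one via its hash
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        entry = {
            "ts": time.time(),
            "agent_id": agent_id,
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks a link."""
        prev_hash = "0" * 64
        for entry in self._entries:
            if entry["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```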
It is also essential to have a detailed dashboard that tracks how AI agents are used, their performance against enterprise policies, and their compliance with security, privacy, and legal requirements. This dashboard should also integrate with existing enterprise identity and access management (IAM) systems to enforce least privilege access and prevent unauthorised actions by AI agents.
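The least-privilege point can be sketched as a deny-by-default authorisation check. The example below is hypothetical: in practice the scope grants would be resolved from the enterprise IAM system, not a hard-coded dictionary, and the agent and scope names are invented for illustration.

```python
from typing import Any, Callable

# Hypothetical scope map: in a real deployment these grants would come
# from the enterprise IAM system rather than in-memory data.
AGENT_SCOPES = {
    "invoice-agent": {"read:invoices", "write:invoices"},
    "support-agent": {"read:tickets"},
}

def authorise(agent_id: str, scope: str) -> bool:
    """Deny by default: unknown agents and ungranted scopes get nothing."""
    return scope in AGENT_SCOPES.get(agent_id, set())

def perform(agent_id: str, scope: str, action: Callable[[], Any]) -> Any:
    """Gate every agent action behind an explicit scope check."""
    if not authorise(agent_id, scope):
        raise PermissionError(f"{agent_id} lacks scope '{scope}'")
    return action()
```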
Once a comprehensive map of agent activities is in place, consider establishing mechanisms to detect and flag any anomalous or policy-violating activities. Baseline behaviours should be established to identify outlier transactions, which can then be addressed through automatic real-time remediation.
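One simple way to flag outlier transactions against a baseline is a z-score test over an agent's recent behaviour, as in the Python sketch below. The metric, window and threshold are assumptions for illustration; real systems would combine many signals.

```python
from statistics import mean, stdev

def is_outlier(history: list[float], value: float, threshold: float = 3.0) -> bool:
    """Flag a transaction metric that deviates sharply from the baseline.

    `history` holds the agent's recent values for one metric (e.g. records
    read per task); a z-score beyond `threshold` marks the value an outlier.
    """
    if len(history) < 2:
        return False  # not enough data to establish a baseline yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu  # flat baseline: any deviation is anomalous
    return abs(value - mu) / sigma > threshold
```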
Given the speed and volume of AI agent interactions, humans alone cannot scale the oversight and remediation required. Therefore, implement tools that can automatically suspend and remediate rogue transactions while forwarding any unresolved issues to human operators for manual review.
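A minimal sketch of that triage flow might look like the following: anomalous transactions are suspended automatically at machine speed, policy-remediable ones are fixed in place, and the remainder are queued for human review. The field names `anomalous` and `auto_remediable` are hypothetical.

```python
from queue import Queue

review_queue: Queue = Queue()  # unresolved cases awaiting human review

def remediate(txn: dict) -> None:
    """Placeholder for a policy-driven fix, e.g. revoking a token."""
    txn["status"] = "remediated"

def handle_transaction(txn: dict) -> None:
    """Suspend anomalous transactions automatically and escalate the rest."""
    if not txn.get("anomalous"):
        return  # normal traffic flows through untouched
    txn["status"] = "suspended"  # act first, at machine speed
    if txn.get("auto_remediable"):
        remediate(txn)
    else:
        review_queue.put(txn)  # human-in-the-loop for ambiguous cases
```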
The final control involves applying automatic real-time remediation to address detected anomalies. This may include actions such as redacting sensitive data, enforcing least privilege access, and blocking access when violations are detected. Deny lists of threat indicators and files that AI agents are disallowed from accessing should also be maintained. A continuous monitoring and feedback loop should be established to identify and correct any unwanted actions resulting from AI agent inaccuracies.
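As a final illustration, the sketch below pairs a deny-list check with simple output redaction. The listed paths and the e-mail pattern are placeholder assumptions; real deny lists would be fed by threat intelligence, and real redaction would cover far more data types.

```python
import re

# Hypothetical deny list of resources agents must never touch
DENY_LIST = {"/etc/shadow", "payroll_export.csv"}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def check_access(path: str) -> None:
    """Block access to denied resources before the agent acts."""
    if path in DENY_LIST:
        raise PermissionError(f"agent access to {path} is denied by policy")

def redact(text: str) -> str:
    """Mask e-mail addresses before output leaves the agent boundary."""
    return EMAIL_RE.sub("[REDACTED]", text)
```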
As AI agents become increasingly integrated into enterprise environments, the associated risks and security threats cannot be ignored. Organisations must educate themselves on these new risks and implement the necessary controls to mitigate them. By viewing and mapping all AI agent activities, detecting and flagging anomalies, and applying real-time remediation, businesses can harness the power of AI agents while maintaining robust security measures. In this rapidly evolving landscape, proactive risk management is not just an option – it is a necessity.
Avivah Litan is a Distinguished VP Analyst at Gartner. Digital risk management and strategies for cyber security resilience will be further discussed at the Security & Risk Management Summit 2024 in London, from 23-25 September.