Companies are rushing to adopt AI agents, but security worries are mounting
- Almost half of IT teams do not fully know what data their AI agents access every day
- Enterprises love AI agents, but also fear what they do behind closed digital doors
- AI tools now need governance, audit trails and controls, just like human employees
Despite growing enthusiasm for agentic AI across companies, new research suggests that the rapid expansion of these tools is outpacing efforts to secure them.
A Sailpoint survey of 353 IT professionals responsible for enterprise security revealed a complex mix of optimism and fear around AI agents.
The survey reports that 98% of organizations intend to expand their use of AI agents in the coming year.
AI agent adoption is outpacing security readiness
AI agents are being integrated into operations that process sensitive business data, from customer and financial records to legal documents and supply chain transactions, yet 96% of respondents said they consider these same agents a growing security threat.
A core problem is visibility: only 54% of professionals claim to be fully aware of the data their agents can access, leaving almost half of enterprise environments in the dark about how AI agents handle critical information.
Compounding the problem, 92% of respondents agreed that governing AI agents is crucial for security, yet only 44% have an actual policy in place.
Moreover, eight out of ten companies say their AI agents have taken actions they were not intended to, including accessing unauthorized systems (39%), sharing inappropriate data (33%) and downloading sensitive content (32%).
Even more disturbing, 23% of respondents admitted that their AI agents had been tricked into revealing access credentials, a potential gold mine for malicious actors.
A notable finding is that 72% believe AI agents pose a greater risk than traditional machine identities.
Part of the reason is that AI agents often require multiple identities to function effectively, especially when they are integrated with high-performance AI tools or systems used for development and writing.
Calls for a shift to an identity-first model are growing louder, with Sailpoint and others arguing that organizations must treat AI agents like human users, complete with access controls, accountability mechanisms and full audit trails.
AI agents are a relatively new addition to the business landscape, and it will take time for organizations to fully integrate them into their operations.
“Many organizations are still early in this journey, and growing concerns around controls underscore the need for stronger, more comprehensive identity security strategies,” Sailpoint concluded.