Excerpt from BetaNews Article, Published on August 12, 2025
Agentic AI is transforming business operations by enabling AI systems to perform tasks autonomously, but its rapid adoption is raising cybersecurity concerns in 2025. According to a recent report conducted by Censuswide for Salt Security, more than half of organizations now deploy, or plan to deploy, agentic AI in customer-facing roles. However, only a fraction conduct daily API risk assessments or have dedicated security teams overseeing AI initiatives. This lack of governance has widened a trust gap: consumers remain wary of sharing personal information with AI agents, with just 22 percent saying they feel comfortable doing so.
While agentic AI enhances customer engagement and operational efficiency, it also exposes companies to significant cybersecurity risks, including data leakage and attacks on AI-driven systems. Businesses are urged to prioritize API discovery, governance, and security to prevent breaches and build consumer trust. Michael Callahan, CMO at Salt Security, emphasizes that the success of agentic AI depends on securing the underlying APIs that power it; otherwise, the risks will escalate rapidly.
On the consumer side, interactions with AI chatbots have become more frequent, but consumers often feel pressured to share personal information to complete tasks, compounding privacy concerns. This dynamic highlights an evolving cybersecurity landscape in which businesses must balance innovation with safeguarding sensitive data.
Agentic AI stands as both a powerful tool and a potential vulnerability, underscoring the urgent need for comprehensive security strategies as this technology becomes more embedded in daily operations. Companies that effectively govern agentic AI and ensure transparency will be better positioned to harness its benefits while mitigating growing cyber threats.
To delve deeper into this topic, read the full BetaNews article.