AI is transforming business operations, offering unprecedented productivity, faster decision-making, and new competitive edges. According to Gartner, by 2028, more than 95% of enterprises will be using generative AI APIs or models, and/or will have deployed GenAI-enabled applications in production environments. At Zscaler, we have witnessed exponential growth in AI transactions, with a 36x increase year-over-year, highlighting the explosive growth of enterprise AI adoption. The surge is fueled by ChatGPT, Microsoft Copilot, Grammarly, and other generative AI tools, which account for the majority of AI-related traffic from known applications.
However, AI adoption and its integration into daily workflows introduce novel security, data privacy, and compliance risks. For the past two years, security leaders have been grappling with shadow AI: the unsanctioned use of public AI tools like ChatGPT by employees. The initial response was often reactive (block the domains and hope for the best), but the landscape has shifted dramatically. AI is no longer just a destination tool or website; it's an integrated feature, embedded directly into the sanctioned, everyday business applications we rely on.
According to the 2025 Gartner Cybersecurity Innovations in AI Risk Management and Use Survey, 71% of cybersecurity leaders suspect or have evidence of employees using embedded AI features without going through necessary cybersecurity risk management processes.
This evolution from standalone shadow AI to embedded, pervasive AI creates far more complex and layered security challenges. Blocking is no longer a viable strategy when AI is part of your core collaboration suite. To safely harness the productivity benefits of AI, enterprises need a new security playbook: one that goes beyond simply blocking shadow AI and embraces a zero trust + AI security approach focused on visibility, context, and intent. This post explores this new frontier of AI security challenges and risks, and outlines a modern framework for securing AI across the enterprise.
Fig: Zscaler ThreatLabz Report: Top AI application usage
As organizations integrate AI deeper into their operations, our findings indicate they face a growing, twofold challenge: capturing the productivity gains AI promises while controlling the security, data privacy, and compliance risks it introduces. Below, we outline the five biggest AI security challenges that will shape how you protect the AI ecosystem, and how to address them.
Shadow AI can enable innovation, but it also exposes organizations to significant risks, particularly around data loss and breach potential. BCG’s latest “AI at Work” study reveals that 54% of employees openly admit they would use AI tools even without company authorization. The consequences of staying blind? According to a recent IBM report, 20% of organizations experienced breaches linked to unauthorized AI use, adding an average of $670,000 to breach costs. Shadow AI incidents also had serious downstream effects that reach well beyond security concerns.
These impacts demonstrate that shadow AI isn't just a security concern—it's a business risk that affects operations, finances, and reputation.
The new front line for AI security isn't a standalone website. It's the "AI" button inside the tools employees use every day. Countless SaaS applications—from CRMs to design tools—are embedding generative AI features.
Enterprises significantly underestimate the security risks posed by embedded AI, which accounts for over 40% of their AI usage and often operates opaquely. Current AI TRiSM (artificial intelligence trust, risk, and security management) solutions and vendor-provided security assurances are largely ineffective for embedded AI, which frequently operates as a form of shadow AI. This leaves organizations vulnerable, relying on outdated audits and inadequate clickwrap agreements that fail to address the complex orchestration and interfaces of these embedded systems.
Here are a few classic examples of embedded AI security challenges:
Each AI integration represents a new, unvetted channel for data to leave the environment. An organization might have 20 different sanctioned SaaS apps, each with its own embedded AI that communicates with a different large language model (LLM) under different data privacy terms. Manually tracking and governing this hidden mesh of AI interactions is a challenging task. Security teams often have no visibility into the data being exchanged in these interactions, creating a massive blind spot.
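One practical way to start shrinking that blind spot is to build and maintain an inventory of embedded AI integrations. The Python sketch below is a minimal illustration, assuming hypothetical app names, providers, and review states, of how such an inventory can surface integrations whose data terms have never been vetted:

```python
from dataclasses import dataclass

# Minimal inventory sketch: each sanctioned SaaS app is mapped to the LLM
# behind its embedded AI feature and the data-handling terms we know about.
# App names, providers, and review states below are hypothetical examples.
@dataclass
class EmbeddedAIIntegration:
    saas_app: str              # sanctioned application
    ai_feature: str            # embedded AI capability
    llm_provider: str          # where prompts and data actually go
    data_terms_reviewed: bool  # has security/legal reviewed the AI addendum?

inventory = [
    EmbeddedAIIntegration("CRM Suite", "email draft assistant", "Vendor-hosted LLM", True),
    EmbeddedAIIntegration("Design Tool", "image generation", "Third-party API", False),
    EmbeddedAIIntegration("Collab Platform", "meeting summaries", "Unknown", False),
]

# Flag integrations that form the "hidden mesh": AI features whose data
# terms were never reviewed, or whose downstream LLM is unknown.
for item in inventory:
    if not item.data_terms_reviewed or item.llm_provider == "Unknown":
        print(f"UNVETTED: {item.saas_app} -> {item.ai_feature} ({item.llm_provider})")
```

Even a simple inventory like this gives security and legal teams a shared starting point for deciding which integrations need deeper review.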
AI prompts may contain sensitive data, including source code, unreleased financial data, customer personally identifiable information (PII), healthcare records, and strategic plans. According to the Zscaler ThreatLabz AI Security Report, 59.9% of AI transactions were blocked, signaling concerns over data security and the uncontrolled use of AI applications.
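A common mitigation is to inspect prompts for obviously sensitive patterns before they ever reach an external model. The sketch below is illustrative only, assuming a handful of hypothetical regex patterns and a placeholder forwarding step; a production DLP engine would rely on far richer detection techniques such as exact data matching, classifiers, and file fingerprinting:

```python
import re

# Illustrative patterns only; not a complete DLP engine.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def submit_to_ai(prompt: str) -> str:
    findings = scan_prompt(prompt)
    if findings:
        # Block (or route to redaction / isolation) instead of sending upstream.
        return f"Blocked: prompt contains {', '.join(findings)}"
    # forward_to_llm(prompt) would go here in a real integration
    return "Allowed"

print(submit_to_ai("Summarize account 123-45-6789 for the QBR"))  # -> Blocked: prompt contains ssn
```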
The risk isn't just with the input; the output from AI models carries its own set of dangers.
Security teams need to regularly sanitize and validate AI inputs and outputs and implement comprehensive prompt monitoring strategies.
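On the output side, a lightweight validation pass can catch some of the most common issues before a response is displayed or acted upon. The sketch below assumes a hypothetical domain allowlist and simple heuristics for echoed credentials and untrusted links; it is a starting point, not a complete control:

```python
import re

ALLOWED_LINK_DOMAINS = {"example.com", "intranet.corp"}  # hypothetical allowlist

def validate_ai_output(text: str) -> list[str]:
    """Flag common risks in model output before it is shown or acted on."""
    issues = []
    # 1. Secrets or credentials echoed back in the response.
    if re.search(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+", text):
        issues.append("possible credential in output")
    # 2. Links to untrusted domains, a common data-exfiltration channel
    #    used by prompt-injection attacks.
    for domain in re.findall(r"https?://([\w.-]+)", text):
        if domain not in ALLOWED_LINK_DOMAINS:
            issues.append(f"link to untrusted domain: {domain}")
    return issues

print(validate_ai_output("Here is the report: https://attacker.example.net/x?d=..."))
```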
AI’s reliance on large datasets introduces compliance risks for organizations bound by regulations such as GDPR, CCPA, and HIPAA. Improper handling of sensitive data within AI models can lead to regulatory violations, fines, and reputational damage. One of the biggest challenges is AI’s opacity—in many cases, organizations lack full visibility into how AI systems process, store, and generate insights from data. This makes it difficult to prove compliance, implement effective governance, or ensure that AI applications don’t inadvertently expose PII.
As regulatory scrutiny on AI increases, businesses must prioritize AI-specific security policies and governance frameworks to mitigate legal and compliance risks.
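One way to start closing that visibility gap is to log every AI interaction with enough context to answer an auditor's questions about who sent what data to which model. The sketch below uses hypothetical field names and hashes the prompt rather than storing it, so the audit trail does not itself become another repository of sensitive data:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_interaction(user: str, app: str, model: str, prompt: str, contains_pii: bool) -> dict:
    """Record who sent what to which model, without storing raw prompt text."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "application": app,            # sanctioned SaaS app or standalone AI tool
        "model": model,                # downstream LLM the data was sent to
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "contains_pii": contains_pii,  # result of the DLP scan, not the data itself
    }
    print(json.dumps(record))          # in practice, ship to a SIEM or audit store
    return record

log_ai_interaction("jdoe", "CRM Suite", "vendor-llm-v2", "Draft a renewal email ...", False)
```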
Effective AI governance remains out of reach for most organizations because they fail to keep policies, visibility, and oversight in step with how quickly AI is being adopted.
AI is moving faster than traditional security and governance policies, and that is exactly where risk grows. To ensure the safe use of AI, organizations should apply the principles of a zero trust architecture to their AI applications, grounding decisions in visibility, context, and intent. This approach helps organizations stay resilient even as AI evolves at lightspeed.
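As a simple illustration of what context-aware, default-deny policy can look like, the sketch below maps the context of an AI transaction (user group, application risk, data sensitivity) to an action. The groups, risk tiers, and actions are hypothetical examples, not a product configuration:

```python
# Minimal policy sketch: map the context of an AI transaction to an action.
POLICY = [
    # (user_group,   app_risk,  data_sensitive) -> action
    (("engineering", "low",     False), "allow"),
    (("engineering", "low",     True),  "allow_with_redaction"),
    (("any",         "medium",  True),  "isolate"),  # e.g., browser isolation, no paste/upload
    (("any",         "high",    None),  "block"),    # unvetted shadow AI apps
]

def decide(user_group: str, app_risk: str, data_sensitive: bool) -> str:
    """Return the first matching action; fall through to default-deny."""
    for (group, risk, sensitive), action in POLICY:
        if (group in (user_group, "any")
                and risk == app_risk
                and sensitive in (data_sensitive, None)):
            return action
    return "block"  # default-deny, consistent with zero trust

print(decide("engineering", "low", True))   # allow_with_redaction
print(decide("finance", "high", False))     # block
```

The specific rules matter less than the pattern: every AI transaction is evaluated in context, sensitive data gets extra protection, and anything unknown is denied by default.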
This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.