I’ve been contemplating writing this post for a while now, but struggled with the framing. Throughout 2025 I started moving from “talking about AI security” to helping advise organizations directly on active projects. Yep, I was surfing the hype wave, but it beats drowning.
Thus when I jumped into my morning news feed and saw that my friend Nick Selby had written an article for Inc. entitled “How FOMO Is Turning AI Into a Cybersecurity Nightmare,” I knew I finally had my framing. And that I wasn’t alone. (I strongly recommend reading the entire article.)
Just like (I suspect) all of you, I started seeing organizations adopt AI more rapidly than almost any other major technology of the past 25 years. In some cases it was due to vendors shoving AI down their throats, but I also noticed a real fear of missing out. Enterprises were kicking off projects without really defining their goals in concrete terms.
Heck, in one case a member told me that outside consultants had claimed AI would let them reduce a particular security team by 75%. I can’t even tell you which function that team performed, but there was literally not a single use case where generative AI, or any AI, could help it. These were nuts-and-bolts operational functions that had already been automated as much as possible.
Like Nick, I’ve talked with a wide range of organizations with AI projects. Also like Nick, I am far from an AI skeptic. I think that generative AI provides a wide range of benefits, when used appropriately.
The problem I keep running into is organizations jumping on AI without defining their desired business outcome with any specificity or meaning. As in my example above, the reasons I hear are to increase efficiency, to reduce headcount, or because their vendor shoved it down their throats.
I find that asking the business, “Why are we using AI here? What is our desired business outcome? How do we measure success? How do we measure failure?” can be helpful.
Now here’s the trick: The next question is, “How?” How will this specific use of AI enable that desired outcome?
Believe it or not, this has direct implications for security. First, I find that when you push into specifics about purpose and results, it forces a change of mindset and a deeper discussion of architecture, human interaction, data access, and the other elements we security types need to understand to measure risk. It also provides an opportunity to communicate those risks.
“We want a chatbot to reduce the number of low-level customer service interactions that require a call with a representative.”
“Okay, great! What kind of data is needed to achieve that? Is that data allowed to be used with this particular AI service/model/technology under our current compliance requirements? Is there any level of personalization required? How should we ensure that a given customer sees only their own data? What about AI safety? What guardrails do you want in place to manage harm, bias, and inappropriate responses? What kinds of hallucinations involving that data are you most concerned about? Where can we insert hallucination and information-disclosure guardrails? What actions can the AI agent take autonomously? What reports do you want tracking anomalous activity? With these limits in place, will the system still effectively reduce the number of required human interactions and achieve your desired benefit?”
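To make that concrete, here is a minimal sketch in Python of what two of those controls could look like: scoping data retrieval to the authenticated customer before anything reaches the model, and escalating to a human instead of guessing when a guardrail trips or the grounding data is missing. Everything in it is hypothetical; the record store, the blocked-topic list, and the call_model stub are stand-ins for whatever database, safety service, and LLM API a real project would use.

```python
from dataclasses import dataclass

# Stand-in for a real record store with row-level security.
SUPPORT_RECORDS = {
    "cust-001": ["Order #1138 shipped 2025-03-02", "Refund issued 2025-03-10"],
}

# Illustrative safety guardrail: topics the bot should never answer.
BLOCKED_TOPICS = ("legal advice", "medical advice")


@dataclass
class GuardedAnswer:
    text: str
    escalate: bool  # True = hand off to a human representative


def fetch_context(authenticated_customer_id: str) -> list[str]:
    # Tenant scoping happens here, before anything reaches the model, so a
    # prompt like "show me another customer's orders" has nothing to leak.
    return SUPPORT_RECORDS.get(authenticated_customer_id, [])


def call_model(question: str, context: list[str]) -> str:
    # Placeholder for whatever LLM API the project actually uses.
    return f"Based on your {len(context)} account records: ..."


def answer(question: str, customer_id: str) -> GuardedAnswer:
    # Input guardrail: refuse out-of-scope topics before the model sees them.
    lowered = question.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return GuardedAnswer("I can't help with that topic.", escalate=True)

    context = fetch_context(customer_id)
    if not context:
        # No grounding data means any answer is a guess (hallucination risk),
        # so escalate rather than let the model improvise.
        return GuardedAnswer("Let me connect you with a representative.", escalate=True)

    return GuardedAnswer(call_model(question, context), escalate=False)


if __name__ == "__main__":
    print(answer("Where is my order?", "cust-001"))   # answered from scoped data
    print(answer("I need legal advice", "cust-001"))  # blocked and escalated
```

The design point worth noticing: the tenant scoping and the refusal logic live outside the model, where they can be tested, logged, and audited, rather than being entrusted to the prompt. Those are exactly the pieces the questions above are trying to surface.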
Now I tend to have a snarky writing style, but these questions shouldn’t be confrontational. We are the partner there to help guide the organization towards safety. In some cases this may help them realize that they won’t really see any benefits in this situation, or the benefits won’t outweigh the risks. You don’t know until you ask the question.
It’s about opening up a conversation so that our security controls align with the desired outcomes, and so that we can effectively communicate risks. And sometimes that risk might be, “You can’t tell me what you want and how it will work so… it probably won’t, and maybe we shouldn’t give it access to the customer database.”