Securing AI in CMMC Level 2 Environments: A Strategic Guide for CISOs and Cloud Security Engineers

Leveraging generative AI and machine learning can offer huge productivity gains – even for organizations handling sensitive Controlled Unclassified Information (CUI) and Federal Contract Information (FCI).

However, embedding AI into processes in a CMMC Level 2 environment introduces new risks that must be managed carefully to remain compliant. CMMC 2.0 Level 2 requires full implementation of NIST SP 800-171 controls to protect CUI/FCI, and it emphasizes strict safeguards when using cloud services.

This article provides a high-level strategy to secure the use of AI in corporate settings under CMMC L2, aligning with AI security frameworks and proven cloud security best practices. We’ll explore how to govern AI risk, enforce compliance requirements, and protect sensitive data – all while enabling innovation responsibly.

CMMC Level 2 compliance is non-negotiable for Defense Industrial Base companies handling CUI. It mandates implementing all 110 NIST SP 800-171 security controls across areas such as access control, audit logging, and incident response. While the CMMC requirements don’t explicitly mention “AI,” any AI platform used to process, store, or transmit CUI falls under the cloud usage rules, specifically 32 CFR Part 170, which requires FedRAMP-authorized (Moderate baseline) cloud services for CUI. In practice, an AI SaaS or cloud service is treated as a Cloud Service Provider (CSP), so it must be FedRAMP Moderate (or equivalent) if it touches CUI. This puts most consumer-grade AI tools off-limits for CUI, since uploading CUI to a non-FedRAMP public AI service is a direct compliance violation, breaching DFARS contract clauses and, by extension, CMMC itself.

Meanwhile, organizations are eager to harness AI for efficiency – from code generation and document summarization to predictive analytics. This creates tension between innovation and strict compliance, with key risks including data spillage into unapproved AI services, reliance on tools outside the authorized boundary, and unvetted AI outputs making their way into decisions or deliverables.

The mandate is clear: to use AI in a CMMC L2 environment, security and compliance must be baked in from the start. In practice, this means only using AI solutions within a secure boundary, thoroughly assessing AI-related risks, and extending your compliance controls to cover AI activities.

Companies will need to balance AI adoption with a “trust but verify” stance – ensuring every AI use case is evaluated for risk and aligned with CMMC requirements.

Effective management of AI risk is essential for organizations seeking to deploy AI solutions while maintaining strict adherence to CMMC Level 2 requirements. By using industry-recognized frameworks from bodies like the Cloud Security Alliance (CSA) and NIST, organizations can extend their compliance programs to cover the unique risks introduced by AI workloads, ensuring that CUI and FCI remain protected throughout the AI lifecycle.

Adopting AI-specific risk management frameworks such as CSA’s AI Controls Matrix (AICM) and Model Risk Management Framework (MRMF), alongside NIST’s AI RMF, enables organizations to systematically secure AI workloads within CMMC Level 2 environments. These frameworks help translate high-level CMMC controls into concrete, actionable measures for AI, ensuring that sensitive data is protected, risks are documented and mitigated, and compliance is demonstrable across both traditional IT and emerging AI platforms.

One of the most critical decisions is where and how your AI systems run. In CMMC L2 settings, environment architecture must ensure that AI tools never become a conduit for data spillage and that they meet the same security bar as the rest of your IT.

1. Keep AI Within the Authorized Boundary: All CUI processing must stay inside your accredited IT environment, using FedRAMP Moderate (or higher) infrastructure. Use government-only cloud AI services (e.g., Azure OpenAI in Azure Government, Amazon Bedrock in AWS GovCloud), which are designed for sensitive data and meet compliance requirements. If using in-house or private cloud AI, apply the same security controls as other CUI systems—strict segmentation, access controls, encryption, and no outbound internet connections unless approved.
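
As a minimal illustration of staying inside that boundary, the sketch below points an application at a government-cloud Azure OpenAI resource instead of the commercial endpoint. The endpoint, deployment name, environment variables, and API version are placeholders for this example; your accredited environment, SDK version, and authentication method will likely differ.

```python
import os
from openai import AzureOpenAI  # openai>=1.x SDK

# Hypothetical Azure Government endpoint and deployment name; confirm the exact
# values and the authorization path for your own accredited environment.
GOV_ENDPOINT = os.environ["AOAI_GOV_ENDPOINT"]            # e.g., a *.azure.us resource URL
DEPLOYMENT = os.environ.get("AOAI_DEPLOYMENT", "cui-approved-model")

client = AzureOpenAI(
    azure_endpoint=GOV_ENDPOINT,
    api_key=os.environ["AOAI_API_KEY"],                   # store the key in an approved vault
    api_version="2024-02-15-preview",                     # pin the version documented in your SSP
)

response = client.chat.completions.create(
    model=DEPLOYMENT,
    messages=[{"role": "user", "content": "Summarize this engineering change notice."}],
)
print(response.choices[0].message.content)
```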

2. Avoid Unapproved AI Services: Do not use consumer or public AI tools (e.g., ChatGPT, Google Bard) for any CUI or FCI. Only use FedRAMP-authorized AI platforms. If experimentation is needed, use sanitized data in separate sandbox environments—never sensitive data. Enforce this rule with both policy and technical controls.
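
One illustrative technical control is an allowlist check in an internal AI gateway that refuses to forward traffic to anything other than the FedRAMP-authorized endpoints you have actually approved. A minimal sketch follows; the hostnames are hypothetical placeholders, not recommendations.

```python
from urllib.parse import urlparse

# Placeholder allowlist; replace with the endpoints documented in your SSP.
APPROVED_AI_HOSTS = {
    "my-resource.openai.azure.us",           # hypothetical Azure Government resource
    "bedrock.us-gov-west-1.amazonaws.com",   # hypothetical GovCloud endpoint
}

def is_approved_ai_endpoint(url: str) -> bool:
    """Return True only if the target host is on the approved list."""
    host = (urlparse(url).hostname or "").lower()
    return host in APPROVED_AI_HOSTS

def forward_to_ai(url: str, payload: dict) -> None:
    if not is_approved_ai_endpoint(url):
        # Block and surface the attempt instead of silently dropping it.
        raise PermissionError(f"Blocked request to unapproved AI endpoint: {url}")
    ...  # hand the payload off to the approved client here
```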

3. Segment and Control Access: Isolate AI workloads that handle sensitive data. Implement network segmentation, restrict access to trained personnel, require multi-factor authentication, and use RBAC. Ensure clear separation between sensitive and non-sensitive AI use—preferably with different applications or endpoints and automated technical safeguards to prevent accidental CUI leakage to public AI services.
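
As a simple sketch of enforcing that separation in application code, the example below rejects AI calls from users who lack an authorized role. The role name and user object are assumptions made for illustration; in practice the check would be backed by your identity provider, with MFA enforced upstream.

```python
from functools import wraps

# Hypothetical role name; map it to a real group in your identity provider.
CUI_AI_ROLE = "ai-cui-authorized"

def require_role(role: str):
    """Decorator that rejects calls from users lacking the required role."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if role not in getattr(user, "roles", []):
                raise PermissionError(f"User lacks role '{role}' required for this AI workload")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role(CUI_AI_ROLE)
def summarize_cui_document(user, document_text: str) -> str:
    # Only reached by authorized users; MFA is enforced upstream by the identity provider.
    ...  # hand the text to the approved AI client here
```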

4. Secure Data Handling and Storage: Encrypt all data in transit and at rest. Control access to training data, outputs, and logs—treat logs as sensitive since they may hold CUI. Use secure log management and monitoring, and turn off any AI telemetry or cloud connections that are not routed within the compliant boundary.
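
Where the AI pipeline caches prompts, outputs, or logs on disk, application-level encryption can supplement platform encryption at rest. The snippet below is a minimal sketch using the widely used cryptography package; key management is intentionally omitted and should be handled by an approved KMS or secrets vault.

```python
from cryptography.fernet import Fernet

# For illustration only: in production, retrieve the key from an approved KMS
# or vault rather than generating or hard-coding it in the application.
key = Fernet.generate_key()
fernet = Fernet(key)

def store_ai_output(path: str, output_text: str) -> None:
    """Encrypt an AI-generated artifact before it touches disk."""
    with open(path, "wb") as f:
        f.write(fernet.encrypt(output_text.encode("utf-8")))

def load_ai_output(path: str) -> str:
    """Decrypt a previously stored AI artifact."""
    with open(path, "rb") as f:
        return fernet.decrypt(f.read()).decode("utf-8")
```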

5. Manage Supply Chain and Third-Party Risk: Ensure all vendors and AI providers meet NIST 800-171 or equivalent standards. Verify model sources and integrity, keep AI dependencies updated, and coordinate compliance with all involved partners and suppliers. Only share CUI-laden AI outputs with compliant partners.
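
Model integrity can be checked mechanically as part of that supply chain hygiene. The sketch below hashes a model file and compares it to a digest recorded when the model was first vetted; the file path and expected digest are placeholders.

```python
import hashlib
from pathlib import Path

# Placeholder digest recorded when the model was originally vetted and approved.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_model(path: str, expected_sha256: str = EXPECTED_SHA256) -> None:
    """Refuse to load a model whose hash no longer matches the approved baseline."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"Model integrity check failed for {path}: {digest}")

verify_model("models/approved-model.bin")  # hypothetical path; raises if the hash has drifted
```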

By building AI within a secure, compliant architecture and enforcing strict controls, you minimize the risk of data leaks and maintain CMMC Level 2 compliance by default.

Technical controls alone won’t guarantee compliance; strong governance and user awareness are equally important. A misconfigured model or an uninformed employee could still cause an incident. Thus, organizations should update their cybersecurity governance to explicitly cover AI.

1. Update Policies – “AI Acceptable Use”: Develop an Artificial Intelligence Acceptable Use Policy that clearly delineates how employees may and may not use AI tools, including which platforms are approved, what data may and may not be entered into them, and how AI-generated output must be reviewed and handled.

Have all relevant staff read and sign this policy (or a formal acknowledgement). This sets a baseline of expectations and can be shown to auditors as evidence of governance.

2. Security Awareness and Training: Extend your security training program to include awareness of AI risks, covering what data must never be entered into AI tools, which platforms are approved for which use cases, and how to recognize and report AI-related incidents.

3. Document AI in the System Security Plan (SSP): The SSP is a living document in CMMC that details your system boundaries, components, and how controls are implemented. If you integrate an AI system (say an AI SaaS or a new AI server) into the environment, update the SSP to include it. Document the AI components, where they sit in the system boundary, the data they process, and how each relevant control is implemented for them.

A well-documented SSP showing AI usage proves to assessors that you’ve proactively included AI in your security program, not treating it as a loophole. It also helps internal alignment – everyone (IT, security, compliance, procurement) will be on the same page about approved AI activities.

4. Map and Implement Controls for AI: Extend your existing security controls to cover AI workflows. Most of the CMMC (NIST 800-171) families stay applicable – you must interpret them in the AI context. Here’s a brief mapping of key control areas and how to apply them to AI:

Access Control (AC)

Restrict AI system use to authorized users only. Enforce RBAC and MFA for any cloud AI portals or internal AI apps. Disable or tightly control guest/anonymous access to AI interfaces.

Audit and Accountability (AU)

Log all AI interactions and administrator actions. This includes recording what prompts were submitted to the AI and what outputs occurred. Retain these logs for incident investigations and periodic review. Use alerts to flag if, say, large volumes of data are input to the AI or if outputs contain certain keywords (potential data leakage).
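
For an internally built AI application, one lightweight way to capture this is to wrap every AI call with structured audit logging plus simple volume and keyword flags, as sketched below. The threshold, keywords, and log destination are illustrative and should follow your own audit policy.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

MAX_PROMPT_CHARS = 20_000                  # illustrative threshold for a bulk-paste alert
FLAG_TERMS = ("CUI", "ITAR", "NOFORN")     # illustrative keywords; tune to your data

def audited_ai_call(user_id: str, prompt: str, ai_func) -> str:
    """Call the AI and record who asked, what was asked, and what came back."""
    response = ai_func(prompt)
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        "flags": [t for t in FLAG_TERMS if t in prompt or t in response],
        "oversize_prompt": len(prompt) > MAX_PROMPT_CHARS,
    }
    # Audit records may themselves describe or contain CUI; store them accordingly.
    audit_log.info(json.dumps(record))
    return response
```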

Configuration Management (CM) and System & Communications Protection (SC)

Secure the AI system configuration to an approved baseline. For cloud AI, review all configurable settings (e.g., data retention OFF, data encryption ON, telemetry OFF). For self-hosted AI, ensure the underlying OS, libraries, and model files are managed under change control. Network-wise, isolate the AI – no unapproved external connectivity (e.g. prevent it from calling external APIs). Encrypt data in transit between AI components and clients.
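
A baseline is easiest to enforce when it is expressed as data and checked automatically. The sketch below compares an AI service’s effective settings against an approved baseline; the setting names are invented for illustration and would map to whatever configuration keys your platform actually exposes.

```python
# Hypothetical approved baseline; the keys are illustrative, not a real API.
APPROVED_BASELINE = {
    "data_retention_enabled": False,
    "encryption_at_rest": True,
    "telemetry_enabled": False,
    "external_plugins_allowed": False,
}

def check_baseline(effective_settings: dict) -> list[str]:
    """Return a list of settings that drift from the approved baseline."""
    drift = []
    for key, approved in APPROVED_BASELINE.items():
        actual = effective_settings.get(key)
        if actual != approved:
            drift.append(f"{key}: expected {approved}, found {actual}")
    return drift

for finding in check_baseline({"data_retention_enabled": True, "encryption_at_rest": True}):
    print("DRIFT:", finding)
```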

Media Protection (MP)

Treat AI outputs as sensitive data. If an AI generates a document or recommendation that contains CUI, that output must be marked and handled per data handling procedures (e.g., stored only on approved drives, not emailed externally without encryption). Implement measures to automatically tag or classify AI-generated files if they likely include input data. Also, if the AI allows file uploads/downloads, ensure that those files are stored securely.
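
Where outputs can be screened programmatically, a simple marking step helps downstream handling. The sketch below prepends a CUI banner when an output trips basic indicators; the patterns and banner text are placeholders, and a real deployment would rely on your DLP tooling and your organization’s official marking guidance.

```python
import re

# Illustrative indicators only; real CUI detection should use your DLP solution.
CUI_INDICATORS = (r"\bCUI\b", r"\bCONTROLLED\b", r"\bITAR\b")

CUI_BANNER = "CUI -- handle per organizational marking and handling procedures\n\n"

def mark_if_cui(generated_text: str) -> str:
    """Prepend a CUI banner if the AI output appears to contain controlled content."""
    if any(re.search(p, generated_text, re.IGNORECASE) for p in CUI_INDICATORS):
        return CUI_BANNER + generated_text
    return generated_text
```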

System and Information Integrity (SI)

Maintain the integrity of AI models and prompts. Use mechanisms to filter or validate inputs to the AI to block malicious payloads or obvious CUI leaks (for instance, some AI platforms let you define “forbidden” patterns, or you can use DLP solutions to intercept sensitive content). Monitor the AI’s outputs for anomalies that could indicate tampering (e.g., sudden output of gibberish might hint at a corrupted model). Regularly update models with security patches or improved versions to fix vulnerabilities.
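
A basic pre-submission filter illustrates the idea. The patterns below are placeholders only; production deployments would typically pair a commercial DLP engine with human review of blocked prompts.

```python
import re

# Illustrative patterns; extend or replace with your DLP vendor's detectors.
BLOCKED_PATTERNS = {
    "cui_marking": re.compile(r"\bCUI\b", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "contract_number": re.compile(r"\bF[A-Z0-9]{5}-\d{2}-[A-Z]-\d{4}\b"),  # hypothetical format
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any patterns the prompt matches; empty means it may proceed."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

hits = screen_prompt("Summarize deliverables for contract FA8675-23-C-0001")
if hits:
    print("Prompt blocked, matched:", ", ".join(hits))
```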

Incident Response (IR)

Update incident response plans to include AI-specific scenarios. For example: how will you respond if an employee accidentally leaks CUI to an external AI? (Plan to notify the proper channels, possibly invoke DFARS reporting if required, and work with the AI provider to purge the data.) What if the AI produces a significantly incorrect output that is acted upon? (Treat it as a quality incident or near-miss and analyze the root cause in the model or data.) Prepare playbooks for AI data spillage, model misuse, or model compromise events. Include the AI team in tabletop exercises.

By systematically going through each control family and considering its application to AI, you integrate AI into your overall security control environment. The key is to show that no control gap exists. If anything, AI may require augmenting existing controls (such as DLP for prompts) to address its unique risks.

5. Continuous Monitoring and Enforcement: Once controls and policies are in place, continuously monitor for compliance by reviewing AI audit logs, watching for use of unapproved AI services, and periodically reassessing AI configurations and risks against your approved baseline.
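
As one small example of such a check, the sketch below scans proxy or DNS logs for traffic to well-known public AI services that are not on your approved list. The log format and both domain lists are illustrative assumptions.

```python
# Placeholder domain lists; align them with your approved endpoints and your
# web proxy's actual log format.
APPROVED_AI_DOMAINS = {"my-resource.openai.azure.us"}
KNOWN_PUBLIC_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def find_unapproved_ai_traffic(log_lines: list[str]) -> list[str]:
    """Return log lines that mention a public AI domain not on the approved list."""
    suspect = KNOWN_PUBLIC_AI_DOMAINS - APPROVED_AI_DOMAINS
    return [line for line in log_lines if any(domain in line for domain in suspect)]

with open("proxy.log") as f:                               # hypothetical log file
    for hit in find_unapproved_ai_traffic(f.readlines()):
        print("Review:", hit.strip())
```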

6. Plan for Future Regulations: The regulatory landscape for AI is still taking shape (EU AI Act, etc.), but the spirit of CMMC – protecting sensitive data – will remain constant. By implementing the above practices, you’re not only compliant with today’s rules but building a foundation to meet future AI-specific regulations. Keep an eye on CMMC updates too – while current CMMC guidance doesn’t detail AI, future revisions might incorporate more AI-specific language. Being proactive now is a competitive advantage, as trust is becoming a key differentiator for organizations using AI.

Adopting generative AI and ML in a CMMC Level 2 environment is possible – but it requires a strategic, security-first approach. For CISOs and Cloud Security Engineers, the mission is to embed trust and compliance into every facet of AI integration. By leveraging frameworks like CSA’s AI Controls Matrix and Model Risk Management guidance, you gain a blueprint to address AI risks holistically – from technical vulnerabilities to governance and ethics. Coupling that with CMMC’s rigorous controls ensures that the same protections applied to your traditional systems also shield your AI systems from compromise or misuse.

By doing so, you protect CUI/FCI while still reaping AI’s benefits. The Cloud Security Alliance notes that trust is the foundation for responsible AI – in a CMMC environment, trust comes from knowing your AI is secure and compliant by design.

Organizations that achieve this will be able to confidently innovate with AI, maintaining their competitive edge without jeopardizing their obligations. In the high-stakes world of defense contracting, the ability to use cutting-edge AI securely and meet CMMC requirements will distinguish the leaders from the laggards.

Finally, keep engaging with industry efforts as they evolve. They will help you stay aligned with best practices and demonstrate to stakeholders – from DoD customers to board members – that your AI implementations are not only smart, but also safe, compliant, and worthy of trust.
