AI Audit Trails: What Your C3PAO Assessor Will Ask For
C3PAO assessors are asking about AI. Here's exactly what audit trail evidence you need to have ready for your CMMC Level 2 assessment.
If you're preparing for a CMMC Level 2 assessment, your C3PAO assessor is going to ask about AI. Not whether you use it — they already know you do. They're going to ask how you govern it, and they'll want evidence.
The Questions Are Coming
Based on assessment trends and guidance from the CMMC Accreditation Body, assessors are increasingly probing AI governance during Level 2 assessments. The questions typically follow this pattern:
- "Describe your organization's use of AI tools." — This is the opening. They want to know if you have visibility into AI usage.
- "How do you control which AI tools can access CUI?" — They're looking for documented access controls specific to AI.
- "Show me your audit logs for AI interactions involving controlled data." — This is where most organizations fail.
- "What is your process when an employee needs to use a new AI tool?" — They want to see a defined change control process.
What Constitutes an AI Audit Trail
An AI audit trail is a documented record of AI tool interactions that captures enough information to reconstruct what happened, who did it, and what data was involved. At minimum, each record should include:
Identity
- Who initiated the interaction (user identity, tied to your IAM system)
- What role the user holds and what data access they're authorized for
Context
- Which tool was used (specific application, version, deployment model)
- When the interaction occurred (timestamp with timezone)
- Where the interaction originated (device, network segment)
Content
- What data was input to the AI tool (or a classification-level indicator if full content capture isn't feasible)
- What output was generated
- What data classification applies to both input and output
Disposition
- Was the interaction within policy? (approved tool, approved use case, approved data classification)
- Were any policy violations detected? (data classification mismatch, unapproved tool, etc.)
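The four field groups above can be sketched as a single record schema. This is an illustrative structure, not a standard — the field names and the example values are assumptions you would adapt to your own IAM identifiers and classification labels:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative schema for one AI audit record, covering the four field
# groups: identity, context, content, and disposition.
@dataclass
class AIAuditRecord:
    # Identity
    user_id: str               # tied to your IAM system
    user_role: str
    # Context
    tool: str                  # specific application, version, deployment model
    timestamp: str             # ISO 8601 with timezone
    source_device: str
    # Content (classification-level indicators if full capture isn't feasible)
    input_classification: str
    output_classification: str
    # Disposition
    within_policy: bool
    violations: list

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)

record = AIAuditRecord(
    user_id="jdoe@example.com",        # hypothetical user
    user_role="engineer",
    tool="acme-ai-chat v2.1 (self-hosted)",  # hypothetical tool
    timestamp=datetime.now(timezone.utc).isoformat(),
    source_device="WKS-0142",
    input_classification="CUI",
    output_classification="CUI",
    within_policy=True,
    violations=[],
)
print(record.to_json())
```

Serializing each record to JSON with stable key ordering makes the trail easy to ship to a SIEM and to diff during an assessment.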
Mapping Audit Trails to CMMC Controls
Your AI audit trail directly supports several CMMC Level 2 practices:
AU.L2-3.3.1 (System Auditing): Create and retain system audit logs and records to the extent needed to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity.
Your AI audit trail is part of your system auditing capability. AI tools that process CUI are systems within your boundary, and their activity must be logged.
AU.L2-3.3.2 (User Accountability): Ensure that the actions of individual system users can be uniquely traced to those users.
Each AI interaction must be attributable to a specific user. Anonymous or shared-account AI usage creates an immediate compliance gap.
AU.L2-3.3.4 (Audit Failure Alerting): Alert in the event of an auditing process failure.
If your AI logging mechanism fails, you need to know about it. This means monitoring the health of your audit trail capture, not just the content.
IR.L2-3.6.1 (Incident Handling): Establish an operational incident-handling capability for organizational systems.
AI-related incidents (CUI spillage through an unapproved tool, for example) must be detectable through your audit trail and handled through your incident response process.
Implementation Approaches
API-Level Logging
If your team uses AI through APIs (e.g., deploying a sanctioned AI service within your infrastructure), you can capture interactions at the API layer. This gives you the most complete audit trail with full content capture.
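At the API layer, the pattern is to wrap every model call so an audit record is emitted alongside the response. A minimal sketch, where `call_model` is a stand-in for your sanctioned service's real client and the tool name is hypothetical:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def call_model(prompt: str) -> str:
    """Stand-in for your sanctioned AI service's API client."""
    return f"response to: {prompt}"

def audited_call(user_id: str, prompt: str, classification: str) -> str:
    """Wrap the API call so every interaction produces an audit record."""
    output = call_model(prompt)
    audit_log.info(json.dumps({
        "user_id": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": "sanctioned-ai-service",   # illustrative name
        "input_classification": classification,
        "prompt": prompt,                  # full content capture at the API layer
        "output": output,
    }))
    return output
```

Because the wrapper sits in your own infrastructure, it can capture full prompt and response content — the property that makes API-level logging the most complete approach.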
Gateway/Proxy Logging
For web-based AI tools, a secure web gateway can log which AI services are accessed and by whom. Content capture depends on your gateway's capabilities and the tool's encryption model.
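Even when a gateway cannot capture content, its access logs still answer the "which services, by whom" question. A sketch of mining gateway logs for AI service access — the two-field log line format (user, then URL) and the domain list are illustrative assumptions to adapt to your gateway's export format:

```python
# Illustrative watchlist of AI service hostnames.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def ai_accesses(log_lines):
    """Return (user, host) pairs for requests to known AI services."""
    hits = []
    for line in log_lines:
        user, url = line.split()[:2]
        host = url.split("/")[2] if "://" in url else url
        if host in AI_DOMAINS:
            hits.append((user, host))
    return hits

sample = [
    "jdoe https://chat.openai.com/c/abc123",
    "asmith https://intranet.example.com/wiki",
]
hits = ai_accesses(sample)
```

A report like this, reviewed regularly, also feeds your change control process: any hit outside your approved-tool list is a policy question to resolve.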
Application-Level Logging
Some enterprise AI platforms include built-in audit logging. If you're using an enterprise AI deployment, leverage the platform's native logging and export it to your SIEM.
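Exported platform events typically reach a SIEM over an HTTP collector endpoint. A sketch of that forwarding step, assuming newline-delimited JSON (the format Splunk's HTTP Event Collector and similar ingest endpoints accept) — the URL and token are placeholders:

```python
import json
from urllib import request

def to_ndjson(events: list[dict]) -> bytes:
    """Serialize a batch of audit events as newline-delimited JSON."""
    return "\n".join(json.dumps(e, sort_keys=True) for e in events).encode()

def forward_to_siem(events: list[dict],
                    url: str = "https://siem.example.com/collector",  # placeholder
                    token: str = "REPLACE_ME") -> object:             # placeholder
    """POST a batch of events to a SIEM HTTP collector endpoint."""
    req = request.Request(
        url,
        data=to_ndjson(events),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/x-ndjson"},
    )
    return request.urlopen(req)  # raises on network or HTTP failure
```

The failure behavior matters here: a forwarding error should surface through your audit-failure alerting (AU.L2-3.3.4), not disappear silently.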
Policy-Based Self-Reporting
For organizations early in their governance journey, a structured self-reporting process — where employees log their AI usage through a simple form — provides basic audit trail data while more sophisticated controls are being deployed.
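Even a self-reporting process benefits from a consistent record format. A sketch of appending form submissions to a CSV log — the column names are illustrative, and the point is simply a reviewable, uniform record until automated capture exists:

```python
import csv
import io
from datetime import datetime, timezone

# Illustrative columns for a self-reported AI usage log.
FIELDS = ["timestamp", "user_id", "tool", "purpose", "data_classification"]

def append_report(buffer: io.StringIO, report: dict) -> None:
    """Append one self-reported usage record, writing the header once."""
    writer = csv.DictWriter(buffer, fieldnames=FIELDS)
    if buffer.tell() == 0:          # first record: emit the header row
        writer.writeheader()
    writer.writerow(report)

buf = io.StringIO()
append_report(buf, {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user_id": "jdoe",              # hypothetical user
    "tool": "ChatGPT (web)",
    "purpose": "draft internal email",
    "data_classification": "Public",
})
```

In production this would write to a controlled file or database rather than an in-memory buffer; the in-memory version keeps the sketch self-contained.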
What "Good" Looks Like to an Assessor
When a C3PAO assessor reviews your AI audit trail, they're looking for:
- Completeness — Does the trail cover all AI interactions involving controlled data, or are there gaps?
- Integrity — Are the logs tamper-resistant? Can someone modify or delete records?
- Retention — Are logs retained for the required period? (Align with your records retention policy and any contract-specific requirements.)
- Accessibility — Can you produce relevant records in response to an assessment request within a reasonable timeframe?
- Integration — Are AI audit logs integrated with your broader security monitoring and incident response processes?
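The integrity expectation above — can someone modify or delete records? — is commonly addressed with hash chaining: each record stores the hash of the previous one, so any in-place modification breaks the chain. A minimal sketch of the technique; it complements, rather than replaces, write-once storage and SIEM-level protections:

```python
import hashlib
import json

def chain(records: list[dict]) -> list[dict]:
    """Link records so each one commits to the hash of its predecessor."""
    prev = "0" * 64  # genesis value for the first record
    out = []
    for rec in records:
        entry = dict(rec, prev_hash=prev)
        prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        out.append(entry)
    return out

def verify(chained: list[dict]) -> bool:
    """Recompute the chain; any tampered record breaks a later link."""
    prev = "0" * 64
    for entry in chained:
        if entry["prev_hash"] != prev:
            return False
        prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return True

logs = chain([{"user": "jdoe", "tool": "ai-chat"},
              {"user": "asmith", "tool": "ai-chat"}])
assert verify(logs)
logs[0]["user"] = "mallory"          # tamper with an earlier record
assert not verify(logs)
```

Note that the newest record's hash should also be anchored externally (for example, shipped to the SIEM), since a chain alone cannot detect tampering with the final, unreferenced record.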
Common Gaps We See
No logging at all. The most common gap. AI usage happens entirely outside logged systems.
Logging without identity. Logs show AI tool access but can't attribute interactions to specific users.
Logging without content classification. Logs capture that an interaction occurred but don't indicate whether controlled data was involved.
Logging without review. Logs exist but nobody reviews them. An audit trail that's never analyzed doesn't support your monitoring and incident detection obligations.
Start Now, Not Later
If your assessment is scheduled and you don't have AI audit trails in place, you're behind. The good news is that basic audit trail capability can be deployed quickly — often within the same timeline as your broader governance framework.
The worst time to discover you don't have AI audit evidence is when the assessor asks for it. The best time to start building it was last quarter. The second-best time is now.
Need help with AI governance?
Book a 30-minute call. We'll tell you exactly where your risk is and how to fix it.
Book a Call