SOC 2 readiness for AI usage
SOC 2 auditors increasingly ask a new question: “How do you control what your employees paste into AI tools?” AI-Guardian is the control that produces the evidence.
Why AI usage is suddenly in scope
The AICPA's Trust Services Criteria were written before generative AI, but their principles apply cleanly: if customer data, proprietary source code, or regulated records can flow from an employee's browser into a third-party model, that pathway must be governed. Most organisations discover during their first renewal that their existing DLP does not cover this browser-to-model pathway.
Mapping to the Trust Services Criteria
- CC6.1 / CC6.2 — Logical access controls. Enterprise accounts enforce SSO and role-based permissions on the Admin Dashboard. Audit-log access is separable from product access, so a SOC analyst can review events without seeing billing or configuration surfaces.
- CC7.2 — System monitoring. Every detection event is recorded with platform, category, severity, framework tag, and timestamp. Alerts can be exported to your SIEM or pulled on a schedule via API.
- CC7.3 — Incident response. When the extension blocks a high-severity event, it leaves a fingerprint that your incident responders can correlate with other signals (IDS, CASB, EDR) during a potential exposure.
- C1 — Confidentiality. Prompt content never leaves the user's device; the sensitive strings a traditional logger would have captured never exist outside the browser.
- P6 — Privacy disclosure. Transparent blocking modals, plain-language Privacy Policy, and a public DPA align with the privacy criterion's requirement for clear communication to data subjects.
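To make the CC7.2 mapping concrete, here is a minimal sketch of what a detection-event record with those five fields might look like when serialised as a JSON line for SIEM ingestion. The field names and values are illustrative assumptions, not AI-Guardian's actual export schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DetectionEvent:
    # Hypothetical record with the five fields named above:
    # platform, category, severity, framework tag, timestamp.
    platform: str    # e.g. "chatgpt" (assumed value)
    category: str    # e.g. "source_code" (assumed value)
    severity: str    # e.g. "low" | "medium" | "high"
    framework: str   # e.g. "SOC2-CC7.2" (assumed tag format)
    timestamp: str   # ISO 8601, UTC

def to_siem_line(event: DetectionEvent) -> str:
    """Serialise one event as a single JSON line, the format
    most SIEMs accept for file- or API-based ingestion."""
    return json.dumps(asdict(event), sort_keys=True)

event = DetectionEvent(
    platform="chatgpt",
    category="source_code",
    severity="high",
    framework="SOC2-CC7.2",
    timestamp=datetime(2025, 1, 15, 9, 30, tzinfo=timezone.utc).isoformat(),
)
print(to_siem_line(event))
```

JSON lines keep each event independently parseable, which suits both scheduled API pulls and streaming forwarders.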
Evidence you can hand to an auditor
AI-Guardian generates three artefacts that auditors regularly ask for: a policy acknowledgement record (each user has seen the blocking modal at least once), a detection-event log (tamper-evident, exportable), and an admin-action log (who changed which rule, when). These three together cover the common questions under “describe how you prevent unauthorised disclosure of customer data through generative AI.”
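"Tamper-evident" in a log context usually means a hash chain: each record's digest covers its content plus the previous record's digest, so editing or deleting any entry invalidates everything after it. The sketch below is a generic illustration of that technique, not AI-Guardian's actual log format:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def chain_entries(entries):
    """Augment each log entry with a hash chain. Any edit or
    deletion breaks every subsequent hash."""
    chained, prev = [], GENESIS
    for entry in entries:
        payload = json.dumps(entry, sort_keys=True) + prev
        digest = hashlib.sha256(payload.encode()).hexdigest()
        chained.append({**entry, "prev": prev, "hash": digest})
        prev = digest
    return chained

def verify_chain(chained):
    """Recompute every digest; True only if the chain is intact."""
    prev = GENESIS
    for rec in chained:
        entry = {k: v for k, v in rec.items() if k not in ("prev", "hash")}
        payload = json.dumps(entry, sort_keys=True) + prev
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

# Illustrative admin-action entries (hypothetical field names).
log = chain_entries([
    {"actor": "admin@example.com", "action": "rule_updated", "ts": "2025-01-15T09:30:00Z"},
    {"actor": "admin@example.com", "action": "rule_disabled", "ts": "2025-01-16T11:00:00Z"},
])
print(verify_chain(log))  # True for an untampered log
```

An auditor (or your own compliance team) can re-run the verification step over an exported log to confirm no entries were altered after the fact.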
Our own roadmap
AI-Guardian is building toward a SOC 2 Type II report of its own. Until that report is published, we share our internal security controls document and bridge letters with Enterprise customers under NDA — ask during onboarding.
This page is informational and is not legal or assurance advice. Please work with qualified counsel or your CPA on your specific obligations.