AI-Guardian
Enterprise Security · 10 min read

Zero-Trust AI Security Policy: A Complete Template for Engineering Teams

A concrete, copy-paste-ready AI security policy covering approved tools, data classification tiers, technical controls, incident response, and a 4-week rollout playbook.

AI-Guardian Security Team

"Zero-trust" has become a security industry cliché — but its core principle is more relevant to AI than to any previous technology: assume breach, verify explicitly, grant least privilege. Applied to generative AI, this means assuming that any uncontrolled interaction with an AI system will eventually result in sensitive data exposure — and building controls accordingly.

This post provides a concrete, implementable zero-trust AI security policy that any engineering organisation can adopt. It is intentionally opinionated and prescriptive — adjust it to your organisation's risk appetite, but start from here rather than from scratch.

The Zero-Trust AI Principles

Classical zero-trust networking assumes no implicit trust based on network location. For AI, we extend this to:

  1. No implicit trust in any AI vendor — regardless of DPAs and enterprise agreements, assume that data transmitted to AI APIs carries risk and minimise what you send.
  2. No implicit trust in employee judgment — under deadline pressure, developers will paste sensitive content into AI tools. Build technical controls that protect against this without relying on human vigilance.
  3. No implicit trust in network controls — assume employees will access AI tools outside the corporate network. Build endpoint-level controls that work regardless of network path.
  4. Least privilege for AI context — AI systems should receive the minimum data necessary to complete the task. Default to redacting sensitive fields, not exposing them.
  5. Continuous verification — monitor AI usage patterns, audit what's being submitted, and review anomalies regularly.

The Policy Template

This template is structured for a technology company or engineering-heavy enterprise. It assumes a CTO or CISO sponsor and an engineering-specific rollout.

Section 1: Scope and Purpose

AI Security Policy v1.0

Purpose: This policy governs the use of generative AI tools by all employees, contractors, and service providers who access company systems or process company data. Its purpose is to enable productive AI usage while preventing data leakage, regulatory violations, and intellectual property exposure.


Scope: All generative AI tools — including but not limited to ChatGPT, Claude, Gemini, GitHub Copilot, Cursor, Perplexity, and any AI-powered features within approved SaaS tools — used by personnel in the course of their work for the company.

Section 2: Approved Tools and Tiers

Only AI tools that meet all of the following criteria are approved for use with company data:

  • A signed Data Processing Agreement (DPA) is in place between the company and the AI vendor
  • Training data opt-out is confirmed active at the organisation level (not per-user)
  • The tool is accessible from a company-managed account (not personal accounts)
  • The IT Security team has reviewed and approved the vendor's security documentation

Currently approved tools: [List your approved tools here — e.g. ChatGPT Enterprise, Claude for Work, GitHub Copilot Business]

All other AI tools are classified as personal use only and must not be used to process company data, customer data, or any information related to company activities.

Section 3: Data Classification and AI Handling Rules

PROHIBITED — Never submit to any AI tool

  • API keys, passwords, tokens, or any authentication credentials
  • Private cryptographic keys (SSH keys, PEM certificates, signing keys)
  • Customer PII without explicit customer consent for AI processing
  • Medical, legal, or financial records of individuals
  • Material non-public information (MNPI) or merger/acquisition details
  • Security vulnerability reports or penetration testing findings

RESTRICTED — Approved tools only, with DLP controls active

  • Customer data (anonymised or pseudonymised)
  • Source code from core proprietary systems
  • Internal architecture documentation
  • Employee data or HR records
  • Financial data not publicly disclosed

PERMITTED — Any approved AI tool

  • Publicly available information and documentation
  • Non-sensitive internal drafts and communications
  • Generic coding questions not involving proprietary logic
  • Anonymised data with all identifiers removed
  • Company public marketing and sales content
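To make the tiers enforceable rather than aspirational, a policy like this is usually backed by automated pattern checks. The sketch below shows a minimal, assumed set of detectors for the PROHIBITED tier — the regexes are illustrative examples only, not the detectors a production DLP engine would ship:

```python
import re

# Hypothetical patterns approximating the PROHIBITED tier above; a real
# DLP engine uses vendor-maintained, well-tested detectors, not these.
PROHIBITED_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "generic_credential": re.compile(r"(?i)\b(?:api[_-]?key|token|secret)\s*[:=]\s*\S{8,}"),
}

def classify(prompt: str) -> str:
    """Return 'PROHIBITED' if any known secret pattern matches, else 'PERMITTED'."""
    for pattern in PROHIBITED_PATTERNS.values():
        if pattern.search(prompt):
            return "PROHIBITED"
    return "PERMITTED"
```

A two-tier return value is a simplification; a fuller implementation would also recognise the RESTRICTED tier (e.g. via customer-data identifiers) and report which detector fired.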

Section 4: Technical Controls (Non-Optional)

The following technical controls are mandatory on all company-managed devices. They are not optional and employees may not disable them:

  • AI-Guardian extension and desktop agent — intercepts outbound AI prompts and redacts PROHIBITED and RESTRICTED data categories
  • Managed browser profile — all AI web interfaces must be accessed from the company-managed browser profile (not personal profiles)
  • MDM enforcement — devices are enrolled in MDM; unapproved AI extensions are blocked via policy
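Conceptually, the interception control above sits between the user and the AI endpoint, rewriting the prompt before it leaves the device. A minimal sketch of that redact-before-send step, with assumed example rules (not AI-Guardian's actual rule set):

```python
import re

# Hypothetical redaction rules; an endpoint agent like the one described
# above ships far more detectors and covers the RESTRICTED tier as well.
REDACTIONS = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED:aws-key]"),
    (re.compile(r"(?i)(password\s*[:=]\s*)\S+"), r"\1[REDACTED:password]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED:ssn]"),
]

def redact_outbound(prompt: str) -> tuple[str, int]:
    """Apply each rule in order; return the cleaned prompt and the hit count."""
    total = 0
    for pattern, replacement in REDACTIONS:
        prompt, n = pattern.subn(replacement, prompt)
        total += n
    return prompt, total
```

Returning the hit count matters for the continuous-verification principle: non-zero counts feed the monitoring and anomaly-review loop rather than silently disappearing.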

Section 5: Incident Response

If an employee suspects or confirms that sensitive data has been submitted to an AI tool in violation of this policy:

  1. Immediately report to the security team via [incident channel]
  2. Rotate credentials immediately if API keys, passwords, or tokens were involved — do not wait for confirmation of exploitation
  3. Document the incident using the AI Security Incident Report template — include what was submitted, to which tool, and at what time
  4. The security team will assess whether a GDPR data breach notification is required (72-hour window under Article 33)
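The incident record and the 72-hour clock can be captured in a simple structure. The field names below are illustrative assumptions, not the actual report template schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical record mirroring the report template fields above:
# what was submitted, to which tool, and at what time.
@dataclass
class AIIncident:
    submitted_data: str
    tool: str
    occurred_at: datetime
    credentials_involved: bool

    def gdpr_notification_deadline(self) -> datetime:
        # GDPR Article 33: notify the supervisory authority within
        # 72 hours of becoming aware of a personal-data breach.
        return self.occurred_at + timedelta(hours=72)
```

Note the deadline runs from awareness of the breach, so in practice the clock should start from the report timestamp, not the submission timestamp, if those differ.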

Section 6: AI-Generated Code Review Requirements

Code generated by or significantly assisted by AI tools is subject to additional review requirements before merging:

  • All AI-generated code must be reviewed by a human engineer — no direct merge from AI output
  • SAST (static application security testing) must pass on all AI-assisted PRs
  • All new package dependencies suggested by AI must be verified on the official registry before installation
  • AI-generated code must not contain hardcoded credentials, even as placeholders
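The hardcoded-credential rule lends itself to an automated CI gate that scans only the lines a PR adds. The patterns below are a minimal assumption for illustration, not an exhaustive secret-detection suite:

```python
import re

# Illustrative patterns for the hardcoded-credential rule above;
# production gates use dedicated secret scanners with many more rules.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(?:password|passwd|secret|api[_-]?key|token)\s*[:=]\s*['\"][^'\"]+['\"]"),
    re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
]

def added_lines_with_secrets(diff_text: str) -> list[str]:
    """Scan only lines a PR adds (unified-diff '+' lines) for secret patterns."""
    hits = []
    for line in diff_text.splitlines():
        # '+++' marks the file header, not an added line; skip it.
        if line.startswith("+") and not line.startswith("+++"):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append(line)
    return hits
```

Because the quoted-string pattern matches placeholders like `"changeme"` just as readily as real secrets, a gate like this also enforces the "even as placeholders" clause of the rule.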

Rolling Out the Policy: A Practical Playbook

Week 1: Baseline assessment

  • Audit current AI tool usage across the organisation (MDM + network logs)
  • Identify tools in use, tiers, and whether DPAs exist
  • Document which use cases involve each data classification tier

Weeks 2-3: Controls deployment

  • Deploy AI-Guardian to all managed devices
  • Configure MDM policies to block unapproved AI extensions
  • Upgrade AI tool licences to enterprise tiers where required
  • Execute outstanding DPAs

Week 4: Communication and training

  • Publish the policy to all staff with a mandatory acknowledgement
  • Run a 30-minute training session covering real examples (not abstract rules)
  • Set up the incident reporting channel and test it

Ongoing: Governance

  • Monthly review of AI-Guardian redaction event reports
  • Quarterly AI tool inventory audit (new tools, tier changes, vendor policy updates)
  • Annual policy review and update

If you'd like help adapting this template to your organisation's specific context — including regulated industries, multi-jurisdiction operations, or specific tooling configurations — book a 30-minute policy review with our team. We've helped 50+ engineering organisations build AI security programmes from scratch.
