Enterprise Governance & Compliance
Governance is not a barrier to AI adoption -- it is the infrastructure that makes sustainable, scalable adoption possible. This module covers the policies, processes, and controls every enterprise Claude deployment needs.
AI Acceptable Use Policy
Define what Claude may and may not be used for within your organisation. Cover: permitted data types, prohibited use cases, human review requirements, and consequences for misuse. Publish the policy and train all employees before launch. Review it annually, or after any significant incident.
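An acceptable use policy is easier to enforce when it is also machine-readable. A minimal sketch, assuming illustrative category names (these sets are not a real policy and must come from your own legal review):

```python
# Hypothetical policy encoding -- category names are illustrative only.
PROHIBITED_USE_CASES = {"automated_hiring_decision", "medical_diagnosis"}
HUMAN_REVIEW_REQUIRED = {"customer_facing_content", "financial_summaries"}

def check_use_case(use_case: str) -> str:
    """Return the policy outcome for a proposed use case."""
    if use_case in PROHIBITED_USE_CASES:
        return "prohibited"
    if use_case in HUMAN_REVIEW_REQUIRED:
        return "allowed_with_human_review"
    return "allowed"
```

Integrations can call a check like this at request time, so the published policy and the enforced policy cannot drift apart.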
Data Classification Policy
Classify data by sensitivity: Public -- Internal -- Confidential -- Restricted. Define which classifications may be sent to Claude under which deployment model. Restricted data may require on-premises deployment, or may be barred from any external AI system entirely.
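The classification-to-deployment mapping can be enforced as a simple lookup. A sketch using the four tiers above; the deployment-model names are assumptions for this example, not Anthropic product names:

```python
# Assumed deployment-model identifiers -- substitute your own.
ALLOWED_DEPLOYMENTS = {
    "public":       {"claude_api", "claude_enterprise", "on_prem"},
    "internal":     {"claude_api", "claude_enterprise", "on_prem"},
    "confidential": {"claude_enterprise", "on_prem"},
    "restricted":   {"on_prem"},  # or barred from external AI entirely
}

def may_send(classification: str, deployment: str) -> bool:
    """Gate a request: is this data tier permitted under this deployment?"""
    return deployment in ALLOWED_DEPLOYMENTS.get(classification, set())
```

An unknown classification defaults to an empty set, so anything unclassified is blocked rather than allowed.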
Use-Case Approval Workflow
New Claude integrations must pass a review gate covering: Legal (liability), Privacy (GDPR/CCPA), Security (threat model), Architecture (integration design), and Risk (EU AI Act tier). Target a 5-business-day SLA for standard use cases.
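The five review gates and the SLA can be tracked with a small record per request. A sketch under stated assumptions (the SLA check approximates 5 business days as 7 calendar days; a real system would use a business-day calendar):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# The five gates named in the workflow above.
REQUIRED_REVIEWS = ("legal", "privacy", "security", "architecture", "risk")

@dataclass
class UseCaseRequest:
    name: str
    submitted: date
    approvals: set = field(default_factory=set)

    def approve(self, review: str) -> None:
        if review not in REQUIRED_REVIEWS:
            raise ValueError(f"unknown review gate: {review}")
        self.approvals.add(review)

    def status(self, today: date) -> str:
        if self.approvals >= set(REQUIRED_REVIEWS):
            return "approved"
        # crude SLA proxy: 5 business days ~ 7 calendar days
        if today > self.submitted + timedelta(days=7):
            return "sla_breached"
        return "in_review"
```

A request is approved only when every gate has signed off; a stalled request surfaces as an SLA breach instead of disappearing.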
Usage Monitoring
Instrument every Claude integration: log prompts, responses, user IDs, cost, latency, and errors. Build dashboards showing adoption, cost per department, error rates, and flagged outputs. Review monthly at the CoE steering meeting.
Governance Flow -- Use-Case Intake
Regulatory Landscape
| Regulation | Scope | Key Requirement for AI | Action Required |
|---|---|---|---|
| EU AI Act | Any AI system used in the EU | Risk classification; high-risk systems need conformity assessment | Classify each use case by risk tier; document controls |
| GDPR / UK GDPR | Personal data of EU/UK residents | Lawful basis for processing; data minimisation; right to explanation | DPA with Anthropic; PII detection; avoid sending personal data |
| CCPA | California residents' personal data | No sale of personal data; right to deletion | Verify Anthropic DPA covers CCPA; log and delete on request |
| SOC 2 | Enterprise SaaS security | Vendor security posture assessment | Request Anthropic SOC 2 report; include in vendor review |
| Sector-specific | Finance (FCA/SEC), Healthcare (HIPAA) | Varies by sector; often prohibits automated decisions | Legal review per use case; may require on-prem deployment |
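The EU AI Act row calls for classifying each use case by risk tier. A first-pass triage can be sketched as below; the tier labels follow the Act's published categories, but the domain triggers are simplified assumptions and no substitute for legal review:

```python
# Simplified, non-authoritative triggers -- a lawyer makes the final call.
PROHIBITED_DOMAINS = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"employment", "credit_scoring", "education", "law_enforcement"}

def triage_risk_tier(domain: str) -> str:
    """First-pass EU AI Act tier for a use case's application domain."""
    if domain in PROHIBITED_DOMAINS:
        return "prohibited"
    if domain in HIGH_RISK_DOMAINS:
        return "high_risk"  # conformity assessment required
    return "limited_or_minimal_risk"
```

The output feeds the Risk gate of the approval workflow: high-risk use cases get the documented controls the table requires, and prohibited ones are rejected at intake.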
When employees cannot access approved Claude tools easily, they use consumer Claude.ai, ChatGPT, or other tools on personal accounts -- outside all your governance controls. The best mitigation for shadow AI is making your approved, governed Claude experience better than the consumer alternative, not restricting access.
ShopMate -- Audit Logging
```python
# shopmate/logging/audit.py -- every ShopMate API call logged for compliance
import anthropic, json, uuid
from collections import defaultdict
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("logs/shopmate_audit.jsonl")
LOG_FILE.parent.mkdir(exist_ok=True)
client = anthropic.Anthropic()

def logged_create(brand_id: str, feature: str, **kwargs):
    """Wrap every ShopMate Claude call with audit logging."""
    resp = client.messages.create(**kwargs)
    entry = {
        "id": str(uuid.uuid4()),
        "ts": datetime.now(timezone.utc).isoformat(),
        "brand": brand_id,
        "feature": feature,
        "model": kwargs.get("model"),
        "tokens_in": resp.usage.input_tokens,
        "tokens_out": resp.usage.output_tokens,
        # example per-million-token rates; update to match your model's pricing
        "cost_usd": round(resp.usage.input_tokens / 1e6 * 0.80
                          + resp.usage.output_tokens / 1e6 * 4.00, 6),
    }
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(entry) + "\n")  # JSONL: one entry per line
    return resp

# Monthly cost report per brand
def monthly_report():
    logs = [json.loads(line) for line in LOG_FILE.read_text().splitlines()]
    by_brand = defaultdict(lambda: {"calls": 0, "cost": 0.0})
    for log in logs:
        by_brand[log["brand"]]["calls"] += 1
        by_brand[log["brand"]]["cost"] += log["cost_usd"]
    print(f"{'Brand':<15} {'Calls':>6} {'Cost':>10}")
    for brand, s in sorted(by_brand.items(), key=lambda x: -x[1]["cost"]):
        print(f"{brand:<15} {s['calls']:>6,} ${s['cost']:>9.2f}")
```