AI Assistance is rolling out across the platform
The capabilities below describe the AI layer being wired into the platform under FPD-601. Today we deliver them inside Custom Build and Compliance engagements; the productized, per-tier rollout lands over the course of Q3.
Join the waitlist
AI handles the busywork. We handle the stakes.
Vanta's agent suggests. Ours executes — because there's a human-plus-AI loop behind it, not just a model. AI drafts evidence, gap fixes, and comms. Our engineers review. You approve. We ship.
Five capabilities, all grounded in your real data
Every output cites its source — a control, a policy, a project file, an evidence artifact. If there is no source, the draft says so instead of inventing one.
Control gap detection
AI scans your live cloud configuration against your active framework controls (SOC 2, HIPAA, ISO 27001) and flags drift the moment it appears — before an auditor sees it.
Evidence triage & auto-tagging
Inbound evidence (screenshots, configs, logs, attestations) is auto-classified, mapped to the right control, and timestamped. Your team reviews — never sorts.
Policy Q&A from your own docs
Ask plain-English questions across your policies, runbooks, and project docs. Cited answers from your real content — never invented, never used to train a vendor's model.
Weekly digest
Every Monday: an AI-written summary of what changed, what needs you, and what your team handled. Compliance, projects, billing — one inbox view.
Smart messaging drafts
Draft incident comms, customer security questionnaires, and policy attestation reminders. Our team reviews and ships — you never send raw AI output to a customer.
Vanta and Drata sell agents. We sell the loop.
Both Vanta and Drata genuinely have AI agents — they're real, useful features. But they're pure SaaS: when AI suggests something, you still write the code, push the change, and chase the audit. We bundle the engineers.
Vanta
- Suggests evidence and control gaps inside the Vanta UI
- You write the policy edit, you push the config change, you run the remediation
- No team to escalate to when AI is wrong
Drata
- Auto-collects evidence and proposes control improvements
- Still your team executing the fix in your environment
- No engineering hours bundled with the subscription
Us
- AI drafts evidence, gap fixes, and policy updates
- Our engineers review every draft and execute the remediation in your cloud
- You approve. We ship. AI is leverage, not a chatbot you babysit.
Four steps from prompt to production
No artifact reaches your auditor, your customer, or your cloud without a human in the middle. AI is the leverage. Our engineers are the accountability.
- Step 1
AI drafts
Models propose evidence, draft policies, and flag control gaps using your real data — never a generic template.
- Step 2
Engineer reviews
Our team reviews every AI-generated artifact. We catch hallucinations, fix factual errors, and add the human judgment AI can’t.
- Step 3
You approve
You see the reviewed draft with full context: what AI proposed, what we changed, why. One-click approve, or request edits.
- Step 4
Platform ships
Approved evidence lands in the audit folder. Approved comms send. Approved code merges. Every step audited.
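The four-step loop above can be sketched as a simple state machine. This is an illustrative sketch, not the platform's actual API; every name here (`ArtifactState`, `advance`) is hypothetical.

```python
from enum import Enum, auto

class ArtifactState(Enum):
    AI_DRAFTED = auto()          # Step 1: model proposes the artifact
    ENGINEER_REVIEWED = auto()   # Step 2: human review and corrections
    CLIENT_APPROVED = auto()     # Step 3: one-click client approval
    SHIPPED = auto()             # Step 4: lands in the audit folder, sends, or merges

# Allowed transitions: the loop is linear, so no step can be skipped.
TRANSITIONS = {
    None: ArtifactState.AI_DRAFTED,
    ArtifactState.AI_DRAFTED: ArtifactState.ENGINEER_REVIEWED,
    ArtifactState.ENGINEER_REVIEWED: ArtifactState.CLIENT_APPROVED,
    ArtifactState.CLIENT_APPROVED: ArtifactState.SHIPPED,
}

def advance(state):
    """Move an artifact one step forward; raise if the step would be skipped."""
    if state not in TRANSITIONS:
        raise ValueError(f"no transition from {state}")
    return TRANSITIONS[state]
```

The point of the linear transition table is that nothing can jump from "AI drafted" straight to "shipped": human review and client approval sit between them by construction.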
AI Assistance is included with Hosting + Compliance
Token usage is included in every Run tier. Enterprise can bring its own model endpoint (Azure OpenAI, AWS Bedrock, or self-hosted) so prompts never leave your environment.
- Weekly digest preview
- Smart messaging drafts (read-only)
Hosting + Compliance
- Control gap detection
- Evidence triage & auto-tagging
- Weekly digest
- Smart messaging drafts
- 500K tokens/mo included
Full Service
- Everything in Hosting + Compliance
- Policy Q&A with embeddings over your full doc corpus
- 2M tokens/mo included
Enterprise
- Everything in Full Service
- Bring your own model (Azure OpenAI, AWS Bedrock, self-hosted)
- Unlimited tokens — billed at cost
Useful, accountable, never blocking
Grounded, not generative
Every AI response cites the underlying control, policy, evidence, or doc — so you can verify what it tells you. No source, no answer.
Transparent token tracking
Every AI call is logged per feature and per organization. You and your customers can see exactly what AI is being spent on.
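A per-feature, per-organization token ledger like the one described can be sketched in a few lines. This is a hypothetical shape, assuming a simple append-only log; the real schema and field names are not part of this document.

```python
import datetime

def log_ai_call(ledger, org_id, feature, prompt_tokens, completion_tokens):
    """Append one AI call to a per-org, per-feature token ledger (illustrative)."""
    ledger.append({
        "org": org_id,
        "feature": feature,
        "tokens": prompt_tokens + completion_tokens,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

def tokens_by_feature(ledger, org_id):
    """Roll up spend so a customer can see exactly what AI is being used for."""
    totals = {}
    for entry in ledger:
        if entry["org"] == org_id:
            totals[entry["feature"]] = totals.get(entry["feature"], 0) + entry["tokens"]
    return totals
```

Logging at the call level, rather than as a monthly aggregate, is what makes the per-feature breakdown auditable after the fact.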
Your data stays yours
No training on your data. Sensitive fields are redacted before any model call. Full audit log of every AI interaction. Bring your own model on Enterprise.
Graceful fallback
When AI is unavailable, unsure, or disabled, the platform falls back to deterministic templates and human-in-the-loop flows. Never broken, never blocking.
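The fallback pattern above, AI first with a deterministic template behind it, can be sketched as a small wrapper. All function names are illustrative assumptions, not the platform's API.

```python
def render_digest(ai_generate, deterministic_template, events, ai_enabled=True):
    """Produce a digest: prefer AI, fall back to a deterministic template.

    `ai_generate` may raise or return None when the model is unavailable or
    unsure; the template path always succeeds, so the feature is never blocked.
    """
    if ai_enabled:
        try:
            draft = ai_generate(events)
            if draft:
                return draft
        except Exception:
            pass  # model error: fall through to the deterministic path
    return deterministic_template(events)
```

Because the deterministic path handles all three cases (unavailable, unsure, disabled), turning AI off is a config change rather than an outage.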
Honest answers about how this works
Is this just another chatbot?
No. There is no chat sidebar, no popup, no AI mascot. AI is a layer inside the platform that drafts evidence, classifies inbound items, and writes summaries. You interact with the platform — not with the AI directly.
Where does our data go?
Your data stays in your cloud accounts and our hosted infrastructure. Model calls go to our chosen provider (configurable per environment) with training opt-out enabled. On Enterprise you can bring your own model endpoint — Azure OpenAI, AWS Bedrock, or a self-hosted inference server — so prompts never leave your environment.
Can we bring our own model?
Yes, on Enterprise. We support Azure OpenAI, AWS Bedrock, and self-hosted endpoints (vLLM, TGI, Ollama). The platform routes model selection per feature, so you can use your model for sensitive workflows and ours for everything else.
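Per-feature routing like this amounts to a lookup table from feature to endpoint. The table below is a hypothetical sketch: the feature keys, URLs, and function names are assumptions for illustration, not the real configuration format.

```python
# Hypothetical routing table: sensitive workflows go to the customer's own
# endpoint, everything else to the platform default.
ROUTES = {
    "policy_qa":        "https://customer.openai.azure.com",  # BYO endpoint
    "evidence_triage":  "https://customer.openai.azure.com",
    "weekly_digest":    "platform-default",
    "messaging_drafts": "platform-default",
}

def endpoint_for(feature, routes=ROUTES, default="platform-default"):
    """Resolve which model endpoint serves a given feature."""
    return routes.get(feature, default)
```

Routing at the feature level, rather than globally, is what lets sensitive prompts stay inside your environment while low-risk features use the shared default.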
Can we turn the AI off?
Yes. Every AI feature has a per-organization toggle. With AI disabled, the platform falls back to deterministic templates and human-in-the-loop workflows — slower, but unchanged in capability. AI is leverage, not a dependency.
Can AI answer our security questionnaires?
Yes — draft-and-review. AI drafts answers from your existing policies and prior questionnaires; our team reviews every answer for accuracy before you send it. We will not send raw AI output to a customer under your name.
What stops the AI from hallucinating?
Two safeguards. (1) Every AI draft is grounded — answers cite the source document, evidence file, or control. If there is no source, the draft says "no source found" instead of inventing one. (2) Every customer-facing artifact is human-reviewed before it ships. AI drafts. Engineers ship.
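The first safeguard, cite a source or say "no source found", can be sketched as a gate on the draft itself. The data shapes here (`(kind, ref)` tuples, the returned dict) are illustrative assumptions, not the real data model.

```python
def grounded_draft(answer_text, sources):
    """Return a draft only when it can cite a source; otherwise say so explicitly.

    `sources` is a list of (kind, ref) tuples, e.g. ("control", "CC6.1") or
    ("policy", "access-control.md"). Hypothetical shapes for illustration.
    """
    if not sources:
        # No grounding available: refuse to invent an answer.
        return {"text": "no source found", "citations": []}
    return {
        "text": answer_text,
        "citations": [f"{kind}:{ref}" for kind, ref in sources],
    }
```

Making "no source found" a first-class output, rather than an error, is what lets the human reviewer in safeguard (2) see exactly which drafts lack grounding.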
Want to put the loop to work?
We will scope a 2-week pilot inside one of your existing engagements so you can see the difference before committing to a tier.