Policy Governance

Purpose-built for the policies humans author. Designed for the agents that have to read them.

Most policy programs end as PDFs. Authored once, distributed, never read again. Auditors ask "do you have it?" — never "is it working?" — because the document layer has nothing to say. That changes here.

Dictiva makes every policy machine-readable from the moment you author it. Statements carry their scope, modality, and enforcement mode in fields, not paragraphs — discoverable by humans through the dashboard, by agents through MCP. The same source of truth, two interfaces.

What we build

Three things that have to be true.

Authorship

Statements, not documents. Title, body, modality, scope, enforcement mode — discrete fields the wizard walks you through, AI-assisted if you want a draft. Every author writes against the same schema, so the next author finds what's already there instead of duplicating it.
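As a minimal sketch of what one such statement might look like in code (the field names follow the schema above; the class name, the modality and enforcement vocabularies, and the validation are illustrative assumptions, not Dictiva's actual data model):

```python
from dataclasses import dataclass

# Hypothetical vocabularies -- illustrative only, not Dictiva's actual values.
MODALITIES = {"must", "should", "may"}
ENFORCEMENT_MODES = {"blocking", "advisory", "monitor"}

@dataclass
class PolicyStatement:
    """One discrete, machine-readable policy statement."""
    title: str
    body: str
    modality: str          # e.g. "must" / "should" / "may"
    scope: str             # e.g. "all-agents", "production-only"
    enforcement_mode: str  # e.g. "blocking", "advisory"

    def __post_init__(self):
        # Reject values outside the shared vocabulary, so every author
        # writes against the same schema.
        if self.modality not in MODALITIES:
            raise ValueError(f"unknown modality: {self.modality}")
        if self.enforcement_mode not in ENFORCEMENT_MODES:
            raise ValueError(f"unknown enforcement mode: {self.enforcement_mode}")

stmt = PolicyStatement(
    title="Log all model-output overrides",
    body="Any human override of a model decision must be logged with a reason.",
    modality="must",
    scope="production-only",
    enforcement_mode="blocking",
)
```

Because modality and enforcement mode are constrained fields rather than free prose, two authors can never express the same obligation in incompatible shapes.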

Adoption

Your tenant doesn't start empty. Adopt from a curated library mapped to NIST AI RMF, ISO 27001, ISO 42001, and the EU AI Act. Refine for your context, mark divergence transparently, retire what doesn't apply.

Attestation

Every commitment leaves a trail. Humans acknowledge with a signature; agents attest with a signed W3C Verifiable Credential. Both flow into the same audit log, both answer the same audit question. Governance becomes detection, not paperwork.
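An agent attestation follows the W3C Verifiable Credentials data model. A sketch of the shape (the credential type, issuer DID, and `credentialSubject` field names here are hypothetical, and the cryptographic proof is elided):

```python
attestation = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    # "PolicyAttestation" is a hypothetical credential type for illustration.
    "type": ["VerifiableCredential", "PolicyAttestation"],
    "issuer": "did:example:agent-123",   # hypothetical agent DID
    "issuanceDate": "2026-01-15T12:00:00Z",
    "credentialSubject": {
        "statementId": "stmt-42",        # hypothetical: which statement is attested
        "decision": "acknowledged",
    },
    # A real credential carries a signed "proof" block (e.g. a data-integrity
    # proof) attached by the attesting agent; omitted here.
}

def is_well_formed(vc: dict) -> bool:
    """Minimal structural check: the top-level fields every VC carries."""
    required = {"@context", "type", "issuer", "issuanceDate", "credentialSubject"}
    return required <= vc.keys()
```

The human signature and the agent credential land as the same kind of record in the audit log, which is what lets both answer the same audit question.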

Where to start

Choose your journey.

What we ship pre-mapped

Regulations your auditor already knows.

The library ships statement bundles aligned to the frameworks compliance teams already work against. Adopt the bundle, refine for your context, retire what doesn't apply.

  • NIST AI RMF — Govern, Map, Measure, Manage functions
  • ISO 42001 — AI management system requirements
  • EU AI Act — High-risk system obligations (Aug 2026)
  • ISO 27001 — Information security controls
  • NIST Cybersecurity Framework — Identify, Protect, Detect, Respond, Recover
  • SOC 2 — Trust services criteria

The next time auditors ask "what NIST AI RMF function does this address?" — every statement has the answer in a field, not in a footnote.
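Because the mapping lives in a field, answering that question becomes a filter rather than a document search. A sketch under assumed shapes (the `framework_refs` field and the statement records are hypothetical):

```python
# Hypothetical adopted statements, each carrying its framework mapping as data.
statements = [
    {"title": "Inventory all AI systems",
     "framework_refs": ["NIST AI RMF: Map"]},
    {"title": "Review model risk quarterly",
     "framework_refs": ["NIST AI RMF: Govern", "ISO 42001"]},
    {"title": "Encrypt data at rest",
     "framework_refs": ["ISO 27001"]},
]

def statements_for(framework: str, stmts: list[dict]) -> list[str]:
    """Return titles of statements mapped to the given framework."""
    return [s["title"] for s in stmts
            if any(ref.startswith(framework) for ref in s["framework_refs"])]

statements_for("NIST AI RMF", statements)
# → ["Inventory all AI systems", "Review model risk quarterly"]
```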

Author once. Make it machine-readable from the start.
Adopt the substrate. Build on what auditors already know.
Attest with evidence. Humans and agents alike.

Choose your journey above and start now. Or see how this foundation extends to AI agents next: Agentic Governance →