Primust Developer Documentation
Prove what your system did. Mathematically.
New here? Jump to the Quickstart to get your first verifiable credential in under five minutes, or read on for the concepts behind Primust.
What is Primust
You already have checks — access control, schema validation, model evaluation, bias audits, policy enforcement. Primust does not replace them. Primust makes them provable.
The flow is simple:
- Input — your system receives a request, data, or event.
- Checks — your existing governance logic executes: OPA policies, Cedar rules, ML classifiers, schema validators, anything.
- Output — Primust captures the execution trace, binds it cryptographically, and produces a credential.
- Verify — anyone with the credential can verify it independently. No phone calls. No network access. No trust required.
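The four steps above can be sketched end to end. Everything here is illustrative: `run_checks` stands in for your real governance logic, and an HMAC stands in for the Ed25519 signature Primust actually uses (a real verifier would check a public-key signature, so no shared secret would be needed):

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # symmetric stand-in for the issuer's signing key (illustrative only)

def run_checks(request: dict) -> list:
    # Stand-in for real governance logic (OPA policies, Cedar rules, classifiers, ...).
    return [{"check": "schema_valid", "passed": isinstance(request.get("amount"), int)}]

def issue_credential(request: dict) -> dict:
    trace = run_checks(request)                       # 2. checks execute
    body = {                                          # 3. bind the trace cryptographically
        "trace_hash": hashlib.sha256(
            json.dumps(trace, sort_keys=True).encode()).hexdigest(),
    }
    sig = hmac.new(SIGNING_KEY, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}                 # the portable credential

def verify(credential: dict) -> bool:
    # 4. offline verification: recompute the binding; no call to the issuer.
    expected = hmac.new(SIGNING_KEY, json.dumps(credential["body"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

cred = issue_credential({"amount": 42})               # 1. input arrives
```

Note that tampering with the body invalidates the signature, which is what makes the credential self-contained: the verifier needs only the credential and the verification key.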
The credential is called a VPEC — a Verifiable Process Execution Credential. It is portable JSON, verifiable offline, and contains commitment hashes, never raw content.
What is a VPEC
Verifiable Process Execution Credential.
- Verifiable — any party can independently confirm the credential without contacting the issuer.
- Process — it binds to a specific execution: the checks that ran, the inputs they consumed, the outputs they produced.
- Execution — it captures what actually happened at runtime, not what was supposed to happen.
- Credential — it is a portable, self-contained JSON object that travels with the artifact it describes.
A VPEC is content-blind: a verifier can confirm that a bias audit passed without ever seeing the model weights or training data. Raw content never transits Primust — only commitment hashes do.
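The commitment idea can be sketched in a few lines. The hash function and JSON canonicalization here are assumptions for illustration, not Primust's actual encoding: the issuer hashes the raw result locally, and only that digest appears in the credential.

```python
import hashlib
import json

def commit(payload: dict) -> str:
    """Commitment hash over canonical JSON: the only thing that transits Primust."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# The process owner commits to the audit result locally...
audit_result = {"check": "bias_audit", "passed": True, "dataset": "train_v3"}
commitment = commit(audit_result)

# ...and anyone later shown the same payload can confirm it matches the
# commitment, without the payload ever passing through the issuer.
assert commit(audit_result) == commitment
```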
Proof Levels
Not all evidence is equal. Primust assigns every check a proof level based on the cryptographic strength of its binding. Six levels, from strongest to weakest:
| Level | Enum | How proven | Trust required |
|---|---|---|---|
| Mathematical | mathematical | Zero-knowledge proof (Noir circuit) | None — pure math |
| Verifiable Inference | verifiable_inference | ONNX-to-circuit (EZKL), Modal GPU | Trust model weights |
| Bounded Inference | operator_bound | Per-operator Merkle commitment verified against Primust-signed drift profile | Trust the measurement |
| Execution | execution | Model-hash-binding circuit | Trust the model (public, auditable) |
| Witnessed | witnessed | Two RFC 3161 timestamps + Ed25519 sig | Trust reviewer identity |
| Attestation | attestation | Invocation-binding circuit | Trust process owner's word |
A credential's proof_level_floor is the lowest proof level among all the checks it contains. A single attestation record forces the floor to attestation, no matter how well proven the other records are. This is by design: a chain is only as strong as its weakest link. The floor is always derived, never set manually.
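The weakest-link rule is simple to state in code. This is a sketch of the derivation, not Primust's implementation; the level ordering follows the table above:

```python
# Proof levels ordered strongest -> weakest, matching the table above.
LEVELS = ["mathematical", "verifiable_inference", "operator_bound",
          "execution", "witnessed", "attestation"]

def proof_level_floor(check_levels: list) -> str:
    """Derived floor: the weakest (highest-index) level among all checks."""
    return max(check_levels, key=LEVELS.index)

# One self-attested check drags the floor down regardless of the rest.
assert proof_level_floor(["mathematical", "operator_bound", "attestation"]) == "attestation"
assert proof_level_floor(["mathematical", "witnessed"]) == "witnessed"
```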
Bounded Inference
Bounded Inference is the proof level for HuggingFace transformers and other ML classifiers. It sits between Verifiable Inference and Execution in the hierarchy.
Why it's stronger than Execution: primust verify on an Execution VPEC can only check the signature, timestamp, and schema — it cannot verify the output came from the declared model. Bounded Inference additionally commits the per-operator execution trace as a Merkle tree. primust verify checks the merkle_root against the Primust-signed drift profile for the declared model class and GPU class — providing offline-verifiable evidence the committed trace is consistent with running that model.
How it works: The SDK instruments your PyTorch/ONNX graph at operator granularity. On each inference, per-operator outputs are recorded locally, a Merkle tree is built, and only the merkle_root transits Primust. Raw ML outputs never leave your environment.
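The per-operator commitment can be sketched as follows. The choice of SHA-256 and the odd-node duplication scheme are assumptions for illustration; Primust's actual tree construction may differ:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> str:
    """Merkle root over per-operator outputs; only this root leaves the environment."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

# Per-operator outputs recorded locally during one inference (illustrative bytes).
ops = [b"matmul:...", b"softmax:...", b"argmax:..."]
root = merkle_root(ops)
```

Because the root commits to every operator output in order, any change to the trace changes the root, which is what lets the verifier compare it against the signed drift profile.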
Overhead: 0.3% additional latency. No GPU proving job. No async proof generation — VPECs issue immediately.
See the Bounded Inference deep dive for full details, or the Quickstart section for setup.
Provable Surface
The provable surface is the share of your governance that is cryptographically provable. It answers: of all the declared checks in this process, how much produces verifiable evidence — and at what level?
A sample breakdown:
provable_surface: 0.87
provable_surface_breakdown:
  mathematical: 0.52          ← ZK-proven: OPA rules, regex, decision trees
  bounded_inference: 0.18     ← Trace verified against model profile
  verifiable_inference: 0.00  ← ZK circuit ML (Tier 2, if triggered)
  execution: 0.12             ← Hash-bound model/tool calls
  witnessed: 0.05             ← Human review with Ed25519
  attestation: 0.00           ← Self-attested only
proof_level_floor: operator_bound  ← weakest-link scalar
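In this sample the surface equals the sum of the non-attestation shares (0.52 + 0.18 + 0.00 + 0.12 + 0.05 = 0.87), which suggests self-attested checks do not count toward it. A minimal sketch under that assumption:

```python
breakdown = {
    "mathematical": 0.52, "bounded_inference": 0.18, "verifiable_inference": 0.00,
    "execution": 0.12, "witnessed": 0.05, "attestation": 0.00,
}
# Assumption: attestation is self-attested only, so it is excluded from the surface.
provable_surface = round(sum(v for k, v in breakdown.items() if k != "attestation"), 2)
assert provable_surface == 0.87
```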
Lead with provable surface, not the floor.
- Wrong: "Your proof level floor is attestation."
- Right: "52% of your governance is mathematically proven."
Policy Bundles
Policy bundles are pre-built collections of checks mapped to specific compliance frameworks or use cases. Each bundle defines required checks, minimum proof levels, and framework mappings.
| Bundle ID | Description |
|---|---|
| ai_agent_general_v1 | General-purpose AI agent governance — input validation, output filtering, cost bounds, tool-call authorization |
| eu_ai_act_art12_v1 | EU AI Act Article 12 — logging completeness, explanation commitment, bias audit, risk classification |
| hipaa_safeguards_v1 | HIPAA technical safeguards — PHI detection, access control, audit log integrity |
| soc2_cc_v1 | SOC 2 Common Criteria — access control, change management, risk assessment |
| coding_agent_v1 | Coding agent governance — command patterns, secrets scan, enforcement rate |
| supply_chain_governance_v1 | Software supply chain — upstream VPEC verification, dependency hash checks |
| financial_data_governance_v1 | Financial data pipelines — reconciliation check, schema validation, upstream VPEC verification |
Bundles are composable. Apply multiple bundles to cover overlapping requirements.
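One plausible reading of composition, sketched with hypothetical bundle contents and a merge rule of our own (union of required checks; where bundles overlap, the stricter minimum proof level wins). None of the check names or levels below come from a real bundle definition:

```python
# Proof levels ordered strongest -> weakest, as in the proof-level table.
LEVELS = ["mathematical", "verifiable_inference", "operator_bound",
          "execution", "witnessed", "attestation"]

# Hypothetical bundles: required check -> minimum proof level.
bundle_a = {"pii_scan": "mathematical", "audit_log": "witnessed"}
bundle_b = {"audit_log": "execution", "access_control": "mathematical"}

def compose(*bundles):
    """Union of required checks; keep the stricter (stronger) minimum on overlap."""
    merged = {}
    for bundle in bundles:
        for check, level in bundle.items():
            if check not in merged or LEVELS.index(level) < LEVELS.index(merged[check]):
                merged[check] = level
    return merged

required = compose(bundle_a, bundle_b)
assert required["audit_log"] == "execution"  # stricter of witnessed vs. execution
```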