# Quickstart

From zero to your first verifiable credential in under five minutes.
## Install

### Python

```shell
pip install primust primust-ai
```

### Node.js

```shell
npm install @primust/sdk
```

### Java (Maven)

```xml
<dependency>
  <groupId>com.primust</groupId>
  <artifactId>primust-sdk</artifactId>
  <version>1.0.0</version>
</dependency>
```

### Framework adapters

```shell
pip install primust-langgraph      # LangGraph
pip install primust-openai-agents  # OpenAI Agents SDK
pip install primust-google-adk     # Google ADK
pip install primust-otel           # OpenTelemetry
```

### Verifier (standalone, free forever)

```shell
pip install primust-verify
```
## Get a Sandbox Key

Sign up at primust.com. Sandbox is free — no credit card required. Your key looks like `pk_sb_xxx`.

Sandbox VPECs (issued under `pk_sb_xxx` keys) are real proofs — same cryptography, same schema — but carry `environment: "sandbox"` and are not audit-acceptable: no RFC 3161 timestamping, no KMS signing. Converting to production requires no re-instrumentation: the code stays the same, and upgrading the key upgrades the status.

```shell
export PRIMUST_API_KEY=pk_sb_xxx
```
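Rather than hard-coding the key as in the snippets below, you can read it from the environment and sanity-check which environment it targets. A minimal sketch; the `key_environment` helper is illustrative, not part of the SDK, and assumes only the `pk_sb_` / `pk_live_` prefix convention used in this guide:

```python
import os

def key_environment(api_key: str) -> str:
    """Classify a Primust API key by prefix (illustrative helper,
    based on the pk_sb_/pk_live_ convention shown in this guide)."""
    if api_key.startswith("pk_sb_"):
        return "sandbox"
    if api_key.startswith("pk_live_"):
        return "production"
    raise ValueError("unrecognized key format")

# Read the key from the environment rather than committing it to source
api_key = os.environ.get("PRIMUST_API_KEY", "pk_sb_xxx")
print(key_environment(api_key))
```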
## First VPEC — Three Lines

### LangGraph

```python
import primust
import primust_ai

p = primust.Pipeline(api_key="pk_sb_xxx", policy="ai_agent_general_v1")
primust_ai.autoinstrument(pipeline=p)

result = your_graph.invoke({"input": user_message})

vpec = p.close()
print(vpec.vpec_id)
print(vpec.proof_level_floor)           # mathematical, operator_bound, execution, ...
print(vpec.provable_surface)            # 0.87
print(vpec.provable_surface_breakdown)  # {"mathematical": 0.52, "bounded_inference": 0.18, ...}
```
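The breakdown fractions sum to the overall `provable_surface`, so a quick consistency check on a returned VPEC is one line. Illustrative only, assuming the breakdown is exposed as a plain dict of fractions (the values below match the sample VPEC shown later in this guide):

```python
# Per-level fractions from a sample VPEC's provable_surface_breakdown
breakdown = {"mathematical": 0.52, "bounded_inference": 0.18,
             "execution": 0.12, "witnessed": 0.05}

# The overall provable_surface is the sum of the per-level fractions
provable_surface = round(sum(breakdown.values()), 2)
print(provable_surface)  # → 0.87
```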
### OpenAI Agents SDK

```python
from primust.adapters.openai_agents import PrimustAgentHook

p = primust.Pipeline(api_key="pk_sb_xxx", policy="ai_agent_general_v1")
agent = Agent(name="My Agent", instructions="...", hooks=PrimustAgentHook(pipeline=p))
```

### Google ADK

```python
from primust.adapters.google_adk import PrimustADKCallback

p = primust.Pipeline(api_key="pk_sb_xxx", policy="ai_agent_general_v1")
agent.add_callback(PrimustADKCallback(pipeline=p))
```
### OpenTelemetry

```python
from opentelemetry import trace
from primust.adapters.otel import PrimustSpanProcessor

tracer_provider = trace.get_tracer_provider()
tracer_provider.add_span_processor(
    PrimustSpanProcessor(api_key="pk_sb_xxx", policy="ai_agent_general_v1")
)
```
### OPA (Mathematical ceiling)

```go
import "github.com/primust/primust-opa"

client := primust.NewClient("pk_sb_xxx")
result, vpec, err := primustopa.EvalWithProof(ctx, client, query, input)
// proof_level_floor: mathematical
```
### Cedar / Drools / IBM ODM (Java, Mathematical ceiling)

```java
PrimustClient client = new PrimustClient("pk_sb_xxx");
PrimustCedar cedar = new PrimustCedar(client, "ai_agent_general_v1");
AuthzResult result = cedar.isAuthorized(principal, action, resource);
// result.vpec() — Mathematical-level proof
```
## Bounded Inference — HuggingFace Classifiers

If you're running HuggingFace toxicity, PII, prompt injection, or bias classifiers, you get Bounded Inference automatically when a Primust profile exists for your model.

Bounded Inference is stronger than Execution: `primust verify` checks the committed operator trace against the Primust-signed drift profile for your model class, providing offline-verifiable evidence that the trace is consistent with running the declared model.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

@p.record_check("toxicity_check")
def run_toxicity(text):
    result = classifier(text)
    return CheckResult(passed=result[0]["label"] == "non-toxic", evidence=result[0]["score"])

# The SDK automatically:
# 1. Detects the DistilBERT-class model
# 2. Looks up primust/distilbert-class/v1.2.0 in the Model Profile Registry
# 3. Wraps the check in a boundary_rule decomposition:
#      pre:  tokenization (Mathematical)
#      core: bound_committed_inference (Bounded Inference)
#      post: threshold check (Mathematical)
# VPEC: provable_surface ~0.71, proof_level_floor: operator_bound
```
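The committed operator trace is described under Key Concepts as a per-operator Merkle commitment. A toy illustration of the shape of such a commitment, using SHA-256 over one leaf per operator — the hash function, leaf encoding, and operator names here are assumptions for illustration, not Primust's actual scheme:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Toy Merkle root: hash each leaf, then pairwise-hash levels upward,
    duplicating the last node on odd-sized levels."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# One leaf per operator in the inference trace (names are made up)
trace = [b"tokenize:v1", b"embed:v1", b"attention:v1", b"classify:v1"]
commitment = merkle_root(trace).hex()
print(commitment[:16])
```

A verifier holding the same trace recomputes the root and compares it to the committed value; any reordered or altered operator changes the root.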
If your model isn't in the registry yet, the check falls back to Execution and fires a model_profile_missing advisory gap. Request calibration at app.primust.com/policy/registry.
### Supported models (initial registry)
| Category | Models |
|---|---|
| PII detection | distilbert-base-uncased, bert-base-NER, dbmdz/bert-large-cased-finetuned-conll03 |
| Toxicity | unitary/toxic-bert, martin-ha/toxic-comment-model, s-nlp/roberta_toxicity_classifier |
| Prompt injection | deepset/deberta-v3-base-injection, protectai/deberta-v3-base-prompt-injection |
| Bias detection | d4data/bias-detection-model, valurank/distilroberta-bias |
| Content moderation | facebook/roberta-hate-speech-dynabench-r4-target, cardiffnlp/twitter-roberta-base-offensive |
## XGBoost & Sklearn — Mathematical Ceiling

Decision-tree and other deterministic classical models are automatically elevated to the Mathematical proof level via the decision_path_model stage type. The decision path is committed as arithmetic constraints — the same mechanism as an OPA rule. No GPU, no ZK circuit compilation required.

```python
import xgboost as xgb

model = xgb.XGBClassifier()
model.load_model("fraud_model.json")

@p.record_check("fraud_score")
def run_fraud_check(transaction):
    score = model.predict_proba([transaction.features])[0][1]
    return CheckResult(passed=score < 0.85, evidence={"score": round(score, 4)})

# SDK detects XGBClassifier → stage_type: decision_path_model → proof_level: mathematical
# VPEC: proof_level_floor: mathematical
```

Works automatically for: xgboost, lightgbm, sklearn.ensemble.RandomForestClassifier, sklearn.linear_model.LogisticRegression, sklearn.svm.LinearSVC.
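The intuition behind decision_path_model can be sketched in plain Python: the path a tree takes on a given input is just a conjunction of threshold comparisons, which is exactly the kind of statement arithmetic constraint systems prove directly. A toy illustration only, not the SDK's internal representation; the features and thresholds are invented:

```python
import operator

# A decision path: each step is (feature_index, comparison, threshold)
OPS = {"<": operator.lt, ">=": operator.ge}
path = [(0, "<", 500.0),   # amount < 500
        (2, ">=", 0.3),    # velocity >= 0.3
        (1, "<", 3.0)]     # country_risk < 3

def path_satisfied(features: list[float]) -> bool:
    """Evaluate every threshold comparison on the path: a pure,
    replayable predicate with no model execution involved."""
    return all(OPS[op](features[i], t) for i, op, t in path)

print(path_satisfied([120.0, 1.0, 0.7]))   # → True (all three constraints hold)
print(path_satisfied([900.0, 1.0, 0.7]))   # → False (900 is not < 500)
```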
If you declare `stage_type: "open_source_ml"` for one of these models, the SDK will warn at manifest registration:

```
⚠ STAGE TYPE MISMATCH: You declared stage_type: "open_source_ml" but passed an
sklearn.ensemble.RandomForestClassifier. This model qualifies for "decision_path_model"
at Mathematical ceiling. Run: primust upgrade-manifest ./manifests/fraud_check.json
```
## LLM Calls — Boundary Rule Decomposition

A closed LLM call (GPT-4, Claude, Gemini) is Execution level. But you can add deterministic pre/post conditions to raise the provable_surface significantly — without changing the LLM call itself.

```python
# Without decomposition: proof_level = execution for this check
@p.record_check("agent_response")
def run_agent(text):
    response = openai.chat.completions.create(model="gpt-4o", messages=[...])
    return CheckResult(passed=True, evidence={"model": "gpt-4o"})

# With boundary_rule decomposition: pre/post conditions are Mathematical
@p.record_check("agent_response", stage_type="boundary_rule")
def run_agent_with_bounds(text):
    # Pre-conditions — Mathematical (SDK instruments automatically)
    assert len(text) < 8000
    assert not contains_injection(text)

    # LLM call — Execution
    response = openai.chat.completions.create(model="gpt-4o", messages=[...])
    output = response.choices[0].message.content

    # Post-conditions — Mathematical
    assert len(output) < 4000
    assert not is_toxic(output)
    return CheckResult(passed=True, evidence={"model": "gpt-4o"})

# VPEC: provable_surface ~0.71 for this check ("71% of our governance is mathematically proven")
```
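The same pre/post pattern can be factored into a reusable wrapper. A sketch in plain stdlib Python — the `with_bounds` decorator below is illustrative, not a Primust API; the deterministic predicates are the part a verifier can replay, while the wrapped call stays opaque:

```python
from functools import wraps

def with_bounds(pre, post):
    """Run deterministic pre/post predicates around an opaque call.
    The predicates are replayable; the wrapped call is not."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(text):
            if not pre(text):
                raise ValueError("pre-condition failed")
            output = fn(text)
            if not post(output):
                raise ValueError("post-condition failed")
            return output
        return wrapper
    return decorate

@with_bounds(pre=lambda t: len(t) < 8000, post=lambda o: len(o) < 4000)
def run_llm(text):
    return text.upper()  # stand-in for the real (opaque) LLM call

print(run_llm("hello"))  # → HELLO
```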
## Lightweight Witnessed — acknowledged

For Slack approvals, analyst review gates, or manager sign-offs — the acknowledged stage type gives you Witnessed-level proof without provisioning Ed25519 keypairs. It uses OAuth identity instead: a 30-minute setup versus half a day for full human_approval.

```python
from primust.stage_types.acknowledged import record_acknowledgment

# Create a challenge for the approver
challenge = p.open_acknowledged(
    check="loan_approval_review",
    manifest_id=MANIFEST_LOAN,
    content=loan_summary,
    oauth_platform="slack",
    approver_oauth_id="U0123ABCDEF"
)

# After the reviewer approves in Slack:
record_acknowledgment(
    pipeline=p,
    check_id="loan_approval_review",
    manifest_id=MANIFEST_LOAN,
    content=loan_summary,
    oauth_identity="U0123ABCDEF",
    platform="slack",
    approved=True,
    tst_token=rfc3161_timestamp_token
)

# VPEC: stage_type: acknowledged, proof_level: witnessed
# Note: oauth_identity is NEVER stored — only its sha256 hash transits Primust
```
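The "only the sha256 hash transits Primust" property is easy to picture. A minimal sketch of such a commitment — the `platform:id` encoding is an assumption for illustration, not the SDK's actual wire format:

```python
import hashlib

def oauth_identity_commitment(platform: str, oauth_id: str) -> str:
    """Hash the platform-scoped identity; only this digest would leave
    your environment, never the raw ID (illustrative encoding)."""
    return hashlib.sha256(f"{platform}:{oauth_id}".encode()).hexdigest()

digest = oauth_identity_commitment("slack", "U0123ABCDEF")
print(digest)
```

Anyone holding the raw identity can recompute and match the digest; the digest alone does not reveal the identity (beyond guessing attacks on low-entropy IDs).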
## Verify a VPEC

```shell
# Install the standalone verifier — Apache-2.0, free forever, no account required
pip install primust-verify

# Verify
primust verify vpec.json

# Zero network (pinned trust root)
primust verify vpec.json --trust-root primust-pubkey.pem

# Machine-readable output for CI/CD
primust verify vpec.json --json
```

Exit codes: 0 = valid, 1 = invalid, 2 = valid SANDBOX (not audit-acceptable), 3 = valid but key expired/revoked.
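In CI you typically want the build to fail on anything but exit code 0. A sketch of a gate over the exit codes above — the choice to optionally tolerate sandbox proofs in dev pipelines is an assumption, adjust to your policy:

```python
EXIT_MEANINGS = {0: "valid", 1: "invalid", 2: "valid_sandbox", 3: "valid_key_expired"}

def gate(exit_code: int, allow_sandbox: bool = False) -> bool:
    """Return True if the build should pass, given primust verify's exit code."""
    if exit_code == 0:
        return True
    if exit_code == 2 and allow_sandbox:
        return True  # sandbox proofs tolerated only in dev pipelines
    return False

# In CI: code = subprocess.run(["primust", "verify", "vpec.json"]).returncode
print(gate(0), gate(2), gate(2, allow_sandbox=True))  # → True False True
```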
```
✓ Signature valid (Ed25519, key_id: primust-signing-key-2026-01)
✓ Chain intact (12 check records, hash chain unbroken)
✓ ZK proofs valid (3 mathematical claims verified)
✓ Timestamp authentic (existed before 2026-03-17T14:30:00Z, DigiCert RFC 3161)
✓ No governance gaps (0 unresolved)
✓ Profile consistent (primust/distilbert-class/v1.2.0 · A10G)   ← Bounded Inference only

proof_level_floor: operator_bound
provable_surface: 0.87
  mathematical: 0.52
  bounded_inference: 0.18
  execution: 0.12
  witnessed: 0.05

VPEC: vpec_abc123
Environment: production
```

Share with an auditor: verify.primust.com/{vpec_id} — no login required.
## primust-checks (Open Source)

Apache-2.0 harness for running governance checks locally. Checks run in your environment — zero content transits Primust. Add an API key to activate the proof layer; remove it and the harness is observability-only.

```shell
pip install primust-checks
```

```python
from primust_checks import Harness, CheckResult

harness = Harness(policy="eu_ai_act_art12_v1")                       # observability only
harness = Harness(policy="eu_ai_act_art12_v1", api_key="pk_sb_xxx")  # proof layer active

@harness.check
def my_pii_scanner(input, output) -> CheckResult:
    result = your_pii_logic(input)
    return CheckResult(
        passed=not result.found_pii,
        evidence="no_pii_detected" if not result.found_pii else f"types:{result.pii_types}"
    )
```

8 built-in checks: secrets_scanner, pii_regex, cost_bounds, command_patterns, upstream_vpec_verify, schema_validation, reconciliation_check, dependency_hash_check — all Mathematical ceiling.
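A custom check in the same spirit as the built-in secrets_scanner might wrap a deterministic regex scan. The pattern list below is illustrative, not the built-in's actual rules; being a pure function of its input is what lets a check like this sit at the Mathematical ceiling:

```python
import re

# Illustrative patterns only; the built-in secrets_scanner's rules are not shown here
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
]

def scan_for_secrets(text: str) -> bool:
    """Return True if any secret-like pattern matches. Deterministic:
    the same input always yields the same verdict."""
    return any(p.search(text) for p in SECRET_PATTERNS)

print(scan_for_secrets("key = AKIAABCDEFGHIJKLMNOP"))  # → True
print(scan_for_secrets("nothing to see here"))         # → False
```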
## Receiving VPECs — The Downstream Pattern

If you receive AI outputs, data deliveries, or software artifacts from an upstream organization that issues VPECs, verify their governance before processing and record the verification as a Mathematical claim in your own VPEC.

```python
from primust_verify import verify

# Quick verify (any VPEC you receive)
result = verify(vpec_json)
if not result.valid:
    raise Exception(f"Upstream governance failed: {result.failure_reason}")

# Zero-network (high-frequency or air-gapped)
result = verify(vpec_json, trust_root_pem=SUPPLIER_PUBLIC_KEY_PEM)

# Record the verification as a Mathematical claim in your own VPEC
p.record(
    check="upstream_vpec_verify",
    manifest_id="sha256:...",
    inputs={
        "vpec_artifact": received_vpec,
        "expected_issuer_org_id": "org_your_supplier",
        "minimum_proof_level_floor": "execution",
        "required_claims": ["pii_non_detection"],
        "reject_sandbox": True
    }
)
vpec = p.close()

# Your VPEC now contains a Mathematical claim:
# "We verified our upstream's governance before processing their output."
```

This is domain-neutral — it works for AI outputs, financial data deliveries, software artifacts, clinical data, any VPEC-bearing process.
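Enforcing a minimum_proof_level_floor yourself reduces to an ordering comparison over proof levels. A sketch using the level names from this guide — the exact weakest-to-strongest ordering below is inferred from the guide, so treat it as an assumption:

```python
# Assumed strength ordering, weakest → strongest (inferred from this guide;
# operator_bound is the floor name corresponding to Bounded Inference)
LEVELS = ["witnessed", "execution", "operator_bound", "mathematical"]

def meets_floor(actual: str, required: str) -> bool:
    """True if the upstream VPEC's proof_level_floor is at least as
    strong as the level your policy requires."""
    return LEVELS.index(actual) >= LEVELS.index(required)

print(meets_floor("operator_bound", "execution"))  # → True
print(meets_floor("witnessed", "execution"))       # → False
```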
## AIUC-1 Compliance Fields

For EU AI Act, HIPAA, AIUC-1, or FDA 21 CFR Part 11:

```python
# Pipeline init — retention and risk classification
p = primust.Pipeline(
    api_key="pk_sb_xxx",
    policy="eu_ai_act_art12_v1",
    retention_policy="EU_AI_ACT_10Y",
    risk_classification="EU_HIGH_RISK"
)

# Per-check — actor attribution (ALCOA, SOC 2 CC6.1)
p.record(
    check="credit_decision",
    manifest_id=MANIFEST_CREDIT,
    check_result="pass",
    actor_id=f"user_{current_user.id}",
    explanation_commitment=poseidon2(explanation_text)  # plaintext NEVER sent to Primust
)

# Per-check — bias audit (NYC LL144, ECOA)
p.record(
    check="hiring_screen",
    manifest_id=MANIFEST_HIRING,
    check_result="pass",
    bias_audit={
        "protected_categories": ["race", "gender", "age"],
        "disparity_metric": "demographic_parity",
        "disparity_threshold": 0.05,
        "disparity_result_commitment": poseidon2(str(actual_disparity)),  # value NEVER sent
        "result": "pass"
    }
)
```
## Production Checklist

- Switch `pk_sb_xxx` to `pk_live_xxx` — same code, key upgrade only
- Add `activity_store="postgresql://your-db/primust_activity"` for the AI domain pack
- Mode 2: declare required checks in policy (app.primust.com/policy)
- Check the Model Profile Registry for your HuggingFace models (app.primust.com/policy/registry)
- If using XGBoost/RandomForest: confirm the SDK infers `decision_path_model` — check the VPEC `provable_surface_breakdown.mathematical`
- Set up alerts (app.primust.com/settings/alerts)
- Run `primust verify` on your first production VPEC before sharing it with auditors
- If AIUC-1 required: configure `compliance_requirements` in the policy pack
- If receiving VPECs from upstream: configure `upstream_vpec_verify` in policy
- Publish your public key at `/.well-known/primust-pubkey.pem` (Enterprise BYOK only)
## Key Concepts
| Term | Definition |
|---|---|
| GEP | Governed Execution Proof — the cryptographic primitive. Proves a defined governed process ran correctly on specific data. |
| VPEC | Verifiable Process Execution Credential — portable, signed, offline-verifiable JSON artifact. |
| Manifest | Binds a check name to an exact tool version, model hash, and configuration. Content-addressed (sha256:...). |
| proof_level_floor | Weakest-link scalar across all records. DERIVED — never set manually. |
| provable_surface | Share of governance that is cryptographically provable. Distribution shown in provable_surface_breakdown. |
| Bounded Inference | Proof level for HuggingFace transformers. Per-operator Merkle commitment verified against Primust-signed drift profile. Stronger than Execution. |
| decision_path_model | Stage type for XGBoost/sklearn. Mathematical ceiling. Decision path committed as arithmetic constraints. |
| boundary_rule | Wraps any stage type with Mathematical pre/post conditions. Raises provable_surface without changing the underlying model. |
| acknowledged | Stage type for lightweight Witnessed. OAuth identity + challenge hash + single RFC 3161. 30-minute setup. |