Checks Harness

primust-checks is the open-source path for local preview checks, custom rules, and an easier on-ramp before you issue full runtime credentials.

Use it when you want to see value quickly, keep early work local, or bring custom checks under a consistent result model.

Playbook Position

Use this page if you want to run checks first and decide about full runtime issuance later. If you already know you want emitted credentials, go back to Quickstart.

Install

```bash
pip install primust-checks
```

Two Modes

Without API key

Run checks locally, inspect results, and see gaps. No VPEC is issued.

With API key

Run the same checks, but now they feed the Primust runtime evidence path and can contribute to emitted credentials.

```python
from primust_checks import Harness

harness = Harness(policy="ai_agent_general_v1")
result = harness.run(
    input="Summarize this customer complaint",
    output="...",
)

print(result.passed)
print(result.gaps)
print(result.vpec)  # None without an API key
```
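The two modes differ only in whether `result.vpec` is populated. A minimal, self-contained sketch of the branching a caller might do — using a stand-in dataclass for illustration, not the real `primust_checks` result type:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MockResult:
    """Stand-in for a harness result; real fields may differ."""
    passed: bool
    gaps: List[str] = field(default_factory=list)
    vpec: Optional[str] = None  # populated only when an API key is configured

def handle(result: MockResult) -> str:
    # Failed checks block regardless of mode.
    if not result.passed:
        return f"blocked: {result.gaps}"
    # Without an API key, a pass is local-only preview.
    if result.vpec is None:
        return "passed (local preview only, no credential)"
    # With an API key, the pass can feed an emitted credential.
    return f"passed, credential {result.vpec}"

print(handle(MockResult(passed=True)))                   # local preview mode
print(handle(MockResult(passed=True, vpec="vpec-123")))  # API-key mode
```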

Built-In Checks

The current built-in checks in the repo are:

| Check | Purpose |
| --- | --- |
| `secrets_scanner` | Detect likely secrets in governed content. |
| `pii_regex` | Detect common PII patterns. |
| `cost_bounds` | Enforce token or spend ceilings. |
| `command_patterns` | Catch dangerous shell or command patterns. |
| `upstream_vpec_verify` | Verify upstream Primust evidence before downstream use. |
| `schema_validation` | Validate structured payload shape and required fields. |
| `reconciliation_check` | Check aligned values across inputs or sources. |
| `dependency_hash_check` | Bind or verify dependency hash expectations. |
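To give a feel for what these checks do, here is a standalone sketch of the kind of pattern matching a `pii_regex`-style check performs. The patterns below are illustrative only, not the shipped implementation:

```python
import re

# Illustrative patterns; a real PII check would cover many more cases.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_pii(text: str) -> list:
    """Return the sorted names of PII patterns found in text."""
    return sorted(name for name, pat in PII_PATTERNS.items() if pat.search(text))

print(scan_pii("Contact jane@example.com, SSN 123-45-6789"))  # ['email', 'us_ssn']
print(scan_pii("Nothing sensitive here"))                     # []
```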

Starter Bundles

If you do not want to assemble checks one by one, start from one of the shipped bundle IDs.

| Bundle ID | Use case |
| --- | --- |
| `ai_agent_general_v1` | General AI agent baseline. |
| `eu_ai_act_art12_v1` | EU AI Act Article 12 recordkeeping-oriented checks. |
| `hipaa_safeguards_v1` | HIPAA technical safeguards. |
| `soc2_cc_v1` | SOC 2 common-criteria style checks. |
| `coding_agent_v1` | Coding-agent governance baseline. |

Add A Custom Check

```python
from primust_checks import Harness, CheckResult

harness = Harness(policy="ai_agent_general_v1")

@harness.check(name="tone_check", proof_ceiling="execution")
def check_tone(*, input, output=None, context=None, config=None):
    banned = {"idiot", "stupid", "dumb"}
    text = (output or "").lower()
    found = sorted(word for word in banned if word in text)
    return CheckResult(
        passed=not found,
        check_id="tone_check",
        evidence=f"Flagged words: {found}" if found else "Clean",
    )
```
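The logic inside the decorated function is plain Python, so it can be exercised on its own before wiring it into the harness. A self-contained version of the same tone check:

```python
def flagged_words(output, banned=("idiot", "stupid", "dumb")):
    """Return the sorted banned words found in output (empty list if clean)."""
    text = (output or "").lower()
    return sorted(word for word in banned if word in text)

print(flagged_words("That was a stupid mistake"))  # ['stupid']
print(flagged_words("Thanks for the report"))      # []
```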

The point of the harness is not to invent a second governance model. It gives you a simpler place to preview checks and define custom logic before those checks participate in a broader runtime evidence flow.

When To Use The Harness

Important

The harness is a great on-ramp, but it does not replace governance setup for regulated systems. Applicability, obligations, control plans, and approvals still matter if the evidence is going to be relied on externally.

Next

Use SDKs & Adapters for runtime capture, or Quickstart if you want the shortest path to a real emitted credential.