Model Profile Registry

The Model Profile Registry is a Primust-maintained, signed registry of empirical per-operator drift profiles for governance-relevant ML models. It is what activates Bounded Inference, the proof level for HuggingFace transformers.

What it is

Each profile records how much a model's per-operator outputs drift across different hardware classes (CPU, A10G, A100, H100). Drift is measured over 1000 calibration passes at the 99th, 99.9th, and 99.99th percentile for each operator type (Linear/MatMul, LayerNorm, Softmax, GELU, Dropout).

Profiles are signed by Primust's GCP KMS key and verifiable offline against Primust's public key. A tampered profile is detectable the same way a tampered VPEC is detectable.

When primust verify sees a bound_committed_inference VPEC, it resolves the profile_id to the profile and checks whether the committed merkle_root is consistent with the declared model class and GPU class. This is the additional verification step that makes Bounded Inference stronger than Execution.
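A hedged sketch of that consistency check, with field names following the profile format below. The merkle_root check itself is reduced to a comment, since its exact form isn't specified on this page:

```python
# Hedged sketch of the extra verification step for a bound_committed_inference
# VPEC. The real primust verify logic is richer; in particular the committed
# merkle_root consistency check is only stubbed out as a comment here.

def check_bounded_inference(vpec, profile):
    """Return a list of problems; an empty list means the check passes."""
    problems = []
    if vpec["profile_id"] != profile["profile_id"]:
        problems.append("profile_id mismatch")
    if vpec["model_class"] != profile["model_class"]:
        problems.append("declared model class does not match profile")
    if vpec["gpu_class"] not in profile["calibrated_gpu_classes"]:
        problems.append(f"gpu class {vpec['gpu_class']!r} not calibrated")
    # Real verifier: additionally check the committed merkle_root for
    # consistency with the profile's thresholds for the declared GPU class.
    return problems

profile = {
    "profile_id": "primust/distilbert-class/v1.2.0",
    "model_class": "distilbert",
    "calibrated_gpu_classes": ["cpu", "a10g", "a100", "h100"],
}
vpec = {"profile_id": profile["profile_id"],
        "model_class": "distilbert", "gpu_class": "a100"}
assert check_bounded_inference(vpec, profile) == []
```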

Included at all paid tiers
Class profiles (standard governance models) are included at all paid tiers at no additional cost. Model-specific calibration for proprietary or unusual models is an Enterprise add-on.

Profile format

{
  "profile_id": "primust/distilbert-class/v1.2.0",
  "model_class": "distilbert",
  "architecture": "DistilBertForSequenceClassification",
  "calibrated_gpu_classes": ["cpu", "a10g", "a100", "h100"],
  "calibration_passes": 1000,
  "calibration_date": "2026-03-17",
  "safety_margin": 2.0,
  "operators": {
    "Linear":    { "p99": 2.86e-4, "p99_9": 5.49e-4, "p99_99": 7.93e-4 },
    "LayerNorm": { "p99": 3.19e-5, "p99_9": 3.91e-5, "p99_99": 4.23e-5 },
    "Softmax":   { "p99": 4.59e-6, "p99_9": 5.78e-6, "p99_99": 7.55e-6 },
    "GELU":      { "p99": 3.43e-5, "p99_9": 4.20e-5, "p99_99": 4.96e-5 },
    "Dropout":   { "p99": 4.88e-4, "p99_9": 7.32e-4, "p99_99": 9.52e-4 }
  },
  "profile_signature": "Ed25519:...",
  "signed_by": "primust.com/.well-known/primust-pubkey.pem"
}

The safety_margin: 2.0 means each published threshold is twice what was empirically measured. A DistilBERT-class profile covers both distilbert-base-uncased and distilbert-base-multilingual-cased: empirical calibration showed that GELU and LayerNorm drift transfers across the two models within a 1.25x ratio, so the 2x safety margin provides full coverage.
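The arithmetic, using the GELU p99 value from the profile above (a worked sketch, not SDK code):

```python
# Worked numbers for the 2x safety margin, using the profile's GELU p99.
safety_margin = 2.0
published_p99 = 3.43e-5                       # GELU p99 as published (margin applied)
measured_p99 = published_p99 / safety_margin  # empirically measured drift
transfer_ratio = 1.25                         # worst observed cross-model drift ratio

# Even the sibling model's worst-case drift stays under the published threshold:
sibling_worst_case = measured_p99 * transfer_ratio
assert sibling_worst_case < published_p99
```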

How SDK lookup works

  1. SDK extracts onnx_model_hash from the model at manifest registration or first p.record_check() call
  2. SDK calls GET /api/v1/registry/lookup?hash={onnx_model_hash}
  3. If profile found: downloads and caches locally; sets stage_type: bound_committed_inference
  4. If not found: sets stage_type: open_source_ml (Execution fallback); queues model_profile_missing advisory gap
  5. Cached profile is used for all subsequent inferences; no network call is made at inference time

# What the SDK emits when no profile is found:
# ⚠ model_profile_missing (Medium)
#   No Primust operator-bound profile found for model sha256:a4f9...
#   This check is issuing at Execution level.
#   To upgrade to Bounded Inference, request profile calibration at
#   app.primust.com/policy/registry
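The fallback decision in steps 3 and 4 can be sketched as a pure function. Only the stage_type values and the gap identifier come from this page; the function and field names are illustrative:

```python
# Sketch of the SDK's lookup fallback. Only the stage_type values and the
# model_profile_missing gap name are from the docs; the rest is illustrative.

def resolve_stage(lookup_response):
    """Map a GET /api/v1/registry/lookup result to a stage type and gaps."""
    if lookup_response.get("found"):
        # Profile exists: cache it and issue at Bounded Inference level.
        return {"stage_type": "bound_committed_inference", "gaps": []}
    # No profile: fall back to Execution and flag an advisory gap.
    return {"stage_type": "open_source_ml",
            "gaps": ["model_profile_missing"]}
```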

Supported models

The initial registry ships with a DistilBERT-class profile covering the most common governance classifiers.

| Category | Models | Profile |
| --- | --- | --- |
| PII detection | distilbert-base-uncased, bert-base-NER, dbmdz/bert-large-cased-finetuned-conll03 | primust/distilbert-class/v1.2.0 |
| Toxicity | unitary/toxic-bert, martin-ha/toxic-comment-model, s-nlp/roberta_toxicity_classifier | primust/distilbert-class/v1.2.0 |
| Prompt injection | deepset/deberta-v3-base-injection, protectai/deberta-v3-base-prompt-injection | primust/distilbert-class/v1.2.0 |
| Bias detection | d4data/bias-detection-model, valurank/distilroberta-bias | primust/distilbert-class/v1.2.0 |
| Content moderation | facebook/roberta-hate-speech-dynabench-r4-target, cardiffnlp/twitter-roberta-base-offensive | primust/distilbert-class/v1.2.0 |

Primust does NOT host these models. You download them from HuggingFace as usual. The registry contains only calibration profiles: metadata about how the models behave across hardware classes, not the model weights themselves.

Request calibration for your model

If your model isn't in the registry, submit a calibration request. Primust calibrates the drift profile, publishes it, and the model_profile_missing gap auto-resolves.

Via dashboard

  1. Go to app.primust.com/policy/registry
  2. Models detected in your manifests appear with status ✅ or ⚠
  3. Click "Request calibration" for models showing ⚠
  4. Optionally provide HuggingFace model ID to speed up identification
  5. Select desired GPU classes
  6. You'll receive an email when the profile is published (SLA: 7 days standard, 48h Enterprise)

Via API

curl -X POST https://api.primust.com/api/v1/registry/calibration-requests \
  -H "Authorization: Bearer pk_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "onnx_model_hash": "sha256:...",
    "huggingface_model_id": "my-org/my-classifier",
    "desired_gpu_classes": ["cpu", "a10g", "a100"]
  }'

# Response:
{
  "request_id": "cal_req_abc123",
  "status": "queued",
  "estimated_completion": "7d",
  "notify_email": "admin@your-org.com"
}

Verify a profile

Profiles are offline-verifiable against Primust's public key, the same way VPECs are.

# Download the profile
curl https://api.primust.com/api/v1/registry/profiles?model_hash=class_profile_distilbert \
  -H "Authorization: Bearer pk_sb_xxx" > profile.json

# Verify the profile signature offline
curl https://primust.com/.well-known/primust-pubkey.pem > primust-pubkey.pem
primust verify-profile profile.json --trust-root primust-pubkey.pem

# Output:
# ✓ Profile signature valid (Ed25519, primust.com)
# Profile: primust/distilbert-class/v1.2.0
# Calibration: 2026-03-17 · 1000 passes · A10G + A100 + H100 + CPU
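For reference, one plausible way the signed payload could be reconstructed before checking the Ed25519 signature. The canonicalization scheme here (profile JSON with profile_signature removed, sorted keys, compact separators) is an assumption; the actual primust verify-profile tool may canonicalize differently:

```python
# ASSUMPTION: the Ed25519 signature covers the profile JSON with the
# profile_signature field removed, serialized canonically. The real
# primust verify-profile canonicalization may differ.
import hashlib
import json

def signed_payload(profile: dict) -> bytes:
    """Canonical bytes the signature would be checked against."""
    body = {k: v for k, v in profile.items() if k != "profile_signature"}
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

profile = {
    "profile_id": "primust/distilbert-class/v1.2.0",
    "safety_margin": 2.0,
    "profile_signature": "Ed25519:...",
}
# A verifier would check the Ed25519 signature over these bytes
# (or over their digest, depending on the scheme).
digest = hashlib.sha256(signed_payload(profile)).hexdigest()
```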

API reference

List profiles

GET /api/v1/registry/profiles
Authorization: Bearer pk_live_xxx

# Response:
{
  "profiles": [
    {
      "profile_id": "primust/distilbert-class/v1.2.0",
      "model_class": "distilbert",
      "architecture": "DistilBertForSequenceClassification",
      "calibrated_gpu_classes": ["cpu", "a10g", "a100", "h100"],
      "calibration_date": "2026-03-17",
      "safety_margin": 2.0,
      "profile_signature": "Ed25519:...",
      "signed_by": "primust.com/.well-known/primust-pubkey.pem"
    }
  ]
}
# Note: operators{} thresholds omitted in list view. Include ?model_hash= to get full detail.

Lookup by model hash

GET /api/v1/registry/lookup?hash={onnx_model_hash}
Authorization: Bearer pk_live_xxx

# 200 OK (profile found):
{
  "found": true,
  "profile_id": "primust/distilbert-class/v1.2.0",
  "model_class": "distilbert",
  "architecture": "DistilBertForSequenceClassification",
  "profile_signature": "Ed25519:...",
  "signed_by": "primust.com/.well-known/primust-pubkey.pem"
}

# 404 Not Found:
{ "found": false }

Request calibration

POST /api/v1/registry/calibration-requests
Authorization: Bearer pk_live_xxx
Content-Type: application/json

{
  "onnx_model_hash": "sha256:...",           // required
  "model_description": "Internal bias classifier for loan decisions",
  "huggingface_model_id": "my-org/my-model", // optional — helps Primust identify model
  "desired_gpu_classes": ["cpu", "a10g"]     // which GPU classes to calibrate for
}

# Response:
{
  "request_id": "cal_req_abc123",
  "status": "queued",
  "estimated_completion": "7d",    // "48h" for Enterprise tier
  "notify_email": "admin@your-org.com"
}