Security Model
What the chain trusts, what it verifies, and what happens if any single off-chain component lies.
In one paragraph
- Alpha trust scope is small and explicit: 4 validators, 1 full prover, 2 lite provers, 1 enclave.
- Lite provers are an MVP trade-off: signature-only verification provides no computational integrity. Phased out by mainnet.
- Provers are economically accountable: staking, capacity claims, deadlines, and latency classes — misbehavior costs more than honest participation.
- DoS defense is layered: pre-dispatch filtering of invalid submissions, conservative weight modelling on KZG verification, strict size bounds, priority routing for prover extrinsics.
Trust assumptions at alpha
Alpha launches with explicit centralization trade-offs. They’re acknowledged, bounded, and scheduled for removal through progressive decentralization.
Validators (4)
Standard BFT assumptions. Safety requires < 1/3 byzantine (< 2 of 4). Liveness requires ≥ 2/3 (≥ 3 of 4). The small set means individual validator compromise has outsized impact on liveness in particular.
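The thresholds above follow from standard BFT arithmetic. A minimal sketch, with function names chosen for illustration (not chain APIs):

```rust
/// Maximum byzantine validators tolerated for safety: f < n/3.
fn max_byzantine(n: u32) -> u32 {
    (n - 1) / 3
}

/// Minimum validators required for liveness: ceil(2n/3).
fn liveness_quorum(n: u32) -> u32 {
    (2 * n + 2) / 3
}

fn main() {
    let n = 4; // the alpha validator set
    println!(
        "n = {n}: tolerates {} byzantine, needs {} for liveness",
        max_byzantine(n),
        liveness_quorum(n)
    );
}
```

For n = 4 this yields tolerance of 1 byzantine validator and a liveness quorum of 3, matching the figures above.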
Full prover (1, Theseus)
The only source of cryptographic inference integrity. If compromised, false proofs could pass KZG verification — mitigated by the on-chain verification path being deterministic and auditable by any full node. Capacity for independent re-verification grows with Beta open registration.
Lite provers (2, external)
Signature-only verification provides no computational integrity guarantee. A lite prover can return arbitrary outputs with a valid signature. This is an explicit MVP trade-off for model breadth and throughput while full verification rolls out. The credential’s recentRuns.grade field surfaces this to verifiers.
Blessed enclave (1, Theseus)
Single point of failure for credential security. If the TEE is compromised, all agent credentials stored on-chain could be decrypted. Mitigation: the enclave is Theseus-operated with TEE attestation, and the roadmap moves to multi-party attestation in Beta and a decentralized enclave network in mainnet.
Prover accountability
In a system where off-chain provers do the actual AI compute, the chain needs mechanisms to ensure honest behavior — deliver on time, report capacity truthfully, never submit false proofs. The target design enforces this through economic pressure.
Staking
Registered provers post a bond. Misbehavior triggers slashing.
Capacity claims
Provers declare hardware (VRAM, RAM, supported models). Failure to deliver assigned jobs signals misreported capacity or operational unreliability — penalties follow.
Deadlines
Each inference job has a block-based deadline derived from its latency class. Missing the deadline triggers slashing.
Latency classes
Jobs grouped into RT (real-time), Interactive, and Bulk classes with differentiated deadlines and fees.
At alpha with a small prover set, these mechanisms operate in a simplified form. Full staking and slashing activate with open prover registration in Beta.
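The deadline mechanics can be sketched as follows. The class names come from the text; the concrete block counts and the `deadline_block` helper are illustrative assumptions, not the chain's actual values:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum LatencyClass {
    Rt,          // real-time
    Interactive,
    Bulk,
}

/// Blocks allowed between job assignment and proof submission.
/// Concrete counts here are assumed for illustration (6 s blocks).
fn deadline_blocks(class: LatencyClass) -> u32 {
    match class {
        LatencyClass::Rt => 2,           // ~12 s (assumed)
        LatencyClass::Interactive => 10, // ~1 min (assumed)
        LatencyClass::Bulk => 100,       // ~10 min (assumed)
    }
}

/// Absolute deadline: the block by which the proof must land,
/// after which slashing is triggered.
fn deadline_block(assigned_at: u32, class: LatencyClass) -> u32 {
    assigned_at + deadline_blocks(class)
}

fn main() {
    let due = deadline_block(1_000, LatencyClass::Interactive);
    println!("interactive job assigned at block 1000 is due by block {due}");
}
```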
DoS and spam mitigations
submit_inference_result triggers KZG proof verification. Verification is constant-time but still expensive relative to a no-op transaction, which creates an asymmetry: a malicious actor could submit many invalid proofs that each consume validator CPU before being rejected. The chain defends against this in layers.
Pre-dispatch filtering
validate() checks pending job existence, metadata consistency, and encoded size bounds before dispatching the expensive verification. Invalid submissions are rejected cheaply.
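A sketch of that filter, with types, bounds, and the job lookup as illustrative assumptions: the point is that every check runs in cheap linear passes, with no curve arithmetic, before KZG verification is dispatched.

```rust
// Assumed bounds for illustration; the chain's actual limits may differ.
const MAX_OUTPUT_BYTES: usize = 64 * 1024;
const MAX_PROOF_BYTES: usize = 4 * 1024;

struct InferenceResult {
    job_id: u64,
    output: Vec<u8>,
    proof: Vec<u8>,
}

#[derive(Debug, PartialEq)]
enum ValidityError {
    UnknownJob,
    OversizedOutput,
    OversizedProof,
}

/// Cheap structural checks only: no pairings, no KZG work.
fn validate(result: &InferenceResult, pending_jobs: &[u64]) -> Result<(), ValidityError> {
    if !pending_jobs.contains(&result.job_id) {
        return Err(ValidityError::UnknownJob);
    }
    if result.output.len() > MAX_OUTPUT_BYTES {
        return Err(ValidityError::OversizedOutput);
    }
    if result.proof.len() > MAX_PROOF_BYTES {
        return Err(ValidityError::OversizedProof);
    }
    Ok(()) // only now would the expensive verification be dispatched
}

fn main() {
    let pending = vec![42];
    let r = InferenceResult { job_id: 42, output: vec![0; 128], proof: vec![0; 48] };
    println!("pre-dispatch check: {:?}", validate(&r, &pending));
}
```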
Conservative weight model
KZG verification cost (W_kzg) is modelled as a large, fixed constant per proof — deliberately overestimated relative to measured performance. A “full” block of prover extrinsics still executes within the target block time.
Strict size bounds
All dynamically sized fields in InferenceResult (output, proof) are statically bounded, preventing large-allocation attacks.
Priority mechanism
Proof submissions return high-priority ValidTransactions so they’re included ahead of regular traffic, ensuring timely verification even under congestion.
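A minimal sketch of the priority ordering, assuming the transaction pool sorts by descending priority; the constant and struct are illustrative stand-ins, not the chain's actual types:

```rust
// Assumed boost value; real chains tune this against fee-based priority.
const PROVER_PRIORITY_BOOST: u64 = 1 << 32;

struct ValidTransaction {
    priority: u64,
}

fn prover_submission() -> ValidTransaction {
    ValidTransaction { priority: PROVER_PRIORITY_BOOST }
}

fn regular_transfer(tip: u64) -> ValidTransaction {
    ValidTransaction { priority: tip }
}

fn main() {
    let mut pool = vec![regular_transfer(10), prover_submission(), regular_transfer(500)];
    // Pool orders by descending priority: the proof submission goes first.
    pool.sort_by(|a, b| b.priority.cmp(&a.priority));
    println!("head of pool has priority {}", pool[0].priority);
}
```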
KZG verification cost model
A key property of KZG commitments is that verification is O(1) in the size of the committed data. Regardless of how large the model or how many tokens were generated, verifying a single proof requires a constant number of elliptic curve pairings. On BLS12-381 this takes a few milliseconds on commodity validator hardware — well within 6-second block times even when verifying multiple proofs per block.
The Substrate weight model bounds maximum verification density per block:
W_submit_inference_result ≈ W_base + n_proofs · W_kzg + W_storage

where:
- n_proofs is bounded by a per-extrinsic maximum
- W_kzg is derived from WCET-style benchmarking on minimum validator hardware
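A numeric sketch of this weight model. All constants are assumed placeholder values (in picoseconds, Substrate's weight unit), chosen to be deliberately conservative as described above:

```rust
const W_BASE: u64 = 50_000_000;     // fixed extrinsic overhead (assumed)
const W_KZG: u64 = 5_000_000_000;   // ~5 ms per proof, overestimated (assumed)
const W_STORAGE: u64 = 100_000_000; // storage reads/writes (assumed)
const MAX_PROOFS_PER_EXTRINSIC: u64 = 4; // per-extrinsic bound (assumed)

fn submit_inference_result_weight(n_proofs: u64) -> u64 {
    let n = n_proofs.min(MAX_PROOFS_PER_EXTRINSIC); // enforce the bound on n_proofs
    W_BASE + n * W_KZG + W_STORAGE
}

fn main() {
    // ~2 s of the 6 s block is execution budget: 2e12 ps in weight terms.
    let block_budget: u64 = 2_000_000_000_000;
    let w = submit_inference_result_weight(MAX_PROOFS_PER_EXTRINSIC);
    println!("one full extrinsic: {w} of {block_budget} budget");
}
```

Even with the overestimated W_kzg, a maximally loaded extrinsic consumes a small fraction of the block's execution budget, which is what lets a "full" block of prover extrinsics stay within the target block time.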