Practical Applications: Proof of Network
(Part 2 of 3 — same chapter in the PDF; split for the web site.)
• x402 payment flows: Delula uses credit-based billing, not per-request HTTP 402 settlement. The x402 integration exists in the Scrypted API but has not been stress-tested under consumer traffic.

• Auction-based routing: Provider selection is configuration-driven, not auction-driven. The economic model that ranks providers by bid, quality, and reputation is designed but not yet exercised at consumer scale.

These gaps define the bridge to Sidelines.

10.2 Sidelines: prediction markets as network validation

Where Delula exercises the orchestration and fulfillment stack, Sidelines exercises the verification and training stack. Sidelines is a prediction-market application in which humans and AI agents compete on directional calls over cryptocurrency price movements. Every prediction is verified through the same CRPC protocol that secures the Scrypted Network's LoRA training pipeline, and every market resolution provides ground truth that no centralized oracle controls.

Strategic Takeaway: Sidelines turns CRPC verification from a whitepaper protocol into a live game mechanic. Three nodes running a thousand simulations will converge within an empirically predictable range; malicious or lazy actors will not. The market itself is the oracle.

10.2.1 Market structure

Sidelines presents two prediction horizons:

• 15-minute windows: short-term directional calls on cryptocurrency pairs—will the price be higher or lower at the close of the window? Fast, high-frequency, and suited to reactive agents and engaged human traders.

• 12-hour windows: longer-term macro calls that require reasoning about trends, volume patterns, and cross-chain correlation. Suited to models with broader context windows and to humans with domain expertise.

Both humans and agents participate on the same leaderboard. There is no separate "AI league": a LoRA fine-tuned on order-flow data competes directly against a human trader reading charts, and vice versa.
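The 15-minute mechanic described above reduces to a directional comparison at the window close. A minimal sketch, with an illustrative function name and prices (this is not the Sidelines API):

```python
def resolve_window(open_price: float, close_price: float) -> str:
    """Resolve a directional window: was the price higher or lower at the close?"""
    if close_price > open_price:
        return "up"
    if close_price < open_price:
        return "down"
    return "flat"  # unchanged price; tie-handling policy would be market-specific

# Illustrative 15-minute BTC/USD window: open at t0, close at t0 + 15 min.
print(resolve_window(open_price=97_200.0, close_price=97_450.0))  # -> up
```

The same comparison resolves a 12-hour window; only the horizon changes.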
Leaderboard scoring tracks accuracy, consistency, and calibration over time, producing a reputation signal that maps naturally to the network's reputation registry (§11.2).

The dual-horizon design is deliberate. The 15-minute window produces high-frequency verification events—hundreds of market resolutions per day—generating rapid statistical feedback on model quality. The 12-hour window tests a fundamentally different capability: macro reasoning, cross-chain correlation, and resistance to noise. Together they stress-test both the fast-inference and deep-reasoning paths of the Scrypted orchestration engine.

10.2.2 LoRA fine-tuning on blockchain data

Sidelines' predictive models are LoRA adapters fine-tuned on blockchain trading data sourced from Allium [21], a cross-chain analytics platform providing normalized on-chain transaction, DeFi, and price data across major L1 and L2 networks. The training pipeline exercises the output-based verification path described in Chapter 12, §12.3:
- Data specification. A training run specifies the data window (e.g., 90 days of 1-minute OHLCV candles for a token pair), feature engineering parameters, and target variable (directional movement at the prediction horizon).
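A specification of this shape can be written as a small canonical record that every node hashes identically, making "trained on the same specification" checkable. The field names below are assumptions for illustration, not the Scrypted schema:

```python
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class TrainingSpec:
    pair: str = "BTC/USD"                               # token pair
    window_days: int = 90                               # 90 days of history
    candle: str = "1m"                                  # 1-minute OHLCV candles
    features: tuple = ("log_return", "volume_zscore")   # feature-engineering params
    horizon_minutes: int = 15                           # prediction horizon
    target: str = "direction"                           # directional movement at horizon

def spec_id(spec: TrainingSpec) -> str:
    """Stable identifier: canonical JSON, then SHA-256, so all nodes agree."""
    canonical = json.dumps(asdict(spec), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

print(spec_id(TrainingSpec())[:16])
```

Any change to the window, features, or horizon yields a different identifier, so nodes cannot silently train against divergent specifications.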
- Milestone-driven training. The network defines milestones—measurable output targets such as directional accuracy on a held-out evaluation window or calibration score above a threshold. Nodes train independently. The network does not prescribe how nodes reach the milestone; it prescribes what the milestone is. Results speak for themselves.
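Because a milestone prescribes only outputs, acceptance reduces to threshold comparisons on held-out metrics. A sketch with illustrative thresholds (not network-published values):

```python
def milestone_met(directional_accuracy: float,
                  calibration_error: float,
                  min_accuracy: float = 0.58,
                  max_calibration_error: float = 0.05) -> bool:
    """Output-only acceptance: how the node trained is irrelevant;
    only whether its held-out metrics clear the published targets matters."""
    return (directional_accuracy >= min_accuracy
            and calibration_error <= max_calibration_error)

# A node evaluates its trained adapter on the held-out window, then checks:
print(milestone_met(directional_accuracy=0.61, calibration_error=0.03))  # -> True
```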
- Output verification via CRPC. Each node runs its trained model against a standardized evaluation set, producing N prediction simulations (e.g., 1,000 runs). CRPC commit–reveal operates on these output distributions, not on weight tensors. Honest nodes that trained on the same data to the same milestone produce outputs that converge within an empirically predictable ε. Malicious or lazy nodes—those that skipped training, submitted garbage weights, or trained on corrupted data—produce output distributions that diverge by orders of magnitude beyond honest-node variance.
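The convergence test above can be illustrated with mean disagreement over the N simulated calls as the distance d. The noise rates and ε below are invented for the demonstration; in practice ε_honest would be calibrated empirically from honest-node runs:

```python
import random

def disagreement(phi_a: list, phi_b: list) -> float:
    """d(Φa, Φb): fraction of the N simulations whose directional calls differ."""
    return sum(a != b for a, b in zip(phi_a, phi_b)) / len(phi_a)

rng = random.Random(0)
signal = [int(rng.random() < 0.6) for _ in range(1000)]    # structure learned from the data
honest_1 = [s ^ int(rng.random() < 0.05) for s in signal]  # small independent run-to-run noise
honest_2 = [s ^ int(rng.random() < 0.05) for s in signal]
lazy = [rng.randint(0, 1) for _ in range(1000)]            # skipped training: uncorrelated coin flips

EPSILON_HONEST = 0.2  # illustrative threshold sitting between the two regimes
print(disagreement(honest_1, honest_2) < EPSILON_HONEST)  # honest pair converges
print(disagreement(honest_1, lazy) > EPSILON_HONEST)      # lazy node diverges
```

Here the honest pair disagrees on roughly 10% of calls while the untrained node sits near 50%, so the threshold has a wide margin on both sides.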
- Inference deployment. Verified LoRA adapters are deployed as Scrypted ingredients, callable through the standard recipe invocation path. Each prediction request is a job; each job’s output (directional call + confidence) is a verifiable commitment.
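A per-job "verifiable commitment" can be sketched as a plain hash commit–reveal over the directional call, the confidence, and a secret nonce. The payload encoding here is an assumption, not the CRPC wire format:

```python
import hashlib
import secrets

def commit(direction: str, confidence: float, nonce: bytes) -> str:
    """Commit phase: publish only the hash; the prediction stays hidden."""
    payload = f"{direction}|{confidence:.4f}|".encode() + nonce
    return hashlib.sha256(payload).hexdigest()

def verify_reveal(commitment: str, direction: str, confidence: float, nonce: bytes) -> bool:
    """Reveal phase: anyone recomputes the hash and checks that it matches."""
    return commit(direction, confidence, nonce) == commitment

nonce = secrets.token_bytes(16)
c = commit("up", 0.73, nonce)
print(verify_reveal(c, "up", 0.73, nonce))    # honest reveal matches
print(verify_reveal(c, "down", 0.73, nonce))  # altered call is rejected
```

The nonce prevents a verifier from brute-forcing the small space of possible calls before the reveal.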
- Market resolution. When the prediction window closes, on-chain price feeds resolve the market. Model accuracy feeds back into both leaderboard scoring and, in the network vision, on-chain reputation via ERC-8004 feedback entries.

The milestone-based approach has a practical advantage beyond verification: it creates a natural improvement cadence. As the network publishes new milestones—tighter accuracy targets, new token pairs, longer prediction horizons—nodes must retrain to remain competitive. The leaderboard reflects who actually improved, not who claims to have. Stale models drop in ranking organically; the network self-selects for quality without requiring a central authority to judge training runs.

10.2.3 Eliminating centralized oracles

A conventional prediction market requires a trusted oracle to resolve outcomes. Sidelines eliminates this dependency through two mechanisms:

On-chain price as ground truth. For directional predictions over cryptocurrency pairs, the resolution data is the blockchain itself. Price at t0 and price at t0 + Δt are observable by any node with chain access. No centralized oracle is needed to determine whether BTC/USD moved up or down in a 15-minute window—the answer is in the block history. The prediction market resolves against the same data infrastructure it predicts, creating a closed loop.

CRPC as decentralized output verification. For training verification—where the question is not "what did the market do?" but "did this node actually train a good model?"—CRPC provides the answer without a central judge. Three nodes independently training on the same data specification and then each running 1,000 inference simulations will produce output distributions that converge within a predictable statistical range. This convergence is the verification signal.

The statistical argument is concrete. Let Φ_i be the vector of 1,000 directional predictions from node i's trained model on the standardized evaluation set.
For honest nodes i, j that trained on the same specification:

    d(Φ_i, Φ_j) < ε_honest    (empirically calibratable)

For a malicious node m that skipped training or submitted garbage:
    d(Φ_i, Φ_m) ≫ ε_honest

The gap between honest-node variance and malicious-node divergence is not marginal—it is typically orders of magnitude, because a model that was not trained on the relevant data has no basis for producing a correlated output distribution. This makes ε calibration practical rather than theoretical: a few test runs establish the honest-node range, and the detection threshold sits comfortably between honest variance and malicious divergence. No single node is trusted. No centralized oracle judges quality. The committee's independent convergence is the proof.

10.2.4 Prediction trees as CRPC proofs

Sidelines uses structured prediction trees: each node represents a directional call at a specific time horizon, and branches encode conditional predictions ("if BTC is above $X at t1, predict ETH direction at t2"). The underlying time-series forecasting draws on foundation models such as Sundial [22] (THUML; ICML 2025 Oral), a generative forecasting family pre-trained on one trillion time points via a flow-matching loss and capable of zero-shot probabilistic prediction across domains. These prediction trees serve as CRPC proof artifacts for the Scrypted Network testnet:

• Commitment phase. Before a prediction window opens, participating agents commit hashes of their prediction trees. The tree structure (not just a single directional call) provides richer comparison material for pairwise verification—and makes it harder to game, because the agent must commit to a distribution of conditional beliefs, not a single binary bet.

• Reveal and comparison. After commitment, agents reveal their trees. CRPC pairwise comparison operates on the tree structure: agreement on trunk predictions (high-confidence, short-horizon) weighs more heavily than agreement on branch predictions (conditional, longer-horizon). The distance metric is weighted by tree depth, so shallow consensus matters more than deep-branch divergence.

• Resolution and reputation.
Market resolution provides ground truth for the trunk predictions. Agents whose revealed trees diverge significantly from both peer consensus and market outcome accumulate negative reputation signals. Agents whose trees converge with peers and align with outcomes earn the strongest reputation boost—verified skill, not just verified honesty.

This gives the Scrypted Network its first closed-loop CRPC deployment: training produces LoRA adapters (verified by output-distribution comparison), inference produces prediction trees (verified by structural comparison), and market resolution provides ground truth that the verification chain can measure itself against—no centralized oracle required.

10.2.5 Community and open science

Sidelines is designed to be a public arena for model quality, not a closed competition:

• Hugging Face model publishing. Top-performing LoRA adapters from the Sidelines leaderboard will be published on Hugging Face with full training provenance: dataset specification, milestone definition, CRPC verification receipts, and leaderboard history. This makes Sidelines a proving ground whose best outputs become public goods—and demonstrates that verified decentralized training produces models worth using.
• Kaggle community competitions. Sidelines will host community model competitions through Kaggle’s community competition feature, inviting external researchers and teams to submit prediction models against Sidelines’ evaluation framework. Winning models can be onboarded as Scrypted ingredients, creating a pipeline from open competition to production deployment.

10.2.6 Network primitives exercised

| Sidelines feature | Scrypted primitive exercised |
| --- | --- |
| LoRA fine-tuning pipeline | CRPC output-based training verification (§12.3) |
| Prediction-tree commitments | CRPC commit–reveal (§12.2) |
| Simulation-based ε | Decentralized verification without a centralized oracle |
| Milestone-driven training | Quality via measurable results, not weight inspection |
| Human + agent leaderboard | Reputation registry, mixed-participant scoring (§11.2) |
| Allium data ingestion | External data as ingredient input (Ch. 5) |
| Staking on predictions | Crypto-economic security, slashing (§12.5) |
| Multi-horizon market structure | Attention auctions with time-varying value (§3.5) |
| Trained LoRA as ingredient | Agent-as-IP, LoRA as deployable capability (§2.7) |
| On-chain price resolution | Ground truth grounding CRPC consensus |

Where Delula’s gaps include cryptographic verification and auction-based routing, Sidelines is designed to close them: CRPC is exercised on both training outputs and inference predictions; staking creates real economic consequences for dishonest attestation; the leaderboard produces reputation data that feeds the network’s discovery and routing mechanisms; and the entire verification chain resolves against publicly observable market data rather than a trusted third party.

10.3 Chibi Clash: world models and game AI

Chibi Clash is a Web3 auto-battler game that joined the Scrypted family to integrate agentic AI into its game mechanics.
Where Delula exercises content creation and Sidelines exercises prediction markets, Chibi Clash exercises the world-model thesis: that training AI agents inside game simulations—with clear rules, measurable outcomes, and economic stakes—produces transferable intelligence applicable beyond games.

10.3.1 The world-model thesis

The founding team’s core belief is that games showed the way: the challenges of building an agentic economy are inherently similar to those faced in building the first generation of MMORPGs. World models—whether JEPA-based physical analogs or neuro-symbolic systems like Stratus X1—are the key to unlocking agentic intelligence. Game environments provide the training ground:

• Clear reward signals. Win/loss, damage dealt, resources gathered—game mechanics produce unambiguous feedback that RL and fine-tuning pipelines can consume without human labeling.
Source: transcribed from the compiled Scrypted Network Design whitepaper PDF for web reading. Layout, figures, and pagination may differ from the PDF.