Reinforcement Learning: The Paradigm Shift of Decentralized AI
Author: 0xjacobzhao | https://linktr.ee/0xjacobzhao

This independent research report is supported by IOSG Ventures. The research and writing process was inspired by Sam Lehman (Pantera Capital)'s work on reinforcement learning. Thanks to Ben Fielding (Gensyn.ai), Gao Yuan (Gradient), Samuel Dare & Erfan Miahi (Covenant AI), Shashank Yadav (Fraction AI), and Chao Wang for their valuable suggestions on this article. This article strives for objectivity and accuracy, but some viewpoints involve subjective judgment and may contain biases; we appreciate readers' understanding.

Artificial intelligence is shifting from pattern-based statistical learning toward structured reasoning systems, with post-training, especially reinforcement learning, becoming central to capability scaling. DeepSeek-R1 signals a paradigm shift: reinforcement learning now demonstrably improves reasoning depth and complex decision-making, evolving from a mere alignment tool into a continuous intelligence-enhancement pathway. In parallel, Web3 is reshaping AI production via decentralized compute and crypto incentives, whose verifiability and coordination align naturally with reinforcement learning's needs. This report examines AI training paradigms and reinforcement learning fundamentals, highlights the structural advantages of "Reinforcement Learning × Web3," and analyzes Prime Intellect, Gensyn, Nous Research, Gradient, Grail, and Fraction AI.

I. Three Stages of AI Training

Modern LLM training spans three stages: pre-training, supervised fine-tuning (SFT), and post-training/reinforcement learning, corresponding to building a world model, injecting task capabilities, and shaping reasoning and values. Their computational and verification characteristics determine how compatible each stage is with decentralization.

Pre-training: establishes the core statistical and multimodal foundations via massive self-supervised learning. It consumes 80–95% of total cost and requires tightly synchronized, homogeneous GPU clusters and high-bandwidth data access, making it inherently centralized.

Supervised Fine-tuning (SFT): adds task and instruction capabilities with smaller datasets and lower cost (5–15%), often using PEFT methods such as LoRA or Q-LoRA, but still depends on gradient synchronization, limiting decentralization.

Post-training: consists of multiple iterative stages that shape a model's reasoning ability, values, and safety boundaries. It includes RL-based approaches (e.g., RLHF, RLAIF, GRPO), non-RL preference optimization (e.g., DPO), and process reward models (PRMs). With lower data and cost requirements (around 5–10%), computation focuses on rollouts and policy updates. Its native support for asynchronous, distributed execution, often without requiring full model weights, makes post-training the phase best suited for Web3-based decentralized training networks when combined with verifiable computation and on-chain incentives.
II. Reinforcement Learning Technology Landscape

2.1 System Architecture of Reinforcement Learning

Reinforcement learning enables models to improve decision-making through a feedback loop of environment interaction, reward signals, and policy updates. Structurally, an RL system consists of three core components: the policy network, rollout workers for experience sampling, and the learner for policy optimization. The policy generates trajectories through interaction with the environment, while the learner updates the policy based on rewards, forming a continuous iterative learning process.

Policy Network (Policy): Generates actions from environmental states and is the decision-making core of the system. It requires centralized backpropagation to maintain consistency during training; during inference, it can be distributed across nodes for parallel operation.

Experience Sampling (Rollout): Nodes interact with the environment according to the policy, generating state–action–reward trajectories. This process is highly parallel, requires very little communication, is insensitive to hardware differences, and is the component best suited to scaling out in a decentralized setting.

Learner: Aggregates all rollout trajectories and executes policy-gradient updates. It has the highest compute and bandwidth requirements of the three components, so it is usually kept centralized or lightly centralized to ensure convergence stability.
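To make this three-component structure concrete, the following minimal sketch (a toy bandit-style environment and assumed function names, not any project's actual code) shows how rollout workers sample trajectories from a policy snapshot while a single learner applies a policy-gradient update:

```python
# Minimal policy / rollout / learner sketch (illustrative only; the toy environment
# and function names are assumptions, not code from any project discussed here).
import numpy as np

N_ACTIONS, LR = 4, 0.1
rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def env_reward(action):
    # Toy environment: action 2 is best on average.
    return 1.0 if action == 2 else 0.1 * rng.random()

def rollout(theta, n_episodes=32):
    """Rollout worker: samples trajectories from the current policy snapshot.
    Communication-light and embarrassingly parallel across nodes."""
    probs = softmax(theta)
    trajs = []
    for _ in range(n_episodes):
        a = rng.choice(N_ACTIONS, p=probs)
        trajs.append((a, env_reward(a)))
    return trajs

def learner_update(theta, trajs):
    """Learner: aggregates trajectories and applies a REINFORCE-style update.
    Compute- and bandwidth-heavy, so typically kept (lightly) centralized."""
    probs = softmax(theta)
    baseline = np.mean([r for _, r in trajs])
    grad = np.zeros_like(theta)
    for a, r in trajs:
        one_hot = np.eye(N_ACTIONS)[a]
        grad += (r - baseline) * (one_hot - probs)   # advantage * grad log pi(a)
    return theta + LR * grad / len(trajs)

theta = np.zeros(N_ACTIONS)          # policy parameters
for step in range(200):              # iterative policy -> rollout -> learn loop
    trajectories = rollout(theta)    # in a decentralized setup, fanned out to many nodes
    theta = learner_update(theta, trajectories)
print("final action probabilities:", softmax(theta).round(3))
```

The split mirrors the text: rollout() can be replicated freely across heterogeneous nodes, while learner_update() is the single point that needs the aggregated data.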
2.2 Reinforcement Learning Stage Framework

Reinforcement learning can usually be divided into five stages; the overall process is as follows:
Data Generation Stage (Policy Exploration): Given a prompt, the policy samples multiple reasoning chains or trajectories, supplying the candidates for preference evaluation and reward modeling and defining the scope of policy exploration.

Preference Feedback Stage (RLHF / RLAIF):
RLHF (Reinforcement Learning from Human Feedback): trains a reward model from human preferences and then uses RL (typically PPO) to optimize the policy against that reward signal.
RLAIF (Reinforcement Learning from AI Feedback): replaces humans with AI judges or constitutional rules, cutting costs and scaling alignment; it is now the dominant approach at Anthropic, OpenAI, and DeepSeek.

Reward Modeling Stage: Learns to map outputs to rewards based on preference pairs. The RM teaches the model "what the correct answer is," while the PRM teaches the model "how to reason correctly."
RM (Reward Model): evaluates the quality of the final answer, scoring only the output.
PRM (Process Reward Model): scores step-by-step reasoning, effectively training the model's reasoning process (e.g., in o1 and DeepSeek-R1).

Reward Verification (RLVR / Reward Verifiability): A reward-verification layer constrains reward signals to be derived from reproducible rules, ground-truth facts, or consensus mechanisms. This reduces reward hacking and systemic bias, and improves auditability and robustness in open, distributed training environments.

Policy Optimization Stage: Updates the policy parameters $\theta$ under the guidance of the reward model's signals to obtain a policy $\pi_{\theta'}$ with stronger reasoning capabilities, higher safety, and more stable behavior. Mainstream optimization methods include:
PPO (Proximal Policy Optimization): the standard RLHF optimizer, valued for stability but limited by slow convergence on complex reasoning.
GRPO (Group Relative Policy Optimization): introduced by DeepSeek-R1, optimizes policies using group-level advantage estimates rather than simple ranking, preserving value magnitude and enabling more stable reasoning-chain optimization (see the sketch after this list).
DPO (Direct Preference Optimization): bypasses RL by optimizing directly on preference pairs; cheap and stable for alignment, but ineffective at improving reasoning.

New Policy Deployment Stage: the updated model shows stronger System-2 reasoning, better preference alignment, fewer hallucinations, and higher safety, and continues to improve through iterative feedback loops.
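As a rough illustration of the group-relative idea behind GRPO (a simplified sketch with assumed names, shapes, and clip value, not DeepSeek's implementation): for each prompt, a group of responses is sampled, rewards are standardized within the group to form advantages (so no learned critic is needed), and each response is credited with its advantage in a clipped policy-gradient surrogate.

```python
# Simplified GRPO-style group-relative advantage computation (illustrative sketch;
# names and shapes are assumptions, not DeepSeek-R1's actual code).
import numpy as np

def group_relative_advantages(rewards, eps=1e-6):
    """rewards: array of shape (G,) for G responses sampled from one prompt.
    Each response's advantage is its reward standardized within the group,
    replacing a learned critic/value network."""
    rewards = np.asarray(rewards, dtype=np.float64)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def grpo_surrogate(logp_new, logp_old, advantages, clip=0.2):
    """Clipped PPO-style surrogate applied per response with its group advantage.
    logp_new / logp_old: summed log-probs of each response under the new/old policy."""
    ratio = np.exp(np.asarray(logp_new) - np.asarray(logp_old))
    adv = np.asarray(advantages)
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1 - clip, 1 + clip) * adv
    return np.minimum(unclipped, clipped).mean()   # objective to maximize

# Example: 4 responses to one prompt, scored by a verifiable reward function.
rewards = [1.0, 0.0, 0.0, 1.0]          # e.g., 1 if the final answer checks out
adv = group_relative_advantages(rewards)
obj = grpo_surrogate(logp_new=[-45.1, -52.3, -50.0, -44.2],
                     logp_old=[-45.6, -51.8, -50.5, -44.9],
                     advantages=adv)
print("group advantages:", adv.round(3), "surrogate objective:", round(float(obj), 4))
```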
2.3 Industrial Applications of Reinforcement Learning

Reinforcement Learning (RL) has evolved from early game intelligence into a core framework for cross-industry autonomous decision-making. Based on technological maturity and industrial adoption, its application scenarios can be grouped into five major categories:

Game & Strategy: The earliest direction where RL was validated. In "perfect information + clear reward" environments such as AlphaGo, AlphaZero, AlphaStar, and OpenAI Five, RL demonstrated decision intelligence comparable to or surpassing human experts, laying the foundation for modern RL algorithms.

Robotics & Embodied AI: Through continuous control, dynamics modeling, and environmental interaction, RL enables robots to learn manipulation, motion control, and cross-modal tasks (e.g., RT-2, RT-X). It is rapidly moving toward industrialization and is a key technical route for real-world robot deployment.

Digital Reasoning / LLM System-2: RL + PRM drives large models from "language imitation" to "structured reasoning." Representative achievements include DeepSeek-R1, OpenAI o1/o3, Anthropic Claude, and AlphaGeometry. Essentially, it performs reward optimization at the level of the reasoning chain rather than only evaluating the final answer.

Scientific Discovery & Math Optimization: RL finds optimal structures or strategies in label-free settings with complex rewards and enormous search spaces. It has achieved foundational breakthroughs in AlphaTensor, AlphaDev, and Fusion RL, showing exploration capabilities beyond human intuition.

Economic Decision-making & Trading: RL is used for strategy optimization, high-dimensional risk control, and adaptive trading-system generation. Compared to traditional quantitative models, it can learn continuously in uncertain environments and is an important component of intelligent finance.

III. Natural Match Between Reinforcement Learning and Web3

Reinforcement learning and Web3 are naturally aligned as incentive-driven systems: RL optimizes behavior through rewards, while blockchains coordinate participants through economic incentives. RL's core needs (large-scale heterogeneous rollouts, reward distribution, and verifiable execution) map directly onto Web3's structural strengths.

Decoupling of Inference and Training: Reinforcement learning separates into rollout and update phases: rollouts are compute-heavy but communication-light and can run in parallel on distributed consumer GPUs, while updates require centralized, high-bandwidth resources. This decoupling lets open networks handle rollouts with token incentives, while centralized updates maintain training stability.

Verifiability: ZK (zero-knowledge) proofs and Proof-of-Learning provide means to verify whether nodes truly executed inference, solving the honesty problem in open networks.
In deterministic tasks such as code and mathematical reasoning, verifiers only need to check the answer to confirm the work, significantly improving the credibility of decentralized RL systems.

Incentive Layer (Token-Economy-Based Feedback Production): Web3 token incentives can directly reward RLHF/RLAIF feedback contributors, enabling transparent, permissionless preference generation, with staking and slashing enforcing quality more efficiently than traditional crowdsourcing.

Potential for Multi-Agent Reinforcement Learning (MARL): Blockchains form open, incentive-driven multi-agent environments with public state, verifiable execution, and programmable incentives, making them a natural testbed for large-scale MARL even though the field is still early.

IV. Analysis of Web3 + Reinforcement Learning Projects

Based on the above theoretical framework, we briefly analyze the most representative projects in the current ecosystem.

Prime Intellect: Asynchronous Reinforcement Learning with prime-rl

Prime Intellect aims to build an open global compute market and an open-source superintelligence stack, spanning Prime Compute, the INTELLECT model family, open RL environments, and large-scale synthetic data engines. Its core prime-rl framework is purpose-built for asynchronous distributed RL, complemented by OpenDiLoCo for bandwidth-efficient training and TopLoc for verification.

Prime Intellect Core Infrastructure Components Overview
Technical Cornerstone: the prime-rl Asynchronous Reinforcement Learning Framework

prime-rl is Prime Intellect's core training engine, designed for large-scale asynchronous decentralized environments. It achieves high-throughput inference and stable updates through complete Actor–Learner decoupling: executors (rollout workers) and learners (trainers) never block on each other. Nodes can join or leave at any time; they only need to continuously pull the latest policy and upload the data they generate:
Actor (Rollout Workers): Responsible for model inference and data generation. Prime Intellect integrates the vLLM inference engine on the Actor side; vLLM's PagedAttention and continuous batching allow Actors to generate inference trajectories at very high throughput.

Learner (Trainer): Responsible for policy optimization. The Learner asynchronously pulls data from the shared experience buffer for gradient updates without waiting for all Actors to complete the current batch.

Orchestrator: Responsible for scheduling model weights and data flow.

Key Innovations of prime-rl:

True Asynchrony: prime-rl abandons PPO's traditional synchronous paradigm; it does not wait for slow nodes and does not require batch alignment, so GPUs of any number and performance level can join at any time. This establishes the feasibility of decentralized RL.

Deep Integration of FSDP2 and MoE: Through FSDP2 parameter sharding and MoE sparse activation, prime-rl allows models with tens of billions of parameters to be trained efficiently in distributed environments. Actors only run the active experts, significantly reducing VRAM and inference costs.

GRPO+ (Group Relative Policy Optimization): GRPO eliminates the critic network, significantly reducing computation and VRAM overhead and adapting naturally to asynchronous environments. prime-rl's GRPO+ adds stabilization mechanisms to ensure reliable convergence under high-latency conditions.

INTELLECT Model Family: A Symbol of Decentralized RL Maturity

INTELLECT-1 (10B, Oct 2024): Proved for the first time that OpenDiLoCo can train efficiently over a heterogeneous network spanning three continents (communication share < 2%, compute utilization 98%), overturning assumptions about the physical limits of cross-region training.

INTELLECT-2 (32B, Apr 2025): The first permissionless RL model, validating the stable convergence of prime-rl and GRPO+ under multi-step latency in asynchronous environments and realizing decentralized RL with globally open compute participation.

INTELLECT-3 (106B MoE, Nov 2025): Adopts a sparse architecture activating only 12B parameters, trained on 512×H200 GPUs and achieving flagship reasoning performance (AIME 90.8%, GPQA 74.4%, MMLU-Pro 81.9%, etc.). Overall performance approaches or surpasses centralized closed-source models far larger than itself.

Prime Intellect has built a full decentralized RL stack: OpenDiLoCo cuts cross-region training traffic by orders of magnitude while sustaining ~98% utilization across continents; TopLoc and Verifiers ensure trustworthy inference and reward data via activation fingerprints and sandboxed verification; and the SYNTHETIC data engine generates high-quality reasoning chains while enabling large models to run efficiently on consumer GPUs through pipeline parallelism. Together, these components underpin scalable data generation, verification, and inference in decentralized RL, with the INTELLECT series demonstrating that such systems can deliver world-class models in practice.

Gensyn: RL Core Stack – RL Swarm and SAPO

Gensyn seeks to unify global idle compute into a trustless, scalable AI training network, combining standardized execution, P2P coordination, and on-chain task verification. Through mechanisms like RL Swarm, SAPO, and SkipPipe, it decouples generation, evaluation, and updates across heterogeneous GPUs, delivering not just compute but verifiable intelligence.

RL Applications in the Gensyn Stack
RL Swarm: Decentralized Collaborative Reinforcement Learning Engine

RL Swarm demonstrates a new mode of collaboration. It is no longer simple task distribution but a continuous, decentralized generate–evaluate–update loop, inspired by collaborative learning that mimics human social learning:

Solvers (Executors): Responsible for local model inference and rollout generation, tolerant of node heterogeneity. Gensyn integrates high-throughput inference engines (such as CodeZero) locally so that nodes output complete trajectories rather than just answers.

Proposers: Dynamically generate tasks (math problems, code questions, etc.), enabling task diversity and curriculum-like adaptation of training difficulty to model capability.

Evaluators: Use frozen "judge models" or rules to check output quality, forming local reward signals evaluated independently by each node. The evaluation process can be audited, reducing room for malicious behavior.

The three roles form a P2P RL organizational structure that can carry out large-scale collaborative learning without centralized scheduling.
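A schematic sketch of one such round follows; the task, judge rule, and update step are all assumptions invented for illustration, not Gensyn's implementation:

```python
# Schematic generate -> evaluate -> update round for a swarm node (illustrative only;
# the toy task, judge rule, and update step are assumptions, not Gensyn's code).
import random

def propose_task():
    """Proposer: dynamically generates a task (here, a toy arithmetic problem)."""
    a, b = random.randint(1, 99), random.randint(1, 99)
    return {"prompt": f"{a} + {b} = ?", "answer": a + b}

def solve(task, policy_skill):
    """Solver: runs local inference and returns a full trajectory (not just an answer)."""
    guess = task["answer"] if random.random() < policy_skill else task["answer"] + 1
    return {"steps": [f"compute {task['prompt']}"], "final": guess}

def evaluate(task, trajectory):
    """Evaluator: a frozen rule/judge scores the output deterministically,
    producing a local, auditable reward signal."""
    return 1.0 if trajectory["final"] == task["answer"] else 0.0

policy_skill = 0.3                      # stand-in for local policy parameters
for round_id in range(50):              # decentralized loop, no central scheduler
    task = propose_task()
    traj = solve(task, policy_skill)
    reward = evaluate(task, traj)
    # Toy "update": reinforce behavior that earned reward (stand-in for a real
    # policy-gradient step on the local model).
    policy_skill = min(1.0, policy_skill + 0.02 * reward)
print("skill after 50 rounds:", round(policy_skill, 3))
```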
SAPO: A Policy Optimization Algorithm Rebuilt for Decentralization

SAPO (Swarm Sampling Policy Optimization) centers on sharing rollouts rather than gradients, filtering out rollouts that carry no gradient signal. By enabling large-scale decentralized rollout sampling and treating received rollouts as if they were locally generated, SAPO maintains stable convergence in environments without central coordination and with significant heterogeneity in node latency. Compared to PPO (whose critic network dominates computational cost) or GRPO (which relies on group-level advantage estimation rather than simple ranking), SAPO allows consumer-grade GPUs to participate effectively in large-scale RL optimization with extremely low bandwidth requirements (see the sketch below).

Through RL Swarm and SAPO, Gensyn demonstrates that reinforcement learning, particularly post-training RLVR, naturally fits decentralized architectures, as it depends more on diverse exploration via rollouts than on high-frequency parameter synchronization. Combined with the PoL and Verde verification systems, Gensyn offers an alternative path toward training trillion-parameter models: a self-evolving superintelligence network composed of millions of heterogeneous GPUs worldwide.
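The rollout-sharing mechanics can be sketched as follows; the data structures and the filtering rule are assumptions (the rule simply discards groups whose rewards are all identical, since those contribute no group-relative advantage), not Gensyn's SAPO implementation:

```python
# Sketch of rollout sharing with zero-signal filtering (illustrative; names, shapes,
# and the filtering rule are assumptions, not Gensyn's SAPO implementation).
from dataclasses import dataclass
from typing import List

@dataclass
class RolloutGroup:
    prompt_id: str
    texts: List[str]      # candidate completions
    rewards: List[float]  # verifiable scores for each completion
    source_node: str      # which swarm peer produced this group

def has_gradient_signal(group: RolloutGroup) -> bool:
    """A group whose rewards are all equal yields zero group-relative advantage,
    so it carries no learning signal and is not worth broadcasting or training on."""
    return max(group.rewards) > min(group.rewards)

def merge_local_and_received(local: List[RolloutGroup],
                             received: List[RolloutGroup]) -> List[RolloutGroup]:
    """Received rollouts are treated exactly like locally generated ones."""
    return [g for g in local + received if has_gradient_signal(g)]

local = [RolloutGroup("p1", ["a", "b"], [1.0, 0.0], "self"),
         RolloutGroup("p2", ["c", "d"], [0.0, 0.0], "self")]       # filtered out
received = [RolloutGroup("p3", ["e", "f"], [1.0, 1.0], "peer-7"),  # filtered out
            RolloutGroup("p4", ["g", "h"], [0.0, 1.0], "peer-9")]

train_batch = merge_local_and_received(local, received)
print([g.prompt_id for g in train_batch])   # -> ['p1', 'p4']
```

Sharing plain rollouts instead of gradients is what keeps bandwidth requirements low enough for consumer-grade participants.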
Nous Research: Reinforcement Learning Environment Atropos

Nous Research is building a decentralized, self-evolving cognitive stack, where components like Hermes, Atropos, DisTrO, Psyche, and World Sim form a closed-loop intelligence system. Using RL methods such as DPO, GRPO, and rejection sampling, it replaces linear training pipelines with continuous feedback across data generation, learning, and inference.

Nous Research Components Overview
Model Layer: Hermes and the Evolution of Reasoning Capabilities

The Hermes series is Nous Research's main user-facing model line. Its evolution clearly illustrates the industry's migration from traditional SFT/DPO alignment toward Reasoning RL:

Hermes 1–3 (Instruction Alignment & Early Agent Capabilities): Relied on low-cost DPO for robust instruction alignment, leveraged synthetic data, and introduced the Atropos verification mechanism for the first time in Hermes 3.

Hermes 4 / DeepHermes: Writes System-2-style slow thinking into the weights via chain-of-thought, improving math and code performance with test-time scaling, and relies on "rejection sampling + Atropos verification" to build high-purity reasoning data. DeepHermes further adopts GRPO in place of PPO (which is harder to run in a decentralized setting), enabling Reasoning RL to run on the Psyche decentralized GPU network and laying the engineering foundation for scalable open-source Reasoning RL.

Atropos: A Verifiable, Reward-Driven Reinforcement Learning Environment

Atropos is the true hub of the Nous RL system. It encapsulates prompts, tool calls, code execution, and multi-turn interactions into standardized RL environments and directly verifies whether outputs are correct, providing deterministic reward signals that replace expensive, unscalable human labeling. More importantly, in the decentralized training network Psyche, Atropos acts as a "judge" that verifies whether nodes truly improved the policy, supporting auditable Proof-of-Learning and fundamentally solving the reward-credibility problem in distributed RL.
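The idea of a verifiable-reward environment can be illustrated with a minimal sketch; the class and method names and the math-checking rule are assumptions for this example, not the Atropos API. The point is that the reward rule is deterministic and reproducible, so any verifier node can recompute it and audit the rollout:

```python
# Minimal verifiable-reward environment sketch (illustrative; class and method
# names are invented for this example and do not reflect the Atropos API).
import re

class VerifiableMathEnv:
    """Wraps a prompt with a ground-truth answer and produces deterministic rewards."""

    def __init__(self, prompt: str, ground_truth: float):
        self.prompt = prompt
        self.ground_truth = ground_truth

    def extract_answer(self, completion: str):
        """Parse the last number in the model's output as its final answer."""
        numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
        return float(numbers[-1]) if numbers else None

    def reward(self, completion: str) -> float:
        """Reward is 1.0 iff the final answer matches the ground truth.
        Because the rule is reproducible, any verifier node can re-derive
        the same reward and audit the rollout."""
        answer = self.extract_answer(completion)
        return 1.0 if answer is not None and abs(answer - self.ground_truth) < 1e-9 else 0.0

env = VerifiableMathEnv("What is 17 * 24?", ground_truth=408)
print(env.reward("17 * 24 = 400 + 8 = 408"))     # 1.0: verifiably correct
print(env.reward("The answer is probably 406"))  # 0.0: fails the check
```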
DisTrO and Psyche: The Optimizer Layer for Decentralized Reinforcement Learning

Traditional RLHF/RLAIF training relies on centralized high-bandwidth clusters, a core barrier that open source cannot replicate. DisTrO reduces RL communication costs by orders of magnitude through momentum decoupling and gradient compression, enabling training to run over ordinary internet bandwidth; Psyche deploys this training mechanism on an on-chain network, allowing nodes to complete inference, verification, reward evaluation, and weight updates locally, forming a complete RL closed loop.

In the Nous system, Atropos verifies chains of thought; DisTrO compresses training communication; Psyche runs the RL loop; World Sim provides complex environments; Forge collects real reasoning; and Hermes writes all of this learning into the weights. Reinforcement learning is not just a training stage but the core protocol connecting data, environments, models, and infrastructure in the Nous architecture, making Hermes a living system capable of continuous self-improvement on an open compute network.

Gradient Network: The Echo Reinforcement Learning Architecture

Gradient Network aims to rebuild AI compute via an Open Intelligence Stack: a modular set of interoperable protocols spanning P2P communication (Lattica), distributed inference (Parallax), decentralized RL training (Echo), verification (VeriLLM), simulation (Mirage), and higher-level memory and agent coordination, together forming an evolving decentralized intelligence infrastructure.
Echo: Reinforcement Learning Training Architecture

Echo is Gradient's reinforcement learning framework. Its core design principle is to decouple the training, inference, and data (reward) pathways of reinforcement learning, running them separately in heterogeneous Inference Swarms and Training Swarms while maintaining stable optimization behavior across wide-area heterogeneous environments through lightweight synchronization protocols. This effectively mitigates the SPMD failures and GPU-utilization bottlenecks caused by mixing inference and training in traditional DeepSpeed RLHF / VERL setups.

Echo uses an inference–training dual-swarm architecture to maximize compute utilization; the two swarms run independently without blocking each other:

Maximize sampling throughput: The Inference Swarm consists of consumer-grade GPUs and edge devices, building high-throughput samplers via pipeline parallelism with Parallax and focusing on trajectory generation.

Maximize gradient compute: The Training Swarm can run on centralized clusters or globally distributed consumer-grade GPU networks; it is responsible for gradient updates, parameter synchronization, and LoRA fine-tuning, focusing on the learning process.

To maintain policy and data consistency, Echo provides two lightweight synchronization protocols, sequential and asynchronous, managing bidirectional consistency of policy weights and trajectories:

Sequential pull mode (accuracy first): The training side forces inference nodes to refresh their model version before pulling new trajectories, ensuring trajectory freshness; suitable for tasks highly sensitive to policy staleness.

Asynchronous push–pull mode (efficiency first): The inference side continuously generates trajectories with version tags, and the training side consumes them at its own pace. The coordinator monitors version deviation and triggers weight refreshes, maximizing device utilization.

At the bottom layer, Echo is built on Parallax (heterogeneous inference in low-bandwidth environments) and lightweight distributed training components (e.g., VERL), relying on LoRA to reduce cross-node synchronization costs so that reinforcement learning can run stably on global heterogeneous networks.

Grail: Reinforcement Learning in the Bittensor Ecosystem

Bittensor constructs a huge, sparse, non-stationary reward-function network through its Yuma consensus mechanism. Within the Bittensor ecosystem, Covenant AI builds a vertically integrated pipeline from pre-training to RL post-training through SN3 Templar, SN39 Basilica, and SN81 Grail: SN3 Templar handles base-model pre-training, SN39 Basilica provides a distributed compute market, and SN81 Grail serves as the "verifiable inference layer" for RL post-training, carrying the core RLHF / RLAIF processes and completing the closed-loop optimization from base model to aligned policy.
GRAIL cryptographically verifies RL rollouts and binds them to model identity, enabling trustless RLHF. It uses deterministic challenges to prevent pre-computation, low-cost sampling and commitments to verify rollouts, and model fingerprinting to detect substitution or replay, establishing end-to-end authenticity for RL inference trajectories. Grail's subnet implements a verifiable GRPO-style post-training loop: miners produce multiple reasoning paths, validators score correctness and reasoning quality, and normalized results are written on-chain. Public tests raised Qwen2.5-1.5B MATH accuracy from 12.7% to 47.6%, demonstrating both cheat resistance and strong capability gains; within Covenant AI, Grail serves as the trust and execution core for decentralized RLVR/RLAIF.

Fraction AI: Competition-Based Reinforcement Learning (RLFC)

Fraction AI reframes alignment as Reinforcement Learning from Competition, using gamified labeling and agent-versus-agent contests. Relative rankings and AI-judge scores replace static human labels, turning RLHF into a continuous, competitive multi-agent game.

Core Differences Between Traditional RLHF and Fraction AI's RLFC:
RLFC's core value is that rewards come from evolving opponents and evaluators rather than a single model, reducing reward hacking and preserving policy diversity. Space design shapes the game dynamics, enabling complex competitive and cooperative behaviors.

At the system level, Fraction AI decomposes the training process into four key components:

Agents: Lightweight policy units based on open-source LLMs, extended via QLoRA with differential weights for low-cost updates.

Spaces: Isolated task-domain environments where agents pay to enter and earn rewards by winning.

AI Judges: An immediate reward layer built with RLAIF, providing scalable, decentralized evaluation.

Proof-of-Learning: Binds policy updates to specific competition results, ensuring the training process is verifiable and cheat-proof.

Fraction AI functions as a human–machine co-evolution engine: users act as meta-optimizers guiding exploration, while agents compete to generate high-quality preference data, enabling trustless, commercialized fine-tuning.

Comparison of Web3 Reinforcement Learning Project Architectures
V. The Path and Opportunity of Reinforcement Learning × Web3

Across these frontier projects, despite differing entry points, RL combined with Web3 consistently converges on a shared "decoupling–verification–incentive" architecture, an inevitable outcome of adapting reinforcement learning to decentralized networks.

General Architecture Features: Solving Core Physical Limits and Trust Issues

Decoupling of Rollouts and Learning (Physical Separation of Inference and Training) as the Default Compute Topology: Communication-sparse, parallelizable rollouts are outsourced to global consumer-grade GPUs, while high-bandwidth parameter updates are concentrated in a few training nodes. This holds from Prime Intellect's asynchronous Actor–Learner design to Gradient Echo's dual-swarm architecture.

Verification-Driven Trust as Infrastructure: In permissionless networks, computational authenticity must be enforced through mathematics and mechanism design. Representative implementations include Gensyn's PoL, Prime Intellect's TopLoc, and Grail's cryptographic verification.

Tokenized Incentive Loop and Market Self-Regulation: Compute supply, data generation, verification and ranking, and reward distribution form a closed loop. Rewards drive participation and slashing suppresses cheating, keeping the network stable and continuously evolving in an open environment.

Differentiated Technical Paths: Different Breakthrough Points Under a Consistent Architecture

Although architectures are converging, projects choose different technical moats based on their DNA:

Algorithm Breakthrough School (Nous Research): Tackles distributed training's bandwidth bottleneck at the optimizer level; DisTrO compresses gradient communication by orders of magnitude, aiming to enable large-model training over home broadband.

Systems Engineering School (Prime Intellect, Gensyn, Gradient): Focuses on building the next-generation "AI runtime system." Prime Intellect's ShardCast and Gradient's Parallax are designed to squeeze maximum efficiency out of heterogeneous clusters under existing network conditions through intensive engineering.

Market Game School (Bittensor, Fraction AI): Focuses on reward-function design. By designing sophisticated scoring mechanisms, they guide miners to spontaneously find optimal strategies and accelerate the emergence of intelligence.

Advantages, Challenges, and Endgame Outlook

Under the paradigm of reinforcement learning combined with Web3, system-level advantages show up first in the rewriting of cost and governance structures:

Cost Reshaping: RL post-training has effectively unlimited demand for sampling (rollouts). Web3 can mobilize global long-tail compute at extremely low cost, an advantage centralized cloud providers find difficult to match.

Sovereign Alignment: Breaking big tech's monopoly on AI values (alignment). The community can decide "what counts as a good answer" for the model through token voting, democratizing AI governance.

At the same time, the system faces structural constraints:

Bandwidth Wall: Despite innovations like DisTrO, physical latency still limits full training of ultra-large models (70B+). For now, Web3 AI is largely confined to fine-tuning and inference.

Reward Hacking (Goodhart's Law): In highly incentivized networks, miners are extremely prone to "overfitting" reward rules (gaming the system) rather than improving real intelligence.
Designing cheat-resistant, robust reward functions is a perpetual game.

Malicious Byzantine Workers: Deliberate manipulation and poisoning of training signals to disrupt model convergence. The core challenge is therefore not only the continual design of cheat-resistant reward functions but also mechanisms that remain robust under adversarial behavior.

RL and Web3 are reshaping intelligence production via decentralized rollout networks, on-chain assetized feedback, and vertical RL agents with direct value capture. The real opportunity is not a decentralized OpenAI, but new relations of intelligence production: open compute markets, governable rewards and preferences, and value shared across trainers, aligners, and users.
Disclaimer: This article was completed with the assistance of the AI tools ChatGPT-5 and Gemini 3. The author has made every effort to proofread and ensure the information is authentic and accurate, but omissions may remain, and readers' understanding is appreciated. It should be noted in particular that crypto asset markets often show divergences between project fundamentals and secondary-market price performance. The content of this article is for information integration and academic/research exchange only; it does not constitute investment advice and should not be considered a recommendation to buy or sell any token.
This independent research report is supported by IOSG Ventures. The research and writing process was inspired by related work from Raghav Agarwal (LongHash) and Jay Yu (Pantera). Thanks to Lex Sokolin (Generative Ventures), Jordan (AIsa), and Ivy (PodOur2Cents) for their valuable suggestions on this article. Feedback was also solicited from project teams such as Nevermined, Skyfire, Virtuals Protocol, AIsa, Heurist, and AEON during the writing process. This article strives for objectivity and accuracy, but some viewpoints involve subjective judgment and may inevitably contain biases; readers' understanding is appreciated.

Agentic Commerce refers to a full-process commercial system in which AI agents autonomously complete service discovery, credibility judgment, order generation, payment authorization, and final settlement. It no longer relies on step-by-step human operation or input; instead, agents automatically collaborate, place orders, pay, and fulfill across platforms and systems, forming a commercial closed loop of autonomous machine-to-machine execution (M2M Commerce).
In the crypto ecosystem, the most practically valuable applications today are concentrated in stablecoin payments and DeFi. As AI and crypto converge, two high-value development paths are therefore emerging:

Short term: AgentFi, built on today's mature DeFi protocols.

Mid to long term: Agent Payment, built around stablecoin settlement and progressively standardized by protocols such as ACP, AP2, x402, and ERC-8004.

Agentic Commerce is difficult to scale quickly in the short term due to factors such as protocol maturity, regulatory differences, and merchant/user acceptance. From a long-term perspective, however, payment is the underlying anchor of every commercial closed loop, making Agentic Commerce the most valuable opportunity in the long run.

I. Agentic Commerce Payment Systems and Application Scenarios

In the Agentic Commerce system, the real-world merchant network is the largest value scenario. Regardless of how AI Agents evolve, the traditional fiat payment system (Stripe, Visa, Mastercard, bank transfers) and the rapidly growing stablecoin system (USDC, x402) will coexist for a long time, jointly constituting the base of Agentic Commerce.

Comparison: Traditional Fiat Payment vs. Stablecoin Payment
Real-world merchants, spanning e-commerce, subscriptions, SaaS, travel, paid content, and enterprise procurement, carry trillion-dollar demand and are the core value source for AI Agents that automatically compare prices, renew subscriptions, and procure. In the short term, mainstream consumption and enterprise procurement will remain dominated by the traditional fiat payment system. The core obstacles to scaling stablecoins in real-world commerce are not just technical: they include regulation (KYC/AML, tax, consumer protection), merchant accounting (stablecoins are not legal tender), and the lack of dispute-resolution mechanisms caused by irreversible payments. Because of these structural limitations, it is difficult for stablecoins to enter highly regulated industries such as healthcare, aviation, e-commerce, government, and utilities in the short term. Their adoption will mainly focus on digital content, cross-border payments, Web3-native services, and machine-economy (M2M/IoT/Agent) scenarios where regulatory pressure is lower or which are natively on-chain; this is precisely the opportunity window for Web3-native Agentic Commerce to achieve scale breakthroughs first. Meanwhile, regulatory institutionalization is advancing rapidly in 2025: the US stablecoin bill has achieved bipartisan consensus, Hong Kong and Singapore have implemented stablecoin licensing frameworks, the EU's MiCA has officially come into effect, Stripe supports USDC, and PayPal has launched PYUSD. This clarification of the regulatory structure means stablecoins are being accepted by the mainstream financial system, opening policy space for future cross-border settlement, B2B procurement, and the machine economy.

Best Application Scenario Matching for Agentic Commerce
The core of Agentic Commerce is not to let one payment rail replace another, but to hand over execution of the order–authorization–payment flow to AI Agents, allowing the traditional fiat payment system (AP2, authorization credentials, identity compliance) and the stablecoin system (x402, CCTP, smart-contract settlement) to each play to their strengths. It is neither a zero-sum competition between fiat and stablecoins nor a substitution narrative for a single rail, but a structural opportunity to expand the capabilities of both: fiat payments continue to support human commerce, while stablecoin payments accelerate machine-native and on-chain-native scenarios. The two complement and coexist, becoming the twin engines of the agent economy.
II. Agentic Commerce Protocol Standards Panorama

The protocol stack of Agentic Commerce consists of six layers, forming a complete machine-commerce link from "capability discovery" to "payment and delivery". A2A Catalog and MCP Registry handle capability discovery; ERC-8004 provides on-chain verifiable identity and reputation; ACP and AP2 handle structured ordering and authorization instructions respectively; the payment layer is composed of traditional fiat rails (AP2) and stablecoin rails (x402) running in parallel; and the fulfillment layer currently has no unified standard.
Discovery Layer: Solves "how Agents discover and understand callable services." The AI side builds standardized capability catalogs through A2A Catalog and MCP Registry; Web3 relies on ERC-8004 to provide addressable identity guidance. This layer is the entrance to the entire protocol stack.

Trust Layer: Answers "is the other party credible?" There is no universal standard on the AI side yet. Web3 builds a unified framework for verifiable identity, reputation, and execution records through ERC-8004, which is a key advantage of Web3.

Ordering Layer: Responsible for "how orders are expressed and verified." ACP (OpenAI × Stripe) provides a structured description of goods, prices, and settlement terms so that merchants can fulfill contracts. Since it is difficult to express real-world commercial contracts on-chain, this layer is essentially dominated by Web2.

Authorization Layer: Handles "whether the Agent has obtained legal user authorization." AP2 binds intent, confirmation, and payment authorization to the real identity system through verifiable credentials. Web3 signatures do not yet have legal effect, so they cannot bear this layer's contractual and compliance responsibilities.

Payment Layer: Decides "which rail completes the payment." AP2 covers traditional payment networks such as cards and banks; x402 provides native API payment interfaces for stablecoins, enabling assets like USDC to be embedded in automated calls. The two types of rails complement each other here.

Fulfillment Layer: Answers "how to deliver content safely after payment is completed." There is currently no unified protocol: the real world relies on merchant systems to complete delivery, and Web3's encrypted access control has not yet formed a cross-ecosystem standard. This layer remains the largest blank in the protocol stack and is the most likely to incubate the next generation of infrastructure protocols.

III. Agentic Commerce Core Protocols In-Depth Explanation

Focusing on the five key links of Agentic Commerce (service discovery, trust judgment, structured ordering, payment authorization, and final settlement), institutions such as Google, Anthropic, OpenAI, Stripe, Ethereum, and Coinbase have each proposed underlying protocols for the corresponding links, jointly building the core protocol stack of next-generation Agentic Commerce.

Agent-to-Agent (A2A) – Agent Interoperability Protocol (Google)

A2A is an open-source protocol initiated by Google and donated to the Linux Foundation. It aims to provide unified communication and collaboration standards for AI Agents built by different vendors and frameworks. Based on HTTP + JSON-RPC, A2A implements secure, structured message and task exchange, enabling Agents to conduct multi-turn dialogue, collaborative decision-making, task decomposition, and state management in a native way. Its core goal is to build an "Internet of Agents," allowing any A2A-compatible Agent to be automatically discovered, called, and composed, thereby forming a cross-platform, cross-organization distributed Agent network.

Model Context Protocol (MCP) – Unified Tool and Data Access Protocol (Anthropic)

MCP, launched by Anthropic, is an open protocol connecting LLMs / Agents with external systems, focusing on unified tool and data access interfaces. It abstracts databases, file systems, remote APIs, and proprietary tools into standardized resources, enabling Agents to access external capabilities securely, controllably, and auditably.
MCP's design emphasizes low integration costs and high scalability: developers only need to connect once to let the Agent use the entire tool ecosystem. Currently, MCP has been adopted by many leading AI vendors and has become the de facto standard for agent-tool interaction.
MCP focuses on "how Agents use tools": providing models with unified, secure access to external resources (databases, APIs, file systems, etc.), thereby standardizing agent–tool and agent–data interaction.

A2A solves "how Agents collaborate with other Agents": establishing native communication standards for cross-vendor, cross-framework agents, supporting multi-turn dialogue, task decomposition, state management, and long-lifecycle execution. It is the basic interoperability layer between agents.
Agentic Commerce Protocol (ACP) – Ordering and Checkout Protocol (OpenAI × Stripe)

ACP (Agentic Commerce Protocol) is an open ordering standard (Apache 2.0) proposed by OpenAI and Stripe. It establishes a structured, machine-readable ordering process for Buyer – AI Agent – Merchant interactions. The protocol covers product information, price and term verification, settlement logic, and payment-credential transmission, enabling AI to safely initiate purchases on behalf of users without the AI itself becoming a merchant. Its core design is that the AI calls the merchant's checkout interface in a standardized way while the merchant retains full commercial and legal control. ACP lets merchants join the AI shopping ecosystem without rebuilding their systems, using structured orders (JSON Schema / OpenAPI), secure payment tokens (Stripe Shared Payment Token), compatibility with existing e-commerce backends, and support for publishing capabilities via REST and MCP. ACP is already used for ChatGPT Instant Checkout, making it an early deployable payment infrastructure.

Agent Payments Protocol (AP2) – Digital Authorization and Payment Instruction Protocol (Google)

AP2 is an open standard jointly launched by Google with multiple payment networks and technology companies. It aims to establish a unified, compliant, and auditable process for AI-Agent-led payments. It binds the user's payment intent, authorization scope, and compliance identity through cryptographically signed digital authorization credentials, providing merchants, payment institutions, and regulators with verifiable evidence of "who is spending money for whom." AP2 takes payment-agnosticism as its design principle, supporting credit cards, bank transfers, and real-time payments, and reaching stablecoin and other crypto payment rails through extensions such as x402. In the overall Agentic Commerce protocol stack, AP2 is not responsible for specific goods and ordering details; it provides a universal Agent payment-authorization framework across payment channels.
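To illustrate the general shape of such an authorization credential (a hypothetical structure invented for this example; AP2's actual mandate format, field names, and signature scheme differ and should be taken from the specification), an agent could carry a user-signed payment-intent object whose scope and expiry a merchant or processor checks before executing a charge:

```python
# Hypothetical "payment mandate" sketch (illustrative only; field names, structure,
# and the HMAC-based signature are assumptions, NOT the AP2 specification).
import hmac, hashlib, json, time

USER_SECRET = b"demo-user-key"  # stand-in for the user's key; a real system would
                                # use asymmetric signatures / verifiable credentials

def sign_mandate(intent: dict, secret: bytes) -> dict:
    """User side: bind intent, authorization scope, and expiry into a signed credential."""
    payload = json.dumps(intent, sort_keys=True).encode()
    signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {"intent": intent, "signature": signature}

def verify_mandate(mandate: dict, secret: bytes, amount: float, merchant: str) -> bool:
    """Merchant/processor side: check the signature, expiry, merchant scope, and spend cap."""
    payload = json.dumps(mandate["intent"], sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    intent = mandate["intent"]
    return (hmac.compare_digest(expected, mandate["signature"])
            and intent["expires_at"] > time.time()
            and merchant in intent["allowed_merchants"]
            and amount <= intent["max_amount"])

mandate = sign_mandate({
    "user_id": "user-123",
    "agent_id": "shopping-agent-7",
    "allowed_merchants": ["example-store"],
    "max_amount": 50.0,               # authorization scope: spend cap
    "currency": "USD",
    "expires_at": time.time() + 3600, # authorization scope: validity window
}, USER_SECRET)

print(verify_mandate(mandate, USER_SECRET, amount=29.99, merchant="example-store"))  # True
print(verify_mandate(mandate, USER_SECRET, amount=80.00, merchant="example-store"))  # False: exceeds cap
```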
ERC-8004 – On-chain Agent Identity / Reputation / Verification Standard (Ethereum)

ERC-8004 is an Ethereum standard jointly proposed by MetaMask, the Ethereum Foundation, Google, and Coinbase. It aims to build a cross-platform, verifiable, trustless identity and reputation system for AI Agents. The protocol consists of three on-chain registries:

Identity Registry: Mints an NFT-like on-chain identity for each Agent, which can link cross-platform information such as MCP / A2A endpoints, ENS/DID, and wallets.

Reputation Registry: Standardizes the recording of scores, feedback, and behavioral signals, making an Agent's historical performance auditable, aggregatable, and composable.

Validation Registry: Supports verification mechanisms such as stake-backed re-execution, zkML, and TEEs, providing verifiable execution records for high-value tasks.

Through ERC-8004, an Agent's identity, reputation, and behavior are preserved on-chain, forming a cross-platform, discoverable, tamper-proof, and verifiable trust base, an important piece of infrastructure for Web3 to build an open, trusted AI economy. ERC-8004 is in the Review stage: the standard is basically stable and feasible but is still soliciting broad community input and has not been finalized.

x402 – Stablecoin-Native API Payment Rail (Coinbase)

x402 is an open payment standard (Apache-2.0) proposed by Coinbase. It turns the long-idle HTTP 402 Payment Required status code into a programmable on-chain payment handshake, allowing APIs and AI Agents to achieve frictionless, pay-per-use on-chain settlement without accounts, credit cards, or API keys.
HTTP 402 Payment Flow. Source: Jay Yu @ Pantera Capital

Core Mechanism: The x402 protocol revives the HTTP 402 status code left over from the early internet. Its workflow is:

Request & Negotiation: The client (Agent) initiates a request -> the server returns a 402 status code with payment parameters (e.g., amount, receiving address).

Autonomous Payment: The Agent signs the transaction locally and broadcasts it (usually in stablecoins such as USDC), without human intervention.

Verification & Delivery: After the server or a third-party "Facilitator" verifies the on-chain transaction, the resource is released instantly.

x402 introduces the Facilitator role as middleware connecting Web2 APIs and the Web3 settlement layer. The Facilitator handles the complex on-chain verification and settlement logic, allowing traditional developers to monetize APIs with minimal code. The server side does not need to run nodes, manage signatures, or broadcast transactions; it only needs the interface provided by the Facilitator to complete on-chain payment processing. Currently, the most mature Facilitator implementation is provided by the Coinbase Developer Platform.

The technical advantages of x402 are: supporting on-chain micropayments as low as one cent, overcoming traditional payment gateways' inability to handle high-frequency, small-amount calls in AI scenarios; removing accounts, KYC, and API keys entirely, enabling AI to autonomously complete M2M payment loops; and achieving gasless USDC authorized payments through EIP-3009, natively compatible with Base and Solana and therefore multi-chain scalable.

Building on this introduction to the core Agentic Commerce protocol stack, the following table summarizes the positioning, core capabilities, main limitations, and maturity of the protocols at each layer, providing a clear structural view for building a cross-platform, executable, and payable agent economy.
IV. Web3 Agentic Commerce Ecosystem: Representative Projects

Currently, the Web3 ecosystem of Agentic Commerce can be divided into three layers:

Business Payment Systems Layer (L3): Includes projects such as Skyfire, Payman, Catena Labs, and Nevermined, providing payment encapsulation, SDK integration, quota and permission governance, human approval, and compliance access. They connect to traditional financial rails (banks, card networks, PSPs, KYC/KYB) to varying degrees, bridging payment businesses and the machine economy.

Native Payment Protocol Layer (L2): Consists of protocols such as x402 and Virtual ACP and their ecosystem projects, responsible for charge requests, payment verification, and on-chain settlement. This is the core that truly achieves automated, end-to-end clearing in the Agent economy. x402 relies on no banks, card networks, or payment service providers at all, providing on-chain native M2M/A2A payment capabilities.

Infrastructure Layer (L1): Includes Ethereum, Base, Solana, and Kite AI, providing the trusted technical base for payment and identity systems, such as on-chain execution environments, key systems, MPC/AA, and permission runtimes.
L3 - Skyfire: Identity and Payment Credentials for AI Agents

Skyfire centers on KYA + Pay, abstracting "identity verification + payment authorization" into JWT credentials usable by AI and providing verifiable, automated access and deduction capabilities for websites, APIs, and MCP services. The system automatically generates Buyer/Seller Agents and custodial wallets for each user, supporting top-ups via cards, banks, and USDC. Its biggest advantage is full compatibility with Web2 (JWT/JWKS, WAFs, and API gateways can be used directly), providing "identity-bearing automated paid access" for content sites, data APIs, and tool SaaS. Skyfire is a practically usable Agent-payment middle layer, but its identity and asset custody are centralized.

L3 - Payman: AI-Native Fund Authority and Risk Control

Payman provides four capabilities: Wallet, Payee, Policy, and Approval, building a governable and auditable "fund authority layer" for AI. AI can execute real payments, but every fund action must satisfy the quotas, policies, and approval rules set by users. Core interaction goes through the payman.ask() natural-language interface, with the system handling intent parsing, policy verification, and payment execution. Payman's key value is that "AI can move money, but never oversteps its authority." It migrates enterprise-grade fund governance into the AI environment: automated payroll, reimbursement, vendor payments, bulk transfers, and similar workflows can all be completed within clearly defined permission boundaries. Payman suits internal financial automation for enterprises and teams (salary, reimbursement, vendor payments, etc.), positioned as a controlled fund-governance layer; it does not attempt to build an open Agent-to-Agent payment protocol.

L3 - Catena Labs: Agent Identity and Payment Standards

Catena uses an AI-native financial institution (custody, clearing, risk control, KYA) as its commercial layer and ACK (Agent Commerce Kit) as its standards layer, building a unified Agent identity protocol (ACK-ID) and an Agent-native payment protocol (ACK-Pay). The goal is to fill in the verifiable identity, authorization chain, and automated payment standards missing from the machine economy. ACK-ID establishes the Agent's ownership and authorization chains based on DIDs/VCs; ACK-Pay defines payment-request and verifiable-receipt formats decoupled from the underlying settlement networks (USDC, banks, Arc). Catena emphasizes long-term cross-ecosystem interoperability; its role is closer to the "TLS/EMV layer of the Agent economy," with strong standardization and a clear vision.

L3 - Nevermined: Metering, Billing, and Micropayment Settlement

Nevermined focuses on the usage-based economic model of AI, providing Access Control, Metering, a Credits System, and Usage Logs for automated metering, pay-per-use, revenue sharing, and auditing. Users can top up credits via Stripe or USDC, and the system automatically verifies usage, deducts fees, and generates auditable logs for each API call. Its core value lies in supporting sub-cent real-time micropayments and Agent-to-Agent automated settlement, allowing data purchases, API calls, workflow scheduling, and more to run on a pay-per-call basis.
Nevermined does not build a new payment rail; it builds a metering/billing layer on top of payment: promoting AI SaaS commercialization in the short term, supporting A2A marketplaces in the medium term, and potentially becoming the micropayment fabric of the machine economy in the long term.
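As a rough illustration of what a pay-per-call metering layer does (a toy sketch with invented class and method names; not Nevermined's actual API), credits are debited per call according to a price plan and every deduction leaves an auditable log entry:

```python
# Toy pay-per-call metering/credits sketch (illustrative; names are invented for
# this example and do not reflect Nevermined's actual API).
import time

class CreditsLedger:
    def __init__(self, prices: dict):
        self.prices = prices           # price per call, keyed by service name
        self.balances = {}             # user -> remaining credits
        self.usage_log = []            # auditable record of every deduction

    def top_up(self, user: str, credits: float):
        self.balances[user] = self.balances.get(user, 0.0) + credits

    def charge(self, user: str, service: str) -> bool:
        """Deduct the per-call price; refuse the call if credits are insufficient."""
        price = self.prices[service]
        if self.balances.get(user, 0.0) < price:
            return False
        self.balances[user] -= price
        self.usage_log.append({"ts": time.time(), "user": user,
                               "service": service, "price": price})
        return True

ledger = CreditsLedger(prices={"inference": 0.002, "web_scrape": 0.01})
ledger.top_up("agent-42", 0.05)                      # funded via Stripe or USDC
for _ in range(3):
    ledger.charge("agent-42", "inference")           # metered per call
print(round(ledger.balances["agent-42"], 4), len(ledger.usage_log))  # 0.044 3
```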
Skyfire, Payman, Catena Labs, and Nevermined all belong to the business payment layer and all connect to banks, card networks, PSPs, and KYC/KYB to varying degrees. Their real value, however, is not in "accessing fiat" but in solving machine-native needs that traditional finance cannot cover: identity mapping, permission governance, programmatic risk control, and pay-per-use.

Skyfire (Payment Gateway): Provides "identity + auto-deduction" for websites/APIs (mapping on-chain identity to Web2 identity).

Payman (Financial Governance): Policy, quota, permission, and approval for internal enterprise use (AI can spend money but not overstep).

Catena Labs (Financial Infrastructure): Works with the banking system, building an "AI compliance bank" through KYA, custody, and clearing services.

Nevermined (Cashier): Does metering and billing on top of payment; the payment itself relies on Stripe/USDC.

By contrast, x402 sits at a lower level and is the only native on-chain payment protocol that relies on no banks, card networks, or PSPs; it can directly complete on-chain deduction and settlement via the 402 workflow. Upper-layer systems like Skyfire, Payman, and Nevermined can call x402 as a settlement rail, thereby giving Agents a truly M2M / A2A automated native payment loop.

L2 - x402 Ecosystem: From Client to On-chain Settlement

The x402 native payment ecosystem can be divided into four levels: Clients, Servers, the Payment Execution Layer (Facilitators), and the Blockchain Settlement Layer. The Client allows Agents or Apps to initiate payment requests; the Server provides data, reasoning, or storage API services to Agents on a per-use basis; the Payment Execution Layer completes on-chain deduction, verification, and settlement, serving as the core execution engine of the entire process; and the Blockchain Settlement Layer carries out the final token deduction and on-chain confirmation, providing tamper-proof payment finality.
x402 Payment Flow. Source: x402 Whitepaper

Client-Side Integrations / The Payers: Enable Agents or Apps to initiate x402 payment requests; the "starting point" of the entire payment process. Representative projects:

thirdweb Client SDK: The most commonly used x402 client standard in the ecosystem; actively maintained, multi-chain, and the default tool for developers integrating x402.

Nuwa AI: Enables AI to pay for x402 services directly without coding; a representative "Agent payment entrance" project.

Others such as Axios/Fetch, Mogami Java SDK, and Tweazy are early clients.

Current status: Existing clients are still in the "SDK era," essentially developer tools. More advanced forms such as browser/OS clients, robot/IoT clients, or enterprise systems managing multiple wallets and Facilitators have not yet appeared.

Services / Endpoints / The Sellers: Sell data, storage, or reasoning services to Agents on a per-use basis. Representative projects:

AIsa: Provides payment and settlement infrastructure for real AI Agents to access data, content, compute, and third-party services on a per-call, per-token, or usage basis; currently the top project by x402 request volume.

Firecrawl: The web-parsing and structured-crawling entrance most frequently consumed by AI Agents.

Pinata: Mainstream Web3 storage infrastructure; x402 covers real underlying storage costs rather than a lightweight API.

Gloria AI: Provides high-frequency real-time news and structured market signals; an intelligence source for trading and analytical Agents.

AEON: Extends x402 + USDC to online and offline merchant acquiring in Southeast Asia, Latin America, and Africa, reaching up to 50 million merchants.

Neynar: Farcaster social-graph infrastructure, opening social data to Agents via x402.

Current status: The server side is concentrated in crawler/storage/news APIs. Critical layers such as financial-transaction execution APIs, ad-delivery APIs, Web2 SaaS gateways, or APIs executing real-world tasks remain almost undeveloped.

Facilitators / The Processors: Complete on-chain deduction, verification, and settlement; the core execution engine of x402. Representative projects:

Coinbase Facilitator (CDP): Enterprise-grade trusted executor; zero fees on Base mainnet plus built-in OFAC/KYT, the strongest choice for production environments.

PayAI Facilitator: The execution-layer project with the widest multi-chain coverage and fastest growth (Solana, Polygon, Base, Avalanche, etc.); the most-used multi-chain Facilitator in the ecosystem.

Daydreams: Combines payment execution with LLM reasoning routing; currently the fastest-growing "AI reasoning payment executor" and the third pole of the x402 ecosystem.

Others: According to x402scan data, there are long-tail Facilitators/Routers such as Dexter, Virtuals Protocol, OpenX402, CodeNut, Heurist, and Thirdweb, but their volume is significantly lower than the top three.

Blockchain Settlement Layer: The final destination of the x402 payment workflow, responsible for the actual token deduction and on-chain confirmation.

Base: Promoted by the official CDP Facilitator; USDC-native with stable fees; currently the settlement network with the largest transaction volume and the most sellers.

Solana: Key support from multi-chain Facilitators such as PayAI; growing fastest in high-frequency reasoning and real-time API scenarios thanks to high throughput and low latency.

Trend: The chain itself does not participate in payment logic. As more Facilitators expand, x402's settlement layer will show an increasingly multi-chain profile.
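To make the request -> 402 -> pay -> retry handshake described above concrete, here is a minimal client-side sketch; the endpoint URL, header name, and payment function are assumptions for illustration, and the actual wire format and Facilitator interaction follow the x402 specification rather than this code:

```python
# Minimal client-side sketch of a 402 payment handshake (illustrative only;
# the URL, header name, and sign_and_broadcast() are assumptions, not the
# exact x402 wire format defined in the specification).
import requests

API_URL = "https://api.example.com/paid-resource"   # hypothetical x402-enabled endpoint

def sign_and_broadcast(payment_params: dict) -> str:
    """Stand-in for the agent's wallet: sign a stablecoin transfer matching the
    quoted amount/recipient and return a payment proof (e.g., a tx hash)."""
    return "0xdeadbeef"  # placeholder; a real agent would sign and broadcast here

def fetch_with_x402(url: str) -> requests.Response:
    resp = requests.get(url)
    if resp.status_code != 402:
        return resp                                  # free or already-paid resource
    quote = resp.json()                              # assumed fields: amount, asset, recipient
    proof = sign_and_broadcast(quote)                # autonomous payment, no human in the loop
    # Retry the request carrying the payment proof; the server (or its Facilitator)
    # verifies the on-chain transaction and releases the resource.
    return requests.get(url, headers={"X-Payment": proof})

# Example usage (against a real x402-enabled endpoint):
# response = fetch_with_x402(API_URL)
# print(response.status_code, response.text[:200])
```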
In the x402 payment system, the Facilitator is the only role that actually executes on-chain payments and is the closest to "protocol-level revenue": it verifies payment authorization, submits and tracks on-chain transactions, generates auditable settlement proofs, and handles replay protection, timeouts, multi-chain compatibility, and basic compliance checks. Unlike Client SDKs (Payers) and API Servers (Sellers), which only handle HTTP requests, it is the final clearing outlet for all M2M/A2A transactions, controlling the traffic entrance and the right to charge for settlement, and thus sits at the core of value capture in the Agent economy. In reality, however, most projects are still at the testnet or small-scale demo stage; they are essentially lightweight "payment executors" lacking moats in key capabilities such as identity, billing, risk control, and steady-state multi-chain operation, showing clearly low barriers to entry and high homogeneity. As the ecosystem matures, Facilitators backed by Coinbase, with strong stability and compliance advantages, enjoy a clear early lead. However, as CDP Facilitators begin charging fees while others remain free or experiment with alternative monetization models, the overall market structure and share distribution still have significant room to evolve. In the long run, x402 is still an interface layer and cannot capture core value by itself; sustainable competitiveness belongs to comprehensive platforms that can build identity, billing, risk control, and compliance systems on top of settlement capability.

L2 - Virtuals Agent Commerce Protocol (ACP)

Virtuals' Agent Commerce Protocol (ACP) provides a common commercial interaction standard for autonomous AI. Through a four-stage process of Request → Negotiation → Transaction → Evaluation, it enables independent agents to request services, negotiate terms, complete transactions, and accept quality assessments in a secure and verifiable manner. ACP uses the blockchain as a trusted execution layer to keep the interaction process auditable and tamper-proof, and establishes an incentive-driven reputation system by introducing Evaluator Agents, allowing heterogeneous, independent, specialized Agents to form "autonomous commercial bodies" and conduct sustainable economic activity without central coordination. ACP has moved beyond the purely experimental stage: adoption across the Virtuals ecosystem suggests early network effects, making it look like more than a "multi-agent commercial interaction standard" on paper.

L1 Infrastructure Layer - Emerging Agent-Native Payment Chains

Mainstream general-purpose public chains such as Ethereum, Base (EVM), and Solana provide the core execution environment, account system, state machine, security, and settlement foundation for Agents, with mature account models, stablecoin ecosystems, and broad developer bases. Kite AI is a representative "Agent-native L1," designing the underlying execution environment specifically for Agent payment, identity, and permissions. Its core is the SPACE framework (Stablecoin-native, Programmable constraints, Agent-first certification, Compliance audit, Economically viable micropayments), and it implements fine-grained risk isolation through a three-layer Root → Agent → Session key system. Combined with optimized state channels that form an "Agent-native payment railway," it pushes costs down to $0.000001 and latency to the hundred-millisecond level, making API-grade high-frequency micropayments feasible.
As a general execution layer, Kite is compatible upward with x402, Google A2A, and Anthropic MCP, and downward with OAuth 2.1, aiming to become a unified agent payment and identity base connecting Web2 and Web3. AIsaNet integrates x402 and L402 (the Lightning Network–based 402 payment protocol standard developed by Lightning Labs) as a micropayment and settlement layer for AI agents, supporting high-frequency transactions, cross-protocol call coordination, settlement path selection, and transaction routing, enabling agents to perform cross-service, cross-chain automated payments without understanding the underlying complexity.
V. Summary and Outlook: From Payment Protocols to Reconstruction of the Machine Economic Order
Agentic Commerce is the establishment of a completely new economic order dominated by machines. It is not simply "AI placing orders automatically", but a reconstruction of the entire cross-party chain: how services are discovered, how credibility is established, how orders are expressed, how permissions are authorized, how value is cleared, and who bears disputes. The emergence of A2A, MCP, ACP, AP2, ERC-8004, and x402 standardizes the "commercial closed loop between machines". Along this evolutionary path, future payment infrastructure will diverge into two parallel tracks: a Business Governance Track based on traditional fiat logic, and a Native Settlement Track based on the x402 protocol. The value-capture logic of the two differs.
1. Business Governance Track: Web3 Business Payment System Layer
Applicable scenarios: Low-frequency, non-micropayment real-world transactions (e.g., procurement, SaaS subscriptions, physical e-commerce).
Core logic: Traditional fiat will dominate for a long time. Agents are smarter front-ends and process coordinators, not replacements for Stripe, card networks, or bank transfers. The hard obstacles to stablecoins entering real-world commerce at scale are regulation and taxation.
The value of projects like Skyfire, Payman, and Catena Labs lies not in underlying payment routing (usually handled by Stripe/Circle) but in "Machine Governance-as-a-Service": solving machine-native needs that traditional finance cannot cover, including identity mapping, permission governance, programmatic risk control, liability attribution, and M2M/A2A micropayments (settlement per token / per second). The key question is who can become the "AI financial steward" trusted by enterprises.
2. Native Settlement Track: the x402 Protocol Ecosystem and the Endgame of Facilitators
Applicable scenarios: High-frequency, micropayment, M2M/A2A digital-native transactions (API billing, resource-stream payments).
Core logic: As an open standard, x402 achieves atomic binding of payment and resources through the HTTP 402 status code. In programmable micropayment and M2M/A2A scenarios, x402 is currently the protocol with the most complete ecosystem and most advanced implementation (HTTP-native + on-chain settlement); its status in the Agent economy is expected to be analogous to a "Stripe for agents".
Simply integrating x402 on the Client or Service side does not bring a sector premium; what truly has growth potential are upper-layer assets that can accumulate long-term repeat purchases and high-frequency calls, such as OS-level Agent clients, robot/IoT wallets, and high-value API services (market data, GPU reasoning, real-world task execution, etc.).
The Facilitator, as the protocol gateway that helps Client and Server complete the payment handshake, invoice generation, and fund clearing, controls both traffic and settlement fees, and is the link closest to "revenue" in the current x402 stack. Most Facilitators are essentially just "payment executors" with obviously low barriers and high homogeneity; giants with availability and compliance advantages (such as Coinbase) are likely to form a dominant pattern. To avoid marginalization, core value will move up to the "Facilitator + X" service layer: providing high-margin capabilities such as arbitration, risk control, and treasury management by building verifiable service catalogs and reputation systems.
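To ground the Facilitator role discussed above, the sketch below shows the shape of a verify-then-settle loop: check the payment authorization, submit the transfer on-chain, and return an auditable settlement record. Every function name (`verify_signature`, `submit_transfer`, etc.) is a hypothetical placeholder, not Coinbase CDP's or any other Facilitator's real API.

```python
# Illustrative facilitator loop: verification, execution, and settlement proof.
import time
from dataclasses import dataclass

@dataclass
class Settlement:
    tx_hash: str
    network: str
    amount: str
    payer: str
    payee: str
    settled_at: float

def settle_payment(authorization: dict, chain_client) -> Settlement:
    # 1. Verification: signature, expiry window, replay (nonce) and basic compliance checks
    if not chain_client.verify_signature(authorization):
        raise ValueError("invalid payment signature")
    if authorization["validBefore"] < time.time():
        raise ValueError("authorization expired")
    if chain_client.nonce_used(authorization["nonce"]):
        raise ValueError("replayed authorization")

    # 2. Execution: submit the transfer on-chain and wait for confirmation
    tx_hash = chain_client.submit_transfer(authorization)
    chain_client.wait_for_confirmation(tx_hash)

    # 3. Settlement proof: returned to server/client and retained for audit
    return Settlement(
        tx_hash=tx_hash,
        network=authorization["network"],
        amount=str(authorization["value"]),
        payer=authorization["from"],
        payee=authorization["to"],
        settled_at=time.time(),
    )
```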
We believe that a "Dual-Track Parallel of Fiat System and Stablecoin System" will form in the future: the former supports mainstream human commerce, while the latter carries machine-native and on-chain native high-frequency, cross-border, and micropayment scenarios. The role of Web3 is not to replace traditional payments, but to provide underlying capabilities of Verifiable Identity, Programmable Clearing, and Global Stablecoins for the Agent era. Ultimately, Agentic Commerce is not limited to payment optimization, but is a reconstruction of the machine economic order. When billions of micro-transactions are automatically completed by Agents in the background, those protocols and companies that first provide trust, coordination, and optimization capabilities will become the core forces of the next generation of global commercial infrastructure.
Disclaimer: This article was completed with the assistance of AI tools ChatGPT-5 and Gemini 3 during the creation process. The author has made every effort to proofread and ensure the information is true and accurate, but omissions may still exist, and understanding is appreciated. It is important to note that the crypto asset market generally has a divergence between project fundamentals and secondary market price performance. The content of this article is for information integration and academic/research exchange only, does not constitute any investment advice, and should not be considered as a recommendation for buying or selling any tokens.
In the Agentic Commerce system, real-world merchant networks are the largest value scenario. However AI agents evolve, the traditional fiat payment system (Stripe, Visa, Mastercard, bank transfers) and the fast-growing stablecoin system (USDC, x402) will coexist for a long time, jointly forming the foundation of agentic commerce.
Traditional Fiat Payments vs. Stablecoin Payments: Comparison
The core of Agentic Commerce is not to have one payment rail replace another, but to hand the execution of "order, authorize, pay" to AI agents, so that the traditional fiat payment system (AP2, authorization credentials, identity compliance) and the stablecoin system (x402, CCTP, smart-contract settlement) each play to their strengths. It is neither a zero-sum contest of fiat vs. stablecoins nor a single-rail replacement narrative, but a structural opportunity that expands both sides' capabilities at once: fiat payments continue to support human commerce, while stablecoin payments accelerate machine-native and on-chain-native scenarios; the two complement each other as the dual engines of the agent economy.
II. Panorama of the Underlying Protocol Standards for Agentic Commerce
x402's technical advantages: it supports on-chain micropayments as low as one cent, overcoming traditional payment gateways' inability to handle high-frequency, small-value calls in AI scenarios; it removes accounts, KYC, and API keys entirely, allowing AI to complete the M2M payment loop autonomously; and it enables gasless USDC authorization payments via EIP-3009, with native compatibility with Base and Solana and multi-chain extensibility.
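For readers unfamiliar with the gasless mechanism mentioned above, the following sketch builds an EIP-3009 "TransferWithAuthorization" message: the payer signs it off-chain (EIP-712) and a facilitator relays it, so the payer spends no gas. The type fields come from the EIP-3009 standard; the domain values (name, version, contract address) vary by USDC deployment and are placeholders here.

```python
# Sketch of an EIP-3009 authorization payload. The resulting struct can be signed
# with any EIP-712-capable wallet or library and then relayed on-chain by a
# facilitator calling the token's transferWithAuthorization(...) function.
import os
import time

def build_transfer_with_authorization(payer: str, payee: str, value_units: int,
                                      chain_id: int, usdc_address: str) -> dict:
    now = int(time.time())
    return {
        "domain": {
            "name": "USD Coin",              # token-specific; check the deployment
            "version": "2",                  # placeholder, deployment-dependent
            "chainId": chain_id,
            "verifyingContract": usdc_address,
        },
        "types": {
            "TransferWithAuthorization": [
                {"name": "from", "type": "address"},
                {"name": "to", "type": "address"},
                {"name": "value", "type": "uint256"},
                {"name": "validAfter", "type": "uint256"},
                {"name": "validBefore", "type": "uint256"},
                {"name": "nonce", "type": "bytes32"},
            ]
        },
        "primaryType": "TransferWithAuthorization",
        "message": {
            "from": payer,
            "to": payee,
            "value": value_units,                      # e.g. 10_000 = 0.01 USDC (6 decimals)
            "validAfter": 0,
            "validBefore": now + 300,                  # 5-minute validity window
            "nonce": "0x" + os.urandom(32).hex(),      # random 32-byte replay guard
        },
    }
```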
The Convergent Evolution of Automation, AI, and Web3 in the Robotics Industry
Author: 0xjacobzhao | https://linktr.ee/0xjacobzhao This independent research report is supported by IOSG Ventures. The author thanks Hans (RoboCup Asia-Pacific), Nichanan Kesonpat (1kx), Robert Koschig (1kx), Amanda Young (Collab+Currency), Jonathan Victor (Ansa Research), Lex Sokolin (Generative Ventures), Jay Yu (Pantera Capital), Jeffrey Hu (Hashkey Capital) for their valuable comments, as well as contributors from OpenMind, BitRobot, peaq, Auki Labs, XMAQUINA, GAIB, Vader, Gradient, Tashi Network and CodecFlow for their constructive feedback. While every effort has been made to ensure objectivity and accuracy, some insights inevitably reflect subjective interpretation, and readers are encouraged to engage with the content critically.
I. Robotics: From Industrial Automation to Humanoid Intelligence
The traditional robotics industry has developed a vertically integrated value chain comprising four main layers: core components, control systems, complete machines, and system integration & applications.
Core components (controllers, servos, reducers, sensors, batteries, etc.) have the highest technical barriers, defining both performance ceilings and cost floors.
Control systems act as the robot's "brain and cerebellum," responsible for decision-making and motion planning.
Complete machine manufacturing reflects the ability to integrate complex supply chains.
System integration and application development determine the depth of commercialization and are becoming the key sources of value creation.
Globally, robotics is evolving along a clear trajectory, from industrial automation → scenario-specific intelligence → general-purpose intelligence, forming five major categories: industrial robots, mobile robots, service robots, special-purpose robots, and humanoid robots.
Industrial Robots: Currently the only fully mature segment, industrial robots are widely deployed in welding, assembly, painting, and handling processes across manufacturing lines. The industry features standardized supply chains, stable margins, and well-defined ROI. Within this category, collaborative robots (cobots), designed for safe human-robot collaboration, lightweight operation, and rapid deployment, form a notable subsegment. Representative companies: ABB, Fanuc, Yaskawa, KUKA, Universal Robots, JAKA, and AUBO.
Mobile Robots: Including AGVs (Automated Guided Vehicles) and AMRs (Autonomous Mobile Robots), this category is widely adopted in logistics, e-commerce fulfillment, and factory transport. It is the most mature segment for B2B applications. Representative companies: Amazon Robotics, Geek+, Quicktron, Locus Robotics.
Service Robots: Targeting consumer and commercial sectors such as cleaning, food service, and education, this is the fastest-growing category on the consumer side. Cleaning robots now follow a consumer-electronics logic, while medical and delivery robots are rapidly commercializing. A new wave of more general manipulators (e.g., two-arm systems like Dyna) is emerging: more flexible than task-specific products, yet not as general as humanoids. Representative companies: Ecovacs, Roborock, Pudu Robotics, KEENON Robotics, iRobot, Dyna.
Special-Purpose Robots: Designed for high-risk or niche applications in healthcare, military, construction, marine, and aerospace, these robots serve small but profitable markets with strong entry barriers, typically relying on government or enterprise contracts. Representative companies: Intuitive Surgical, Boston Dynamics, ANYbotics, NASA Valkyrie, Honeybee Robotics.
Humanoid Robots: Regarded as the future "universal labor platform," humanoid robots are drawing the most attention at the frontier of embodied intelligence. Representative companies: Tesla (Optimus), Figure AI (Figure 01), Sanctuary AI (Phoenix), Agility Robotics (Digit), Apptronik (Apollo), 1X Robotics, Neura Robotics, Unitree, UBTECH, Agibot.
The core value of humanoid robots lies in their human-like morphology, allowing them to operate within existing social and physical environments without infrastructure modification. Unlike industrial robots that pursue peak efficiency, humanoids emphasize general adaptability and task transferability, enabling seamless deployment across factories, homes, and public spaces.
Most humanoid robots remain in the technical demonstration stage, focused on validating dynamic balance, locomotion, and manipulation capabilities. While limited deployments have begun to appear in highly controlled factory settings (e.g., Figure × BMW, Agility Digit), and additional vendors such as 1X are expected to enter early distribution starting in 2026, these are still narrow-scope, single-task applications, not true general-purpose labor integration. Meaningful large-scale commercialization is still years away. The core bottlenecks span several layers:
Multi-DOF coordination and real-time dynamic balance remain challenging;
Energy and endurance are constrained by battery density and actuator efficiency;
Perception-decision pipelines often destabilize in open environments and fail to generalize;
A significant data gap limits the training of generalized policies;
Cross-embodiment transfer is not yet solved;
Hardware supply chains and cost curves, especially outside China, remain substantial barriers, making low-cost, large-scale deployment difficult.
The commercialization of humanoid robotics will advance in three stages: Demo-as-a-Service in the short term, driven by pilots and subsidies; Robotics-as-a-Service (RaaS) in the mid term, as task and skill ecosystems emerge; and a Labor Cloud model in the long term, where value shifts from hardware to software and networked services. Overall, humanoid robotics is entering a pivotal transition from demonstration to self-learning. Whether the industry can overcome the intertwined barriers of control, cost, and intelligence will determine if embodied intelligence can truly become a scalable economic force.
II. AI × Robotics: The Dawn of the Embodied Intelligence Era
Traditional automation relies heavily on pre-programmed logic and pipeline-based control architectures—such as the DSOP paradigm (perception–planning–control)—which function reliably only in structured environments. The real world, however, is far more complex and unpredictable. The new generation of Embodied AI follows an entirely different paradigm: leveraging large models and unified representation learning to give robots cross-scene capabilities for understanding, prediction, and action. Embodied intelligence emphasizes the dynamic coupling of the body (hardware), the brain (models), and the environment (interaction). The robot is merely the vehicle—intelligence is the true core.
Generative AI represents intelligence in the symbolic and linguistic world—it excels at understanding language and semantics. Embodied AI, by contrast, represents intelligence in the physical world—it masters perception and action. The two correspond to the "brain" and "body" of AI evolution, forming two parallel but converging frontiers. From an intelligence hierarchy perspective, Embodied AI is a higher-order capability than generative AI, but its maturity lags far behind. LLMs benefit from abundant internet-scale data and a well-defined "data → compute → deployment" loop. Robotic intelligence, however, requires egocentric, multimodal, action-grounded data—teleoperation trajectories, first-person video, spatial maps, manipulation sequences—which do not exist by default and must be generated through real-world interaction or high-fidelity simulation. This makes data far scarcer, costlier, and harder to scale. While simulated and synthetic data help, they cannot fully replace real sensorimotor experience. This is why companies like Tesla and Figure must operate teleoperation factories, and why data-collection farms have emerged in Southeast Asia. In short, LLMs learn from existing data; robots must create their own through physical interaction.
In the next 5–10 years, both will deeply converge through Vision–Language–Action (VLA) models and Embodied Agent architectures—LLMs will handle high-level cognition and planning, while robots will execute real-world actions, forming a bidirectional loop between data and embodiment, thus propelling AI from language intelligence toward true general intelligence (AGI).
The Core Technology Stack of Embodied Intelligence
Embodied AI can be conceptualized as a bottom-up intelligence stack, comprising: VLA (Perception Fusion), RL/IL/SSL (Learning), Sim2Real (Reality Transfer), World Model (Cognitive Modeling), and Swarm & Reasoning (Collective Intelligence and Memory).
Perception & Understanding: Vision–Language–Action (VLA)
The VLA model integrates Vision, Language, and Action into a unified multimodal system, enabling robots to understand human instructions and translate them into physical operations. The execution pipeline includes semantic parsing, object detection, path planning, and action execution, completing the full loop of "understand semantics → perceive world → complete task." Representative projects: Google RT-X, Meta Ego-Exo, and Figure Helix, showcasing breakthroughs in multimodal understanding, immersive perception, and language-conditioned control.
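The schematic below makes this pipeline concrete as a single control step, from instruction grounding to low-level action. All components (`parse_instruction`, `detect_objects`, etc.) are placeholders for illustration, not the API of RT-X, Helix, or any real model.

```python
# Schematic of one VLA control step:
# semantic parsing -> object detection -> path planning -> action execution.
from dataclasses import dataclass
import numpy as np

@dataclass
class Observation:
    rgb: np.ndarray      # current camera frame
    depth: np.ndarray    # aligned depth map

def run_vla_step(instruction: str, obs: Observation, vla_model, robot) -> None:
    # 1. Semantic parsing: ground the instruction into a task goal, e.g. "pick(red_cup)"
    goal = vla_model.parse_instruction(instruction)

    # 2. Perception: locate task-relevant objects in the current observation
    detections = vla_model.detect_objects(obs, queries=goal.objects)

    # 3. Planning: produce a motion plan toward the goal given the scene geometry
    plan = vla_model.plan_path(goal, detections, obs.depth)

    # 4. Action: decode the plan into low-level commands and execute them
    for action in vla_model.decode_actions(plan):
        robot.apply(action)
```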
VLA systems are still in an early stage and face four fundamental bottlenecks:
Semantic ambiguity and weak task generalization: models struggle to interpret vague or open-ended instructions;
Unstable vision-action alignment: perception errors are amplified during planning and execution;
Sparse and non-standardized multimodal data: collection and annotation remain costly, making it difficult to build large-scale data flywheels;
Long-horizon challenges across temporal and spatial axes: long temporal horizons strain planning and memory, while large spatial horizons require reasoning about out-of-perception elements, something current VLAs lack due to limited world models and cross-space inference.
These issues collectively constrain VLA's cross-scenario generalization and limit its readiness for large-scale real-world deployment.
Learning & Adaptation: SSL, IL, and RL
Self-Supervised Learning (SSL): Enables robots to infer patterns and physical laws directly from perception data, teaching them to "understand the world."
Imitation Learning (IL): Allows robots to mimic human or expert demonstrations, helping them "act like humans."
Reinforcement Learning (RL): Uses reward-punishment feedback loops to optimize policies, helping them "learn through trial and error."
In Embodied AI, these paradigms form a layered learning system: SSL provides representational grounding, IL provides human priors, and RL drives policy optimization, jointly forming the core mechanism of learning from perception to action.
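A minimal sketch of that layering, assuming a PyTorch-style encoder and policy: a self-supervised objective shapes representations, behavior cloning injects human priors, and a policy-gradient step fine-tunes from rewards. Losses and shapes are illustrative, not any specific project's training recipe.

```python
# Three training steps, one per layer of the stack described above.
import torch
import torch.nn.functional as F

def ssl_step(encoder, frames, opt):
    # Self-supervised objective: predict the embedding of the next frame
    pred, target = encoder(frames[:-1]), encoder(frames[1:]).detach()
    loss = F.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()

def imitation_step(policy, encoder, obs, expert_actions, opt):
    # Behavior cloning: match expert demonstrations (discrete action ids)
    loss = F.cross_entropy(policy(encoder(obs)), expert_actions)
    opt.zero_grad(); loss.backward(); opt.step()

def rl_step(policy, encoder, obs, actions, advantages, opt):
    # Policy gradient: reinforce actions that led to higher-than-expected return
    log_probs = torch.log_softmax(policy(encoder(obs)), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * advantages).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```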
Sim2Real: Bridging Simulation and Reality
Simulation-to-Reality (Sim2Real) allows robots to train in virtual environments before deployment in the real world. Platforms like NVIDIA Isaac Sim, Omniverse, and DeepMind MuJoCo produce vast amounts of synthetic data, reducing cost and wear on hardware. The goal is to minimize the "reality gap" through:
Domain Randomization: Randomly altering lighting, friction, and noise to improve generalization (sketched below).
Physical Calibration: Using real sensor data to adjust simulation physics for realism.
Adaptive Fine-tuning: Rapid on-site retraining for stability in real environments.
Sim2Real forms the central bridge for embodied AI deployment. Despite strong progress, challenges remain around the reality gap, compute costs, and real-world safety. Nevertheless, Simulation-as-a-Service (SimaaS) is emerging as a lightweight yet strategic infrastructure for the Embodied AI era, via PaaS (Platform Subscription), DaaS (Data Generation), and VaaS (Validation) business models.
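Here is a small sketch of the domain-randomization idea referenced above: each training episode resamples the simulator's physics and rendering parameters so the learned policy does not overfit a single simulated world. The `sim` interface is a placeholder, not Isaac Sim's or MuJoCo's actual API, and the parameter ranges are arbitrary.

```python
# One randomized training episode: resample physics/rendering, then roll out.
import random

def randomized_episode(sim, policy, steps: int = 500) -> float:
    sim.set_params(
        friction=random.uniform(0.4, 1.2),          # surface friction coefficient
        light_intensity=random.uniform(0.3, 1.5),   # rendering condition
        sensor_noise_std=random.uniform(0.0, 0.05), # simulated sensor noise
        object_mass_scale=random.uniform(0.8, 1.2), # dynamics perturbation
    )
    obs, total_reward = sim.reset(), 0.0
    for _ in range(steps):
        obs, reward, done = sim.step(policy.act(obs))
        total_reward += reward
        if done:
            break
    return total_reward
```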
Cognitive Modeling: World Model — The Robot's "Inner World"
A World Model serves as the inner brain of robots, allowing them to simulate environments and outcomes internally, predicting and reasoning before acting. By learning environmental dynamics, it enables predictive and proactive behavior. Representative projects: DeepMind Dreamer, Google Gemini + RT-2, Tesla FSD V12, NVIDIA WorldSim. Core techniques include:
Latent Dynamics Modeling: Compressing high-dimensional observations into latent states.
Imagination-based Planning: Virtual trial-and-error for path prediction.
Model-based Reinforcement Learning: Replacing real-world trials with internal simulations.
World Models mark the transition from reactive to predictive intelligence, though challenges persist in model complexity, long-horizon stability, and standardization.
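The sketch below shows how the three techniques above fit together in one planning call, in the spirit of Dreamer-style agents: encode the observation into a latent state, roll the policy forward inside the learned dynamics model, and act on the imagined outcome. Components are placeholders, not any project's real API.

```python
# Imagination-based planning with a learned latent dynamics model.
def plan_by_imagination(world_model, policy, obs, horizon: int = 15):
    """Roll the policy forward inside the learned model; return the first action."""
    state = world_model.encode(obs)          # compress observation into a latent state
    first_action, imagined_return = None, 0.0
    for t in range(horizon):
        action = policy(state)
        if t == 0:
            first_action = action
        # Latent dynamics: predict next state and reward without touching the real robot
        state, reward = world_model.dynamics(state, action)
        imagined_return += float(reward)
    return first_action, imagined_return
```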
Swarm Intelligence & Reasoning: From Individual to Collective Cognition
Multi-Agent Collaboration and Memory-Reasoning Systems represent the next frontier, extending intelligence from individual agents to cooperative and cognitive collectives.
Multi-Agent Systems (MAS): Enable distributed cooperation among multiple robots via cooperative RL frameworks (e.g., OpenAI Hide-and-Seek, DeepMind QMIX / MADDPG). These have proven effective in logistics, inspection, and coordinated swarm control.
Memory & Reasoning: Equip agents with long-term memory and causal understanding, crucial for cross-task generalization and self-planning. Research examples include DeepMind Gato, Dreamer, and Voyager, enabling continuous learning and "remembering the past, simulating the future."
Together, these components lay the foundation for robots capable of collective learning, memory, and self-evolution.
Global Embodied AI Landscape: Collaboration and Competition
The global robotics industry is entering an era of cooperative competition.
China leads in supply-chain efficiency, manufacturing, and vertical integration, with companies like Unitree and UBTECH already mass-producing humanoids. However, its algorithmic and simulation capabilities still trail the U.S. by several years.
The U.S. dominates frontier AI models and software (DeepMind, OpenAI, NVIDIA), yet this advantage does not fully extend to robotics hardware, where Chinese players often iterate faster and demonstrate stronger real-world performance. This hardware gap partly explains U.S. industrial-reshoring efforts under the CHIPS Act and IRA.
Japan remains the global leader in precision components and motion-control systems, though its progress in AI-native robotics remains conservative.
Korea distinguishes itself through advanced consumer-robotics adoption, driven by LG, NAVER Labs, and a mature service-robot ecosystem.
Europe maintains strong engineering culture, safety standards, and research depth; while much manufacturing has moved abroad, Europe continues to excel in collaboration frameworks and robotics standardization.
Together, these regional strengths are shaping the long-term equilibrium of the global embodied intelligence industry.
III. Robots × AI × Web3: Narrative Vision vs. Practical Pathways
In 2025, a new narrative emerged in Web3 around the fusion of robotics and AI. While Web3 is often framed as the base protocol for a decentralized machine economy, its real integration value and feasibility vary markedly by layer:
Hardware manufacturing & service layer: Capital-intensive with weak data flywheels; Web3 can currently play only a supporting role in edge cases such as supply-chain finance or equipment leasing.
Simulation & software ecosystem: Higher compatibility; simulation data and training jobs can be put on-chain for attribution, and agents/skill modules can be assetized via NFTs or Agent Tokens.
Platform layer: Decentralized labor and collaboration networks show the greatest potential; Web3 can unite identity, incentives, and governance to gradually build a credible "machine labor market," laying the institutional groundwork for a future machine economy.
Long-term vision. The Orchestration and Platform layer is the most valuable direction for integrating Web3 with robotics and AI. As robots gain perception, language, and learning capabilities, they are evolving into intelligent actors that can autonomously decide, collaborate, and create economic value. For these "intelligent workers" to truly participate in the economy, four core hurdles must be cleared: identity, trust, incentives, and governance.
Identity: Machines require attributable, traceable digital identities. With Machine DIDs, each robot, sensor, or UAV can mint a unique verifiable on-chain "ID card," binding ownership, activity logs, and permission scopes to enable secure interaction and accountability.
Trust: "Machine labor" must be verifiable, measurable, and priceable. Using smart contracts, oracles, and audits, combined with Proof of Physical Work (PoPW), Trusted Execution Environments (TEE), and Zero-Knowledge Proofs (ZKP), task execution can be proven authentic and traceable, giving machine behavior accounting value.
Incentives: Web3 enables automated settlement and value flow among machines via token incentives, account abstraction, and state channels. Robots can use micropayments for compute rental and data sharing, with staking/slashing to secure performance; smart contracts and oracles can coordinate a decentralized machine coordination marketplace with minimal human dispatch.
Governance: As machines gain long-term autonomy, Web3 provides transparent, programmable governance: DAOs co-decide system parameters; multisigs and reputation maintain safety and order. Over time, this pushes toward algorithmic governance, where humans set goals and bounds while contracts mediate machine-to-machine incentives and checks.
The ultimate vision of Web3 × Robotics: a real-world evaluation network, with distributed robot fleets acting as "physical-world inference engines" to continuously test and benchmark model performance across diverse, complex environments; and a robotic workforce, with robots executing verifiable physical tasks worldwide, settling earnings on-chain, and reinvesting value into compute or hardware upgrades.
Pragmatic path today. The fusion of embodied intelligence and Web3 remains early; decentralized machine-intelligence economies are largely narrative- and community-driven. Viable near-term intersections concentrate in three areas:
Data crowdsourcing & attribution: on-chain incentives and traceability encourage contributors to upload real-world data.
Global long-tail participation: cross-border micropayments and micro-incentives reduce the cost of data collection and distribution.
Financialization & collaborative innovation: DAO structures can enable robot assetization, revenue tokenization, and machine-to-machine settlement.
Overall, the integration of robotics and Web3 will progress in phases: in the short term, the focus will be on data collection and incentive mechanisms; in the mid term, breakthroughs are expected in stablecoin-based payments, long-tail data aggregation, and the assetization and settlement of RaaS models; and in the long term, as humanoids scale, Web3 could evolve into the institutional foundation for machine ownership, revenue distribution, and governance, enabling a truly decentralized machine economy.
IV. Web3 Robotics Landscape & Curated Cases
Based on three criteria—verifiable progress, technical openness, and industrial relevance—this section maps representative projects at the intersection of Web3 × Robotics, organized into five layers: Model & Intelligence, Machine Economy, Data Collection, Perception & Simulation Infrastructure, and Robot Asset & Yield (RobotFi / RWAiFi). To remain objective, we have removed obvious hype-driven or insufficiently documented projects; please point out any omissions.
Model & Intelligence Layer
OpenMind — Building Android for Robots (https://openmind.org/)
OpenMind is an open-source robot OS for Embodied AI and control, aiming to build the first decentralized runtime and development platform for robots. Two core components:
OM1: A modular, open-source AI agent runtime layer built on top of ROS2, orchestrating perception, planning, and action pipelines for both digital and physical robots.
FABRIC: A distributed coordination layer connecting cloud compute, models, and real robots so developers can control and train robots in a unified environment.
OpenMind acts as the intelligent middleware between LLMs and the robotic world—turning language intelligence into embodied intelligence and providing a scaffold from understanding (Language → Action) to alignment (Blockchain → Rules). Its multi-layered system forms a full collaboration loop: humans provide feedback/labels via the OpenMind App (RLHF data); the Fabric Network handles identity, task allocation, and settlement; OM1 robots execute tasks and conform to an on-chain “robot constitution” for behavior auditing and payments—completing a decentralized cycle of human feedback → task collaboration → on-chain settlement.
Progress & Assessment. OpenMind is in an early “technically working, commercially unproven” phase. OM1 Runtime is open-sourced on GitHub with multimodal inputs and an NL data bus for language-to-action parsing—original but experimental. Fabric and on-chain settlement are interface-level designs so far. Ecosystem ties include Unitree, UBTECH, TurtleBot, and universities (Stanford, Oxford, Seoul Robotics) for education/research; no industrial rollouts yet. The App is in beta; incentives/tasks are early.
Business model: OM1 (open-source) + Fabric (settlement) + Skill Marketplace (incentives). No revenue yet; relies on ~$20M of early financing (Pantera, Coinbase Ventures, DCG). Technically ambitious, with a long path to commercialization and heavy hardware dependence; if Fabric lands, OpenMind could become the "Android of Embodied AI."
CodecFlow — The Execution Engine for Robotics (https://codecflow.ai)
CodecFlow is a decentralized execution layer for robotics on Solana, providing on-demand runtime environments for AI agents and robotic systems, giving each agent an "Instant Machine." Three modules:
Fabric: A cross-cloud and DePIN compute aggregator (Weaver + Shuttle + Gauge) that spins up secure VMs, GPU containers, or robot control nodes in seconds.
optr SDK: A Python framework that abstracts hardware connectors, training algorithms, and blockchain integration, enabling developers to create "Operators" that control desktops, simulators, or real robots.
Token Incentives: On-chain incentives for open-source contributors, buybacks from revenue, and a future marketplace economy.
Goal: Unify the fragmented robotics ecosystem with a single execution layer that gives builders hardware abstraction, fine-tuning tools, cloud simulation infrastructure, and on-chain economics so they can launch and scale revenue-generating Operators for robots and desktops.
Progress & Assessment. Early Fabric (Go) and optr SDK (Python) are live; the web/CLI can launch isolated compute instances, with integrations with NRN, Chainlink, and peaq. An Operator Marketplace targets late 2025, serving AI developers, robotics labs, and automation operators.
Machine Economy Layer
BitRobot — The World's Open Robotics Lab (https://bitrobot.ai)
A decentralized research & collaboration network for Embodied AI and robotics, co-initiated by FrodoBots Labs and Protocol Labs. Vision: an open architecture of Subnets + Incentives + Verifiable Robotic Work (VRW).
VRW: Defines and verifies the real contribution of each robotic task.
ENT (Embodied Node Token): On-chain robot identity and economic accountability.
Subnets: Organize cross-region collaboration across research, compute, devices, and operators.
Senate + Gandalf AI: Human-AI co-governance for incentives and research allocation.
Since its 2025 whitepaper, BitRobot has run multiple subnets (e.g., SN/01 ET Fugi, SN/05 SeeSaw by Virtuals), enabling decentralized teleoperation and real-world data capture, and launched a $5M Grand Challenges fund to spur global research on model development.
peaq — The Machine Economy Computer (https://www.peaq.xyz/)
peaq is a Layer-1 chain built for the Machine Economy, providing machine identities, wallets, access control, and time-sync (Universal Machine Time) for millions of robots and devices. Its Robotics SDK lets builders make robots "Machine Economy–ready" with only a few lines of code, enabling vendor-neutral interoperability and peer-to-peer interaction. The network already hosts the world's first tokenized robotic farm and 60+ real-world machine applications. peaq's tokenization framework allows robotics companies to raise liquidity for capital-intensive hardware and broaden participation beyond traditional B2B/B2C buyers. Its protocol-level incentive pools, funded by network fees, subsidize machine onboarding and support builders, creating a growth flywheel for robotics projects.
Data Layer
Purpose: unlock scarce, costly real-world data for embodied training via teleoperation (PrismaX, BitRobot Network), first-person and motion capture (Mecka, BitRobot Network, Sapien, Vader, NRN), and simulation/synthetic pipelines (BitRobot Network) to build scalable, generalizable training corpora.
Note: Web3 does not produce data better than Web2 giants; its value lies in redistributing data economics. With stablecoin rails and crowdsourcing, permissionless incentives and on-chain attribution enable low-cost micro-settlement, provenance, and automatic revenue sharing. Open crowdsourcing still faces quality-control and buyer-demand gaps.
PrismaX (https://gateway.prismax.ai)
A decentralized teleoperation and data economy for Embodied AI, aiming to build a global robot labor market where human operators, robots, and AI models co-evolve via on-chain incentives.
Teleoperation Stack: Browser/VR UI + SDK connects global arms and service robots for real-time control and data capture.
Eval Engine: CLIP + DINOv2 + optical-flow semantic scoring to grade each trajectory and settle on-chain.
This completes the loop of teleop → data capture → model training → on-chain settlement, turning human labor into data assets.
Progress & Assessment. Testnet live since Aug 2025 (gateway.prismax.ai). Users can teleoperate arms for grasping tasks and generate training data. The Eval Engine is running internally. Clear positioning and high technical completeness make it a strong candidate for a decentralized labor and data protocol for the embodied era, but near-term scale remains a challenge.
BitRobot Network (https://bitrobot.ai/)
BitRobot Network subnets power data collection across video, teleoperation, and simulation. With SN/01 ET Fugi, users remotely control robots to complete tasks, collecting navigation and perception data in a "real-world Pokemon Go game". The game led to the creation of FrodoBots-2K, one of the largest open human-robot navigation datasets, used by UC Berkeley RAIL and Google DeepMind. SN/05 SeeSaw crowdsources egocentric video data via iPhone from real-world environments at scale. Other announced subnets, RoboCap and Rayvo, focus on egocentric video data collection via low-cost embodiments.
Mecka (https://www.mecka.ai)
Mecka is a robotics data company that crowdsources egocentric video, motion, and task demonstrations, via gamified mobile capture and custom hardware rigs, to build large-scale multimodal datasets for embodied AI training.
Sapien (https://www.sapien.io/)
A crowdsourcing platform for human motion data to power robot intelligence. Via wearables and mobile apps, Sapien gathers human pose and interaction data to train embodied models, building a global motion data network.
Vader (https://www.vaderai.ai)
Vader crowdsources egocentric video and task demonstrations through EgoPlay, a real-world MMO where users record daily activities from a first-person view and earn $VADER. Its ORN pipeline converts raw POV footage into privacy-safe, structured datasets enriched with action labels and semantic narratives, optimized for humanoid policy training.
NRN Agents (https://www.nrnagents.ai/)
A gamified embodied-RL data platform that crowdsources human demonstrations through browser-based robot control and simulated competitions. NRN generates long-tail behavioral trajectories for imitation learning and continual RL, using sport-like tasks as scalable data primitives for sim-to-real policy training.
Embodied Data Collection — Project Comparison
Middleware & Simulation
The Middleware & Simulation layer forms the backbone between physical sensing and intelligent decision-making, covering localization, communication, spatial mapping, and large-scale simulation. The field is still early: projects are exploring high-precision positioning, shared spatial computing, protocol standardization, and distributed simulation, but no unified standard or interoperable ecosystem has yet emerged.
Middleware & Spatial Infrastructure
Core robotic capabilities—navigation, localization, connectivity, and spatial mapping—form the bridge between the physical world and intelligent decision-making. While broader DePIN projects (Silencio, WeatherXM, DIMO) now mention "robotics," the projects below are the ones most directly relevant to embodied AI.
RoboStack — Cloud-Native Robot Operating Stack (https://robostack.io)
A cloud-native robot OS and control stack integrating ROS2, DDS, and edge computing. Its RCP (Robot Control Protocol) aims to make robots callable and orchestrable like cloud services.
GEODNET — Decentralized GNSS Network (https://geodnet.com)
A global decentralized satellite-positioning network offering cm-level RTK/GNSS. With distributed base stations and on-chain incentives, it supplies high-precision positioning for drones, autonomous driving, and robots, becoming the geo-infrastructure layer of the machine economy.
Auki — Posemesh for Spatial Computing (https://www.auki.com)
A decentralized Posemesh network that generates shared real-time 3D maps via crowdsourced sensors and compute, enabling AR, robot navigation, and multi-device collaboration; key infrastructure fusing AR × Robotics.
Tashi Network — Real-Time Mesh Coordination for Robots (https://tashi.network)
A decentralized mesh network enabling sub-30ms consensus, low-latency sensor exchange, and multi-robot state synchronization. Its MeshNet SDK supports shared SLAM, swarm coordination, and robust map updates for real-time embodied AI.
Staex — Decentralized Connectivity & Telemetry (https://www.staex.io)
A decentralized connectivity and device-management layer from Deutsche Telekom R&D, providing secure communication, trusted telemetry, and device-to-cloud routing. Staex enables robot fleets to exchange data reliably and interoperate across operators.
Distributed Simulation & Learning Systems
Gradient — Towards Open Intelligence (https://gradient.network/)
Gradient is an AI R&D lab dedicated to building Open Intelligence, enabling distributed training, inference, verification, and simulation on decentralized infrastructure. Its current technology stack includes Parallax (distributed inference), Echo (distributed reinforcement learning and multi-agent training), and Gradient Cloud (enterprise AI solutions). In robotics, Gradient is developing Mirage, a distributed simulation and robotic learning platform designed to build generalizable world models and universal policies, supporting dynamic interactive environments and large-scale parallel training. Mirage is expected to release its framework and model soon, and the team has been in discussions with NVIDIA regarding potential collaboration.
Robot Asset & Yield (RobotFi / RWAiFi)
This layer converts robots from productive tools into financializable assets through tokenization, revenue distribution, and decentralized governance, forming the financial infrastructure of the machine economy.
XmaquinaDAO — Physical AI DAO (https://www.xmaquina.io)
XMAQUINA is a decentralized ecosystem providing global, liquid exposure to leading private humanoid-robotics and embodied-AI companies, bringing traditionally VC-only opportunities on-chain. Its token DEUS functions as a liquid index and governance asset, coordinating treasury allocations and ecosystem growth. The DAO Portal and Machine Economy Launchpad enable the community to co-own and support emerging Physical AI ventures through tokenized machine assets and structured on-chain participation.
GAIB — The Economic Layer for AI Infrastructure (https://gaib.ai/)
GAIB provides a unified economic layer for real-world AI infrastructure such as GPUs and robots, connecting decentralized capital to productive AI infrastructure assets and making yields verifiable, composable, and on-chain. For robotics, GAIB does not "sell robot tokens." Instead, it financializes robot equipment and operating contracts (RaaS, data collection, teleoperation) on-chain, converting real cash flows into composable on-chain yield assets. This spans equipment financing (leasing/pledge), operational cash flows (RaaS/data services), and data-rights revenue (licensing/contracts), making robot assets and their income measurable, priceable, and tradable. GAIB uses AID / sAID as settlement and yield carriers, backed by structured risk controls (over-collateralization, reserves, insurance). Over time it integrates with DeFi derivatives and liquidity markets to close the loop from "robot assets" to "composable yield assets." The goal: to become the economic backbone of intelligence in the AI era.
Web3 Robotics Stack Link: https://fairy-build-97286531.figma.site/
V. Conclusion: Present Challenges and Long-Term Opportunities
From a long-term perspective, the fusion of Robotics × AI × Web3 aims to build a decentralized machine economy (DeRobot Economy), moving embodied intelligence from "single-machine automation" to networked collaboration that is ownable, settleable, and governable. The core logic is a self-reinforcing loop—"Token → Deployment → Data → Value Redistribution"—through which robots, sensors, and compute nodes gain on-chain ownership, transact, and share proceeds. That said, at today's stage this paradigm remains early-stage exploration, still far from stable cash flows and a scaled commercial flywheel. Many projects are narrative-led with limited real deployment. Robotics manufacturing and operations are capital-intensive; token incentives alone cannot finance infrastructure expansion. While on-chain finance is composable, it has not yet solved real-asset risk pricing and cash-flow realization. In short, the "self-sustaining machine network" remains idealized, and its business model requires real-world validation.
Model & Intelligence Layer. This is the most valuable long-term direction. Open-source robot operating systems represented by OpenMind seek to break closed ecosystems and unify multi-robot coordination with language-to-action interfaces. The technical vision is clear and systemically complete, but the engineering burden is massive, validation cycles are long, and industry-level positive feedback has yet to form.
Machine Economy Layer. Still pre-market: the real-world robot base is small, and DID-based identity plus incentive networks struggle to form a self-consistent loop. We remain far from a true "machine labor economy." Only after embodied systems are deployed at scale will the economic effects of on-chain identity, settlement, and collaboration networks become evident.
Data Layer. Barriers are relatively lower, and this is closest to commercial viability today. Embodied data collection demands spatiotemporal continuity and high-precision action semantics, which determine quality and reusability. Balancing crowd scale with data reliability is the core challenge. PrismaX offers a partially replicable template by locking in B-side demand first and then distributing capture/validation tasks, but ecosystem scale and data markets will take time to mature.
Middleware & Simulation Layer. Still in technical validation with no unified standards and limited interoperability. Simulation outputs are hard to standardize for real-world transfer; Sim2Real efficiency remains constrained.
RobotFi / RWAiFi Layer. Web3's role is primarily auxiliary, enhancing transparency, settlement, and financing efficiency in supply-chain finance, equipment leasing, and investment governance, rather than redefining robotics economics itself.
Even so, we believe the intersection of Robotics × AI × Web3 marks the starting point of the next intelligent economic system. It is not only a fusion of technical paradigms; it is also an opportunity to recast production relations. Once machines possess identity, incentives, and governance, human-machine collaboration can evolve from localized automation to networked autonomy. In the short term, this domain will remain driven by narratives and experimentation, but the emerging institutional and incentive frameworks are laying groundwork for the economic order of a future machine society.
In the long run, combining embodied intelligence with Web3 will redraw the boundaries of value creation—elevating intelligent agents into ownable, collaborative, revenue-bearing economic actors.
Disclaimer: This article was assisted by AI tools (ChatGPT-5 and Deepseek). The author has endeavored to proofread and ensure accuracy, but errors may remain. Note that crypto asset markets often exhibit divergence between project fundamentals and secondary-market price action. This content is for information synthesis and academic/research exchange only and does not constitute investment advice or a recommendation to buy or sell any token.
Brevis Research Report: The Infinite Verifiable Computing Layer of zkVM and ZK Data Coprocessor
The paradigm of Verifiable Computing—“off-chain computation + on-chain verification”—has become the universal computational model for blockchain systems. It allows blockchain applications to achieve near-infinite computational freedom while maintaining decentralization and trustlessness as core security guarantees. Zero-knowledge proofs (ZKPs) form the backbone of this paradigm, with applications primarily in three foundational directions: scalability, privacy, and interoperability & data integrity. Scalability was the first ZK application to reach production, moving execution off-chain and verifying concise proofs on-chain for high throughput and low-cost trustless scaling.
The evolution of ZK verifiable computing can be summarized as L2 zkRollup → zkVM → zkCoprocessor → L1 zkEVM.
L2 zkRollups moved execution off-chain while posting validity proofs on-chain, achieving scalability and cost efficiency.
zkVMs expanded into general-purpose verifiable computing, enabling cross-chain validation, AI inference, and cryptographic workloads.
zkCoprocessors modularized this model into plug-and-play proof services for DeFi, RWA, and risk management.
L1 zkEVMs brought this to Layer 1 Realtime Proving (RTP), integrating proofs directly into Ethereum's execution pipeline.
Together, these advances mark blockchain's shift from scalability to verifiability—ushering in an era of trustless computation.
I. Ethereum's zkEVM Scaling Path: From L2 Rollups to L1 Realtime Proving
Ethereum's zkEVM scalability journey can be divided into two phases:
Phase 1 (2022–2024): L2 zkRollups migrated execution to Layer 2 and posted validity proofs on Layer 1—achieving lower costs and higher throughput, but introducing liquidity and state fragmentation while L1 remained constrained by N-of-N re-execution.
Phase 2 (2025– ): L1 Realtime Proving (RTP) replaces full re-execution (N-of-N) with 1-of-N proof generation + lightweight network-wide verification, boosting throughput without compromising decentralization—an approach still under active development.
L2 zkRollups: Balancing Compatibility and Performance
In the flourishing Layer 2 ecosystem of 2022, Ethereum co-founder Vitalik Buterin classified ZK-EVMs into four types—Type 1–4—highlighting the structural trade-offs between compatibility and performance. This framework established the coordinates for zkRollup design:
Type 1: Fully Ethereum-equivalent — replicates Ethereum exactly with no protocol changes, ensuring perfect compatibility but resulting in the slowest proving performance (e.g., Taiko).
Type 2: Fully EVM-equivalent — identical to the EVM at the execution level but allows limited modifications to data structures for faster proof generation (e.g., Scroll, Linea).
Type 2.5: EVM-equivalent except for gas costs — adjusts gas pricing for ZK-unfriendly operations to improve prover efficiency while maintaining broad compatibility (e.g., Polygon zkEVM, Kakarot).
Type 3: Almost EVM-equivalent — simplifies or removes some hard-to-prove features such as precompiles, enabling faster proofs but requiring minor app-level adjustments (e.g., zkSync Era).
Type 4: High-level-language equivalent — compiles Solidity or Vyper directly to ZK-friendly circuits, achieving the best performance but sacrificing bytecode compatibility and requiring ecosystem rebuilds (e.g., StarkNet / Cairo).
Today, the L2 zkRollup model is mature: execution runs off-chain and proofs are verified on-chain, maintaining Ethereum's ecosystem and tooling while delivering high throughput and low cost. Yet liquidity fragmentation and L1's re-execution bottleneck remain persistent issues.
L1 zkEVM: Realtime Proving Redefines Ethereum's Light-Verification Logic
In July 2025, the Ethereum Foundation published "Shipping an L1 zkEVM #1: Realtime Proving", formally proposing the L1 zkEVM roadmap. L1 zkEVM upgrades Ethereum from an N-of-N re-execution model to a 1-of-N proving + constant-time verification paradigm: a small number of provers re-execute entire blocks to generate succinct proofs, and all other nodes verify them instantly. This enables Realtime Proving (RTP) at the L1 level—enhancing throughput, raising gas limits, and lowering hardware requirements—all while preserving decentralization. The rollout plan envisions zk clients running alongside traditional execution clients, eventually becoming the protocol default once performance, security, and incentive models stabilize.
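The contrast between the two validation models can be stated compactly. The sketch below is conceptual only: `execute_block`, `prove_block`, and `verify_proof` are placeholders, not Ethereum client or zkVM APIs, and real systems carry far more state than shown.

```python
# N-of-N re-execution vs. 1-of-N proving + cheap verification, schematically.

def validate_by_reexecution(block, state, execute_block) -> bool:
    # Every validator pays the full execution cost of the block.
    new_state = execute_block(state, block.transactions)
    return new_state.root == block.state_root

def validate_by_proof(block, proof, verify_proof) -> bool:
    # Only the prover re-executed the block; every other node checks a succinct
    # proof in (near-)constant time, regardless of how heavy the block was.
    return verify_proof(proof, public_inputs=(block.parent_root, block.state_root))
```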
L1 zkEVM Roadmap: Three Core Tracks
Realtime Proving (RTP): Achieving block-level proof generation within a 12-second slot via parallelization and hardware acceleration.
Client & Protocol Integration: Standardizing proof-verification interfaces—initially optional, later default.
Incentive & Security Design: Establishing a prover marketplace and fee model to reinforce censorship resistance and network liveness.
L1 zkEVM's Realtime Proving (RTP) uses zkVMs to re-execute entire blocks off-chain and produce cryptographic proofs, allowing validators to verify results in under 10 seconds—replacing "re-execution" with "verify instead of execute" to drastically enhance Ethereum's scalability and trustless validation efficiency. According to the Ethereum Foundation's zkEVM Tracker, the main teams participating in the L1 zkEVM RTP roadmap include: SP1 Turbo (Succinct Labs), Pico (Brevis), Risc Zero, ZisK, Airbender (zkSync), OpenVM (Axiom), and Jolt (a16z).
II. Beyond Ethereum: General-Purpose zkVMs and zkCoprocessors
Beyond the Ethereum ecosystem, zero-knowledge proof (ZKP) technology has expanded into the broader field of Verifiable Computing, giving rise to two core technical systems: zkVMs and zkCoprocessors.
zkVM: General-Purpose Verifiable Computing Layer
A zkVM (zero-knowledge virtual machine) serves as a verifiable execution engine for arbitrary programs, typically built on instruction set architectures such as RISC-V, MIPS, or WASM. Developers can compile business logic into the zkVM, where provers execute it off-chain and generate zero-knowledge proofs (ZKPs) that can be verified on-chain. This enables applications ranging from Ethereum L1 block proofs to cross-chain validation, AI inference, cryptographic computation, and complex algorithmic verification. Its key advantages lie in generality and flexibility, supporting a wide range of use cases; however, it also entails high circuit complexity and proof generation costs, requiring multi-GPU parallelism and deep engineering optimization. Representative projects include Risc Zero, Succinct SP1, and Brevis Pico / Prism.
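The host-side workflow this paragraph describes (compile, prove off-chain, verify cheaply) can be sketched as follows. Function names are illustrative of the general pattern, not the SDK of Risc Zero, SP1, or Pico; on-chain, the same verification step would be performed by a verifier contract.

```python
# Schematic prove/verify workflow around a generic zkVM.
def prove_and_verify(zkvm, guest_program_path: str, inputs: bytes) -> bytes:
    # 1. Build the guest program for the zkVM's instruction set (e.g. RISC-V)
    image = zkvm.compile(guest_program_path)

    # 2. Off-chain proving: execute the program and produce (public outputs, proof)
    outputs, proof = zkvm.prove(image, inputs)

    # 3. Verification: a cheap check that the outputs really came from this program
    #    run on these inputs; rejects any tampered execution.
    assert zkvm.verify(image.id, outputs, proof), "proof rejected"
    return outputs
```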
zkCoprocessor: Scenario-Specific Verifiable Module
A zkCoprocessor provides plug-and-play computation and proof services for specific business scenarios. These platforms predefine data access and circuit logic—such as historical on-chain data queries, TVL calculations, yield settlement, and identity verification—so that applications can simply call SDKs or APIs to receive both computation results and on-chain proofs. This model offers fast integration, high performance, and low cost, though it sacrifices generality. Representative projects include Brevis zkCoprocessor, Axiom.
Comparative Logic and Core Differences
Overall, both zkVMs and zkCoprocessors follow the "off-chain computation + on-chain verification" paradigm of verifiable computing, where zero-knowledge proofs are used to validate off-chain results on-chain. Their economic logic rests on a simple premise: the cost of executing computations directly on-chain is significantly higher than the combined cost of off-chain proof generation and on-chain verification.
In terms of generality vs. engineering complexity:
zkVM — a general-purpose computing infrastructure suitable for complex, cross-domain, or AI-driven tasks, offering maximum flexibility.
zkCoprocessor — a modular verification service tailored for high-frequency, reusable scenarios such as DeFi, RWA, and risk management, offering low-cost, directly callable proof interfaces.
In terms of business models:
zkVM follows a Proving-as-a-Service model, charging per proof (ZKP). It mainly serves L2 Rollups and infrastructure providers, characterized by large contracts, long cycles, and stable gross margins.
zkCoprocessor operates under a Proof-API-as-a-Service model, charging per task via API or SDK integration—similar to SaaS—targeting DeFi and application-layer protocols with fast integration and high scalability.
Overall, zkVMs are the foundational engines of verifiable computation, while zkCoprocessors are the application-layer verification modules. The former builds the technical moat, and the latter drives commercial adoption—together forming a universal trustless computing network.
III. Brevis: Product Landscape and Technical Roadmap
Starting from Ethereum's L1 Realtime Proving (RTP), zero-knowledge (ZK) technology is evolving toward an era of Verifiable Computing built upon the architectures of general-purpose zkVMs and zkCoprocessors. Brevis Network represents a fusion of these two paradigms — a universal verifiable computing infrastructure that combines high performance, programmability, and zero-knowledge verification — an Infinite Compute Layer for Everything.
3.1 Pico zkVM: Modular Proof Architecture for General-Purpose Verifiable Computing
In 2024, Vitalik Buterin proposed the concept of "Glue and Coprocessor Architectures", envisioning a structure that separates general-purpose execution layers from specialized coprocessor acceleration layers. Complex computations can thus be divided into flexible business logic (e.g., EVM, Python, RISC-V) and performance-focused structured operations (e.g., GPU, ASIC, hash modules). This "general + specialized" dual-layer model is now converging across blockchain, AI, and cryptographic computing: EVM accelerates via precompiles; AI leverages GPU parallelism; ZK proofs combine general-purpose VMs with specialized circuits. The future lies in optimizing the "glue layer" for security and developer experience, while letting the "coprocessor layer" focus on efficient execution—achieving a balance among performance, security, and openness.
Pico zkVM, developed by Brevis, is a representative realization of this idea. It integrates a general-purpose zkVM with hardware-accelerated coprocessors, merging programmability with high-performance ZK computation.
Its modular architecture supports multiple proof backends (KoalaBear, BabyBear, Mersenne31), freely combining execution, recursion, and compression modules into a ProverChain.
Developers can write business logic in Rust, automatically generating cryptographic proofs without prior ZK knowledge—significantly lowering the entry barrier.
The architecture supports continuous evolution by introducing new proof systems and application-level coprocessors (for on-chain data, zkML, or cross-chain verification).
Compared to Succinct's SP1 (a relatively monolithic RISC-V zkVM) and Risc Zero's R0VM (a universal RISC-V execution model), Pico's modular zkVM + coprocessor system decouples the execution, recursion, and compression phases, supports backend switching, and enables coprocessor integration—yielding superior performance and extensibility.
3.2 Pico Prism: Multi-GPU Cluster Breakthrough
Pico Prism marks a major leap for Brevis in multi-server GPU architecture, setting new records under the Ethereum Foundation’s RTP (Realtime Proving) framework. It achieves 6.9-second average proof time and 96.8% RTP coverage on a 64×RTX 5090 GPU cluster, leading the zkVM performance benchmarks. This demonstrates the transition of zkVMs from research prototypes to production-grade infrastructure through optimizations at the architectural, engineering, hardware, and system levels.
Architecture: Traditional zkVMs (SP1, R0VM) focus on single-machine GPU optimization. Pico Prism pioneers cluster-level zkProving—multi-server, multi-GPU parallel proving—scaling ZK computation through multithreading and sharding orchestration.
Engineering: Implements an asynchronous multi-stage pipeline (Execution / Recursion / Compression), cross-layer data reuse (proof chunk caching, embedding reuse), and multi-backend flexibility—boosting throughput dramatically.
Hardware: On a 64×RTX 5090 ($128K) setup, achieves 6.0–6.9s average proving time and 96.8% RTP coverage, delivering a 3.4× performance-to-cost improvement over SP1 Hypercube (160×4090, 10.3s).
System Evolution: As the first zkVM to meet EF RTP benchmarks (>96% sub-10s proofs, <$100K hardware), Pico Prism establishes zk proving as mainnet-ready infrastructure for Rollups, DeFi, AI, and cross-chain verification scenarios.
3.3 ZK Data Coprocessor: Intelligent ZK Layer for Blockchain Data
Traditional smart contracts “lack memory”—they cannot access historical states, recognize user behavior over time, or analyze cross-chain data. Brevis addresses this with a high-performance ZK Data Coprocessor, enabling contracts to query, compute, and verify historical blockchain data in a trustless way. This empowers data-driven DeFi, active liquidity management, reward distribution, and cross-chain identity verification.
Brevis workflow (a simplified sketch follows these steps):
Data Access: Contracts call APIs to retrieve historical data trustlessly.
Computation Execution: Developers define logic via SDK; Brevis performs off-chain computation and generates ZK proofs.
Result Verification: Proofs are verified on-chain, triggering subsequent contract logic.
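The sketch below walks through the three steps in order: read historical chain data, compute off-chain and attach a proof, then verify on-chain before acting. Function names and the placeholder proof object are assumptions for illustration, not the Brevis SDK or its contracts.

```python
# Sketch of the zkCoprocessor flow described above. All names and the "proof"
# object are placeholders; real systems replace the checks with ZK verification.

def fetch_historical_data(address: str, from_block: int, to_block: int):
    """Step 1 - data access: pull the user's historical records trustlessly."""
    return [{"block": b, "volume": 100 + b % 7} for b in range(from_block, to_block)]

def compute_and_prove(records):
    """Step 2 - off-chain computation: e.g. total trading volume, plus a proof."""
    total_volume = sum(r["volume"] for r in records)
    proof = {"claimed_output": total_volume, "zk": "placeholder"}
    return total_volume, proof

def onchain_verify_and_act(proof) -> bool:
    """Step 3 - on-chain verification: a contract checks the proof and then
    triggers follow-up logic (fee discount, reward, etc.)."""
    return proof["zk"] == "placeholder"       # stands in for the real proof check

records = fetch_historical_data("0xUser", 19_000_000, 19_000_100)
volume, proof = compute_and_prove(records)
if onchain_verify_and_act(proof):
    print(f"verified historical volume: {volume} -> apply fee discount")
```

The economic point is that the contract only pays for verifying a small proof, not for replaying the historical computation on-chain.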
Brevis supports both Pure-ZK and coChain (Optimistic) models:
The former achieves full trustlessness at higher cost.
The latter introduces PoS verification with ZK challenge-response, lowering costs while maintaining verifiability. Validators stake on Ethereum and are slashed if ZK challenges succeed—striking a balance between security and efficiency.
Through the integration of ZK + PoS + SDK, Brevis builds a scalable and verifiable data computation layer. Currently, Brevis powers PancakeSwap, Euler, Usual, Linea, and other protocols. All zkCoprocessor partnerships operate under the Pure-ZK model, providing trusted data support for DeFi incentives, reward distribution, and on-chain identity systems, enabling smart contracts to truly gain “memory and intelligence.”
3.4 Incentra: ZK-Powered Verifiable Incentive Distribution Layer
Incentra, built on the Brevis zkCoprocessor, is a verifiable incentive platform that uses ZK proofs for secure, transparent, and on-chain reward distribution. It enables trustless, low-cost, cross-chain automation, allowing anyone to verify rewards directly while supporting compliant, access-controlled execution.
Supported incentive models:
Token Holding: Rewards based on ERC-20 time-weighted average balances (TWA).
Concentrated Liquidity: Rewards tied to AMM DEX fee ratios; compatible with Gamma, Beefy, and other ALM protocols.
Lending & Borrowing: Rewards derived from average balances and debt ratios.
Already integrated by PancakeSwap, Euler, Usual, and Linea, Incentra enables a fully verifiable on-chain incentive loop—a foundational ZK-level infrastructure for DeFi rewards.
3.5 Brevis: Complete Product and Technology Stack Overview
IV. Brevis zkVM: Technical Benchmarks and Performance Breakthroughs
The Ethereum Foundation (EF)’s L1 zkEVM Realtime Proving (RTP) standard has become the de facto benchmark and entry threshold for zkVMs seeking mainnet integration. Its core evaluation criteria include (encoded as a simple pass/fail check below):
Latency: <= 10s for P99 of mainnet blocks
On-prem CAPEX: <= 100k USD
On-prem power: <= 10kW
Code: fully open source
Security: >= 128 bits
Proof size: <= 300KiB, with no trusted setup
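Read as a gate, the criteria above reduce to a handful of threshold checks. The function below encodes them directly; the thresholds come from the list, while the checker itself and the sample values are illustrative, not an official EF tool.

```python
# The EF RTP criteria above expressed as a simple pass/fail gate. Thresholds are
# taken from the list; sample values are hypothetical.

def meets_rtp(p99_latency_s, capex_usd, power_kw, open_source,
              security_bits, proof_size_kib, trusted_setup):
    return (p99_latency_s <= 10
            and capex_usd <= 100_000
            and power_kw <= 10
            and open_source
            and security_bits >= 128
            and proof_size_kib <= 300
            and not trusted_setup)

# A hypothetical prover configuration that clears every threshold:
print(meets_rtp(p99_latency_s=9.5, capex_usd=95_000, power_kw=8,
                open_source=True, security_bits=128,
                proof_size_kib=280, trusted_setup=False))   # True
```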
In October 2025, Brevis released the report “Pico Prism — 99.6% Real-Time Proving for 45M Gas Ethereum Blocks on Consumer Hardware,” announcing that Pico Prism became the first zkVM to fully meet the Ethereum Foundation’s RTP standard for block-level proving. Running on a 64×RTX 5090 GPU cluster (~$128K), Pico Prism achieved:
Average latency: 6.9 seconds
96.8% <10s coverage and 99.6% <12s coverage for 45M gas blocks, significantly outperforming Succinct SP1 Hypercube (36M gas, 10.3s average, 40.9% <10s coverage)
With 71% lower latency and half the hardware cost, a 3.4× improvement in performance-per-dollar efficiency
Public recognition from the Ethereum Foundation, Vitalik Buterin, and Justin Drake.
V. Brevis Ecosystem Expansion and Application Deployment
The Brevis zkCoprocessor handles complex computations that dApps cannot efficiently perform—such as analyzing historical user behavior, aggregating cross-chain data, or performing large-scale analytics—and outputs zero-knowledge proofs (ZKPs) that can be verified on-chain. This allows on-chain applications to trustlessly consume results by verifying a small proof, dramatically reducing gas, latency, and trust costs. Unlike traditional oracles that merely deliver data, Brevis provides mathematical assurance that the data is correct. Its application scenarios can be broadly categorized as follows:
Intelligent DeFi: Data-driven incentives and personalized user experiences based on behavioral and market history (e.g., PancakeSwap, Uniswap, MetaMask).
RWA & Stable Token Growth: Automated distribution of real-world yield and stablecoin income via ZK verification (e.g., OpenEden, Usual Money, MetaMask USD).
Privacy-Preserving DEX (Dark Pools): Off-chain matching with on-chain verification—upcoming deployment.
Cross-Chain Interoperability: Cross-chain restaking and Rollup–L1 verification, building a shared security layer (e.g., Kernel, Celer, 0G).
Blockchain Bootstrap: ZK-based incentive mechanisms accelerating new chain ecosystems (e.g., Linea, TAC).
High-Performance Blockchains (100× Faster L1s): Leveraging Realtime Proving (RTP) to enhance mainnet throughput (e.g., Ethereum, BNB Chain).
Verifiable AI: Privacy-preserving and verifiable inference for the AgentFi and data-intelligence economy (e.g., Kaito, Trusta).
Network Scale and Metrics (according to Brevis Explorer, as of October 2025):
Over 125 million ZK proofs generated
Covering ~95,000 on-chain addresses and ~96,000 application requests
Cumulative incentive distribution: $223 million+
TVL supported: >$2.8 billion
Total verified transaction volume: >$1 billion
Brevis’s ecosystem currently focuses on DeFi incentive distribution and liquidity optimization, with computing power mainly consumed by Usual Money, PancakeSwap, Linea Ignition, and Incentra, which together account for over 85% of network load.
Usual Money (46.6M proofs): Demonstrates long-term stability in large-scale incentive distribution.
PancakeSwap (20.6M): Highlights Brevis’s performance in real-time fee and discount computation.
Linea Ignition (20.4M): Validates Brevis’s high-concurrency capacity for L2 ecosystem campaigns.
Incentra (15.2% share): Marks Brevis’s transition from SDK toolkit to standardized incentive platform.
DeFi Incentive Layer: Through Incentra, Brevis supports multiple protocols with transparent and continuous reward allocation:
Usual Money — Annual incentives exceeding $300M, sustaining stablecoin and LP yields.
OpenEden & Bedrock — CPI-based models for automated U.S. Treasury and Restaking yield distribution.
Euler, Aave, BeraBorrow — ZK-verified lending positions and reward calculations.
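The core per-user statistic behind Incentra’s token-holding programs (Section 3.4) is a time-weighted average balance. A minimal sketch of that computation follows; the snapshot data and reward rate are assumptions for illustration, not Incentra’s circuits or parameters.

```python
# Minimal sketch of a time-weighted average (TWA) balance computation, the kind
# of per-user statistic these incentive programs compute off-chain and prove.

def time_weighted_average_balance(snapshots, period_start, period_end):
    """snapshots: sorted list of (timestamp, balance); each balance holds until the next snapshot."""
    total = 0.0
    for i, (t, bal) in enumerate(snapshots):
        t_next = snapshots[i + 1][0] if i + 1 < len(snapshots) else period_end
        start, end = max(t, period_start), min(t_next, period_end)
        if end > start:
            total += bal * (end - start)
    return total / (period_end - period_start)

# One 30-day epoch (in seconds); the user tops up on day 10.
DAY = 86_400
snaps = [(0, 1_000), (10 * DAY, 4_000)]
twa = time_weighted_average_balance(snaps, 0, 30 * DAY)
reward = twa * 0.001                       # assumed reward rate per TWA unit
print(round(twa, 2), round(reward, 2))     # 3000.0 TWA -> 3.0 tokens
```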
Liquidity Optimization: Protocols such as PancakeSwap, QuickSwap, THENA, and Beefy employ Brevis’s dynamic fee and ALM incentive plugins for trade discounts and cross-chain yield aggregation. Jojo Exchange and the Uniswap Foundation use ZK verification to build safer, auditable trading incentive systems.
Cross-Chain & Infrastructure Layer: Brevis has expanded from Ethereum to BNB Chain, Linea, Kernel DAO, TAC, and 0G, offering verifiable computation and cross-chain proof capabilities across multiple ecosystems. Projects like Trusta AI, Kaito AI, and MetaMask are integrating Brevis’s ZK Data Coprocessor to power privacy-preserving loyalty programs, reputation scoring, and reward systems, advancing data intelligence within Web3. At the infrastructure level, Brevis leverages the EigenLayer AVS network for restaking security, and integrates NEBRA’s Universal Proof Aggregation (UPA) to compress multiple ZK proofs into single submissions—reducing on-chain verification cost and latency.
Overall, Brevis now spans the full application cycle—from long-term incentive programs and event-based rewards to transaction verification and platform-level services. Its high-frequency verification tasks and reusable circuit templates provide Pico/Prism with real-world performance pressure and optimization feedback, which in turn reinforces the L1 zkVM Realtime Proving (RTP) system at both the engineering and ecosystem levels—forming a two-way flywheel between technology and application.
VI. Team Background and Project Funding
Mo Dong | Co-founder, Brevis Network
Dr. Mo Dong is the co-founder of Brevis Network. He holds a Ph.D. in Computer Science from the University of Illinois at Urbana–Champaign (UIUC). His research has been published in top international conferences, adopted by major technology companies such as Google, and cited thousands of times. As an expert in algorithmic game theory and protocol mechanism design, Dr. Dong focuses on integrating zero-knowledge computation (ZK) with decentralized incentive mechanisms, aiming to build a trustless Verifiable Compute Economy. He also serves as a Venture Partner at IOSG Ventures, where he actively supports early-stage investments in Web3 infrastructure.
The Brevis team was founded by cryptography and computer science Ph.D. holders from UIUC, MIT, and UC Berkeley. The core members have years of research experience in zero-knowledge proof systems (ZKP) and distributed systems, with multiple peer-reviewed publications in the field. Brevis has received technical recognition from the Ethereum Foundation, with its core modules regarded as foundational components for on-chain scalability infrastructure.
In November 2024, Brevis completed a $7.5 million seed round, co-led by Polychain Capital and Binance Labs, with participation from IOSG Ventures, Nomad Capital, HashKey, Bankless Ventures, and strategic angel investors from Kyber, Babylon, Uniswap, Arbitrum, and AltLayer.
VII. Competitive Landscape: zkVM and zkCoprocessor Markets
The Ethereum Foundation–backed ETHProofs.org has become the primary public platform tracking the L1 zkEVM Realtime Proving (RTP) roadmap, providing open data on zkVM performance, security, and mainnet readiness.
RTP Track: Four Core Competitive Dimensions
Maturity: Succinct SP1 leads in production deployment; Brevis Pico demonstrates the strongest performance, nearing mainnet readiness; RISC Zero is stable but has not yet disclosed RTP benchmarks.
Performance: Pico’s proof size (~990 kB) is about 33% smaller than SP1’s (1.48 MB), reducing cost and latency.
Security & Audit: RISC Zero and SP1 have both undergone independent audits; Pico is currently completing its formal audit process.
Developer Ecosystem: Most zkVMs use the RISC-V instruction set; SP1 leverages its Succinct Rollup SDK for broad ecosystem integration; Pico supports Rust-based auto proof generation, with a rapidly maturing SDK.
Market Structure: Two Leading Tiers
Tier 1 — Brevis Pico (+ Prism) & Succinct SP1 Hypercube: Both target the EF RTP P99 ≤ 10 s benchmark. Pico innovates through a distributed multi-GPU architecture, delivering superior performance and cost efficiency. SP1 maintains robustness with a monolithic system and ecosystem maturity. → Pico represents architectural innovation and performance leadership, while SP1 represents production readiness and ecosystem dominance.
Tier 2 — RISC Zero, ZisK, ZKM: These projects focus on lightweight and compatibility-first designs but have not published complete RTP metrics (latency, power, CAPEX, security bits, proof size, reproducibility).
Scroll (Ceno) and Matter Labs (Airbender) are extending Rollup proof systems to the L1 verification layer, signaling a shift from L2 scaling toward L1 verifiable computing.
By 2025, the zkVM field has converged on RISC-V standardization, modular evolution, recursive proof standardization, and parallel hardware acceleration. The Verifiable Compute Layer can be categorized into three main archetypes:
Performance-oriented: Brevis Pico, SP1, Jolt, ZisK — focus on low-latency, realtime proving via recursive STARKs and GPU acceleration.
Modular / Extensible: OpenVM, Pico, SP1 — emphasize plug-and-play modularity and coprocessor integration.
Ecosystem / Developer-friendly: RISC Zero, SP1, ZisK — prioritize SDK completeness and language compatibility for mass adoption.
zkVM Project Comparison (as of Oct 2025)
zkCoprocessor Landscape
The zkCoprocessor market is now led by Brevis, Axiom, Herodotus, and Lagrange.
Brevis stands out with a hybrid architecture combining a ZK Data Coprocessor + General-Purpose zkVM, enabling historical data access, programmable computation, and L1 Realtime Proving (RTP) capability.
Axiom specializes in verifiable queries and circuit callbacks.
Herodotus focuses on provable access to historical blockchain states.
Lagrange adopts a ZK + Optimistic hybrid design to improve cross-chain computation efficiency.
Overall, zkCoprocessors are emerging as “Verifiable Service Layers” that bridge DeFi, RWA, AI, and digital identity through trustless computational APIs.
VIII. Conclusion: Business Logic, Engineering Implementation, and Potential Risks
Business Logic: Performance-Driven Flywheel at Dual Layers
Brevis builds a multi-chain verifiable computing layer by integrating its general-purpose zkVM (Pico/Prism) with a data coprocessor (zkCoprocessor). zkVM addresses verifiability of arbitrary computation; zkCoprocessor enables business deployment for historical and cross-chain data. This creates a “Performance → Ecosystem → Cost” positive feedback loop: as Pico Prism’s RTP performance attracts leading protocol integrations, proof volume scales up and per-proof cost declines, forming a self-reinforcing dual flywheel.
Brevis’s core competitive advantages can be summarized as:
Reproducible performance — verified within the Ethereum Foundation’s ETHProofs RTP framework;
Architectural moat — modular design with multi-GPU parallel scalability;
Commercial validation — large-scale deployment across incentive distribution, dynamic fee modeling, and cross-chain verification.
Engineering Implementation: Verification-as-Execution
Through its Pico zkVM and Prism parallel proving framework, Brevis achieves 6.9-second average latency and P99 < 10 seconds for 45M gas blocks (on a 64×5090 GPU setup, <$130K CAPEX) — maintaining top-tier performance and cost efficiency. The zkCoprocessor module supports historical data access, circuit generation, and on-chain proof verification, flexibly switching between Pure-ZK and Hybrid (Optimistic + ZK) modes. Overall, its performance now aligns closely with the Ethereum RTP hardware and latency benchmarks.
Potential Risks and Key Considerations
Technical & Compliance: Brevis must validate power use, security level, proof size, and trusted setup via third-party audits. Performance tuning and potential EIP changes remain key challenges.
Competition: Succinct (SP1/Hypercube) leads in ecosystem maturity, while RISC Zero, Axiom, OpenVM, Scroll, and zkSync continue to compete strongly.
Revenue Concentration: Proof volume is ~80% concentrated in four apps; diversification across chains and sectors is needed. GPU price volatility may also affect margins.
Overall, Brevis has established an initial moat across both technical reproducibility and commercial deployment: Pico/Prism firmly leads the L1 RTP track, while the zkCoprocessor unlocks high-frequency, reusable business applications. Going forward, Brevis should aim to fully meet the Ethereum Foundation’s RTP benchmarks, continue to standardize coprocessor products and expand ecosystem integration, and advance third-party reproducibility, security audits, and cost transparency. By balancing infrastructure and SaaS-based revenues, Brevis can build a sustainable commercial growth loop.
Disclaimer: This report was prepared with assistance from the AI tool ChatGPT-5. The author has made every effort to ensure factual accuracy and reliability; however, minor errors may remain. Please note that crypto asset markets often show a disconnect between project fundamentals and secondary-market token performance. All content herein is intended for informational and academic/research purposes only, and does not constitute investment advice or a recommendation to buy or sell any token.
Brevis Research Report: The Infinite Trusted Compute Layer of zkVM and Data Coprocessor
“Off-chain computation + on-chain verification” is the verifiable computing paradigm that has become the general computation model for blockchain systems. It allows blockchain applications to gain nearly unlimited computational freedom while preserving decentralization and trust minimization (trustlessness) in terms of security. Zero-knowledge proofs (ZKP) are the central pillar of this paradigm, and their applications concentrate on three fundamental directions: scalability, privacy, and interoperability & data integrity. Among these, scalability is the earliest application field of ZK technology: by moving transaction execution off-chain and using succinct proofs to verify the results on-chain, it achieves high TPS and low-cost trusted scaling.
Cysic Research Report: The ComputeFi Path of ZK Hardware Acceleration
Author: 0xjacobzhao | https://linktr.ee/0xjacobzhao
Zero-Knowledge Proofs (ZK) — as a next-generation cryptographic and scalability infrastructure — are demonstrating immense potential across blockchain scaling, privacy computation, zkML, and cross-chain verification. However, the proof generation process is extremely compute-intensive and latency-heavy, forming the biggest bottleneck for industrial adoption. ZK hardware acceleration has therefore emerged as a core enabler. Within this landscape, GPUs excel in versatility and iteration speed, ASICs pursue ultimate efficiency and large-scale performance, while FPGAs serve as a flexible middle ground combining programmability with energy efficiency. Together, they form the hardware foundation powering ZK’s real-world adoption.
I. The Industry Landscape of ZK Hardware Acceleration
GPU, FPGA, and ASIC represent the three mainstream paths of hardware acceleration:
GPU (Graphics Processing Unit): A general-purpose parallel processor, originally designed for graphics rendering but now widely used in AI, ZK, and scientific computing.
FPGA (Field Programmable Gate Array): A reconfigurable hardware circuit that can be repeatedly configured at the logic-gate level “like LEGO blocks,” bridging between general-purpose processors and specialized circuits.
ASIC (Application-Specific Integrated Circuit): A dedicated chip customized for a specific task. Once fabricated, its function is fixed — offering the highest performance and efficiency but the least flexibility.
GPU Dominance: GPUs have become the backbone of both AI and ZK computation. In AI, GPUs’ parallel architecture and mature software ecosystem (CUDA, PyTorch, TensorFlow) make them nearly irreplaceable — the long-term mainstream choice for both training and inference. In ZK, GPUs currently offer the best trade-off between cost and availability, but their performance in big integer modular arithmetic, MSM, and FFT/NTT operations is limited by memory and bandwidth constraints. Their energy efficiency and scalability economics remain insufficient, suggesting the eventual need for more specialized hardware.
FPGA Flexibility: Paradigm’s 2022 investment thesis highlighted FPGA as the “sweet spot” balancing flexibility, efficiency, and cost. Indeed, FPGAs are programmable, reusable, and quick to prototype, suitable for rapid algorithm iteration, low-latency environments (e.g., high-frequency trading, 5G base stations), edge computing under power constraints, and secure cryptographic tasks. However, FPGAs lag behind GPUs and ASICs in raw performance and scale economics. Strategically, they are best suited as development and iteration platforms before algorithm standardization, or for niche verticals requiring long-term customization.
ASIC as the Endgame: ASICs are already dominant in crypto mining (e.g., Bitcoin’s SHA-256, Litecoin/Dogecoin’s Scrypt). By hardwiring algorithms directly into silicon, ASICs achieve orders of magnitude better performance and energy efficiency — becoming the exclusive infrastructure for mining. In ZK proving (e.g., Cysic) and AI inference (e.g., Google TPU, Cambricon), ASICs show similar potential. Yet, in ZK, algorithmic diversity and operator variability have delayed standardization and large-scale demand. Once standards solidify, ASICs could redefine ZK compute infrastructure — delivering 10–100× improvements in performance and efficiency with minimal marginal cost post-production.
In AI, where training workloads evolve rapidly and rely on dynamic matrix operations, GPUs will remain the mainstream for training. Still, ASICs will hold irreplaceable value in fixed-task, large-scale inference scenarios.
Dimension Comparison: GPU vs FPGA vs ASIC
In the evolution of ZK hardware acceleration, GPUs are currently the optimal solution — balancing cost, accessibility, and development efficiency, making them ideal for rapid deployment and iteration. FPGAs serve more as specialized tools, valuable in ultra-low-latency, small-scale interconnect, and prototyping scenarios, but unable to compete with GPUs in economic efficiency. In the long term, as ZK standards stabilize, ASICs will emerge as the industry’s core infrastructure, leveraging unmatched performance-per-cost and energy efficiency.
Overall trajectory: Short term – rely on GPUs to capture market share and generate revenue; Mid term – use FPGAs for verification and interconnect optimization; Long term – bet on ASICs to build a sustainable compute moat.
II. Hardware Perspective: The Underlying Technical Barriers of ZK Acceleration
Cysic’s core strength lies in hardware acceleration for zero-knowledge proofs (ZK). In the representative paper “ZK Hardware Acceleration: The Past, the Present and the Future,” the team highlights that GPUs offer flexibility and cost efficiency, while ASICs outperform in energy efficiency and peak performance—but require trade-offs between development cost and programmability. Cysic adopts a dual-track strategy — combining ASIC innovation with GPU acceleration — driving ZK from “verifiable” to “real-time usable” through a full-stack approach from custom chips to general SDKs.
1. The ASIC Path: Cysic C1 Chip and Dedicated Devices
Cysic’s self-developed C1 chip is built on a zkVM-based architecture, featuring high bandwidth and flexible programmability. Based on this, Cysic plans to launch two hardware products:
ZK Air: a portable accelerator roughly the size of an iPad charger, plug-and-play, designed for lightweight verification and developer use;
ZK Pro: a high-performance system integrating the C1 chip with front-end acceleration modules, targeting large-scale zkRollup and zkML workloads.
Cysic’s research directly supports its ASIC roadmap. The team introduced Hypercube IR, a ZK-specific intermediate representation that abstracts proof circuits into standardized parallel patterns—reducing the difficulty of cross-hardware migration. It explicitly preserves modular arithmetic and memory access patterns in circuit logic, enabling better hardware recognition and optimization. In Million Keccak/s experiments, a single C1 chip achieved ~1.31M Keccak proofs per second (~13× acceleration), demonstrating the throughput and energy-efficiency potential of specialized hardware. In HyperPlonk hardware analysis, the team showed that MSM/MLE operations parallelize well, while Sumcheck remains a bottleneck. Overall, Cysic is developing a holistic methodology across compiler abstraction, hardware verification, and protocol adaptation, laying a strong foundation for productization.
2. The GPU Path: General SDK + ZKPoG End-to-End Stack
On the GPU side, Cysic is advancing both a general-purpose acceleration SDK and a full ZKPoG (Zero-Knowledge Proof on GPU) stack:
General GPU SDK: built on Cysic’s custom CUDA framework, compatible with Plonky2, Halo2, Gnark, Rapidsnark, and other backends. It surpasses existing open-source frameworks in performance, supports multiple GPU models, and emphasizes compatibility and ease of use.
ZKPoG: developed in collaboration with Tsinghua University, it is the first end-to-end GPU stack covering the entire proof flow—from witness generation to polynomial computation.
On consumer-grade GPUs, it achieves up to 52× speedup (average 22.8×) and expands circuit scale by 1.6×, verified across SHA256, ECDSA, and MVM applications.
Cysic’s key differentiator lies in its hardware–software co-design philosophy. Its in-house ZK ASICs, GPU clusters, and portable mining devices together form a full-stack compute infrastructure, enabling deep integration from the chip layer to the protocol layer. By leveraging the complementarity between ASICs’ extreme energy efficiency and scalability and GPUs’ flexibility and rapid iteration, Cysic has positioned itself as a leading ZKP hardware provider for high-intensity proof workloads — and is now extending this foundation toward the financialization of ZK hardware (ComputeFi) as its next industrial phase.
III. Protocol Perspective: Cysic Network — A Universal Proof Layer under PoC Consensus
On September 24, 2025, the Cysic team released the Cysic Network Whitepaper. The project centers on ComputeFi, financializing GPU, ASIC, and mining hardware into programmable, verifiable, and tradable computational assets. Built with Cosmos CDK, Proof-of-Compute (PoC) consensus, and an EVM execution layer, Cysic Network establishes a decentralized “task-matching + multi-verification” marketplace supporting ZK proving, AI inference, mining, and HPC workloads. By vertically integrating self-developed ZK ASICs, GPU clusters, and portable miners, and powered by a dual-token model ($CYS / $CGT), Cysic aims to unlock real-world compute liquidity — filling a key gap in Web3 infrastructure: verifiable compute power.
Modular Architecture: Four Layers of ComputeFi Infrastructure
Cysic Network adopts a bottom-up four-layer modular architecture, enabling cross-domain expansion and verifiable collaboration:
Hardware Layer: Comprising CPUs, GPUs, FPGAs, ASIC miners, and portable devices — forming the network’s computational foundation.
Consensus Layer: Built on Cosmos CDK, using a modified CometBFT + Proof-of-Compute (PoC) mechanism that integrates token staking and compute staking into validator weighting (see the sketch after this list), ensuring both computational and economic security.
Execution Layer: Handles task scheduling, workload routing, bridging, and voting, with EVM-compatible smart contracts enabling programmable, multi-domain computation.
Product Layer: Serves as the application interface — integrating ZK proof markets, AI inference frameworks, crypto mining, and HPC modules, while supporting new task types and verification methods.
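The whitepaper describes validator weighting as a blend of token staking and compute staking, but this report does not spell out the formula. The sketch below is therefore only a plausible illustration (a normalized blend with an assumed split), not Cysic’s actual PoC weighting.

```python
# Illustrative blend of economic stake and contributed compute into a validator
# weight. The 50/50 split and normalization are assumptions, not the PoC spec.

def validator_weight(token_stake, compute_score,
                     total_token_stake, total_compute_score, alpha=0.5):
    token_share = token_stake / total_token_stake if total_token_stake else 0.0
    compute_share = compute_score / total_compute_score if total_compute_score else 0.0
    return alpha * token_share + (1 - alpha) * compute_share

# Two hypothetical validators: one capital-heavy, one compute-heavy.
print(validator_weight(8_000, 100, total_token_stake=10_000, total_compute_score=1_000))
print(validator_weight(2_000, 900, total_token_stake=10_000, total_compute_score=1_000))
```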
ZK Proof Layer: Decentralization Meets Hardware Acceleration
Zero-knowledge proofs allow computation to be verified without revealing underlying data — but generating these proofs is time- and cost-intensive. Cysic Network enhances efficiency through decentralized Provers + GPU/ASIC acceleration, while off-chain verification and on-chain aggregation reduce latency and verification costs on Ethereum.
Workflow: ZK projects publish proof tasks via smart contracts → decentralized Provers compete to generate proofs → Verifiers perform multi-party validation → results are settled via on-chain contracts.
By combining hardware acceleration with decentralized orchestration, Cysic builds a scalable Proof Layer that underpins ZK Rollups, zkML, and cross-chain applications.
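The task lifecycle described above can be traced end to end in a few lines. Data structures, checks, and the reward handling below are placeholders for illustration only, not Cysic’s contracts or reward formulas.

```python
# Sketch of the publish -> prove -> verify -> settle loop described above.
import random

def publish_task(block_number, reward):
    return {"block": block_number, "reward": reward, "status": "open"}

def prover_generate(task, prover_id):
    # In practice: heavy GPU/ASIC proving work; here a stub "proof" object.
    return {"task": task["block"], "prover": prover_id, "proof": f"zkp-{task['block']}"}

def verifiers_validate(submission, verifiers):
    # Lightweight multi-party check; each verifier re-checks the stub proof.
    votes = [submission["proof"].startswith("zkp-") for _ in verifiers]
    return sum(votes) > len(votes) // 2        # simple majority

def settle(task, submission, valid):
    task["status"] = "settled" if valid else "disputed"
    return {"winner": submission["prover"] if valid else None, "reward": task["reward"]}

task = publish_task(block_number=21_000_000, reward=5.0)
winner = random.choice(["prover-A", "prover-B", "prover-C"])   # stand-in for competition
submission = prover_generate(task, winner)
result = settle(task, submission, verifiers_validate(submission, ["v1", "v2", "v3"]))
print(result)
```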
Node Roles: Cysic Prover Mechanism
Within the network, Prover nodes are responsible for heavy-duty computation. Users can contribute their own compute resources or purchase Digital Harvester devices to perform proof tasks and earn $CYS / $CGT rewards. A Multiplier factor boosts task acquisition speed. Each node must stake 10 CYS as collateral, which may be slashed for misconduct. Currently, the main task is ETHProof Prover — generating ZK proofs for Ethereum mainnet blocks, advancing the base layer’s ZK scalability. Provers thus form the computational and security backbone of the Cysic Network, also providing trusted compute power for future AI inference and AgentFi applications.
Node Roles: Cysic Verifier Mechanism
Complementing Provers, Verifier nodes handle lightweight proof verification to enhance network security and scalability. Users can run Verifiers on a PC, server, or the official Android app, with the Multiplier also boosting task efficiency and rewards. The participation barrier is much lower — requiring only 0.5 CYS as collateral. Verifiers can join or exit freely, making participation accessible and flexible. This low-cost, light-participation model expands Cysic’s reach to mobile and general users, strengthening decentralization and trustworthy verification across the network.
Network Status and Outlook
As of October 15, 2025, the Cysic Network has reached a significant early milestone:
≈42,000 Prover nodes and 100,000+ Verifier nodes
≈91,000 total tasks completed
≈700,000 $CYS/$CGT distributed as rewards
However, despite the impressive node count, activity and compute contribution remain uneven due to entry and hardware differences. Currently, the network is integrated with three external projects, marking the beginning of its ecosystem. Whether Cysic can evolve into a stable compute marketplace and core ComputeFi infrastructure will depend on further real-world integrations and partnerships in the coming phases.
IV. AI Perspective: Cysic AI — Cloud Services, AgentFi, and Verifiable Inference
Cysic AI’s business framework follows a three-tier structure — Product, Application, and Strategy:
At the base, Serverless Inference offers standardized APIs to lower the barrier for AI model access;
At the middle, the Agent Marketplace explores on-chain applications of AI Agents and autonomous collaboration;
At the top, Verifiable AI integrates ZKP + GPU acceleration to enable trusted inference, representing the long-term vision of ComputeFi.
1. Standard Product Layer: Cloud Inference Service (Serverless Inference)
Cysic AI provides instant-access, pay-as-you-go inference services, allowing users to call large language models via APIs without managing or maintaining compute clusters. This serverless design achieves low-cost and flexible intelligent integration for both developers and enterprises. Currently supported models include:
Meta-Llama-3-8B-Instruct (task & dialogue optimization)
QwQ-32B (reasoning-enhanced)
Phi-4 (lightweight instruction model)
Llama-Guard-3-8B (content safety review)
These cover diverse needs — from general conversation and logical reasoning to compliance auditing and edge deployment. The service balances cost and efficiency, supporting both rapid prototyping for developers and large-scale inference for enterprises, forming a foundational layer in Cysic’s trusted AI infrastructure.
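A pay-as-you-go inference call typically looks like a single HTTP request against a chat-completions style endpoint. The endpoint URL, auth header, and payload schema below are assumptions modeled on common inference APIs, not Cysic’s documented interface; only the model names come from the list above.

```python
# Hypothetical call to a serverless inference API of the kind described above.
import json
import urllib.request

def chat(prompt, model="Meta-Llama-3-8B-Instruct",
         endpoint="https://inference.example.invalid/v1/chat/completions",  # placeholder URL
         api_key="YOUR_KEY"):
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:   # billed per call; no cluster to manage
        return json.loads(resp.read())

# chat("Summarize zero-knowledge proofs in one sentence.")
```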
2. Application Layer: Decentralized Intelligent Agent Marketplace (Agent Marketplace)
The Cysic Agent Marketplace functions as a decentralized platform for AI Agent applications. Users can simply connect their Phantom wallet, complete verification, and interact with various Agents — payments are handled automatically through Solana USDC. Currently, the platform integrates three core agents:
X Trends Agent — analyzes real-time X (Twitter) trends and generates creative MEME coin concepts.
Logo Generator Agent — instantly creates custom project logos from user descriptions.
Publisher Agent — deploys MEME coins on the Solana network (e.g., via Pump.fun) with one click.
Technically, the marketplace leverages the Agent Swarm Framework to coordinate multiple autonomous agents into collaborative task groups (Swarms), enabling division of labor, parallelism, and fault tolerance. Economically, it employs the Agent-to-Agent Protocol, achieving on-chain payments and automated incentives where users pay only for successful actions. Together, these features form a complete on-chain loop — trend analysis → content generation → deployment — demonstrating how AI Agents can be financialized and integrated within the ComputeFi ecosystem.
3. Strategic Layer: Hardware-Accelerated Verifiable Inference (Verifiable AI)
A core challenge in AI inference is trust — how to mathematically guarantee that an inference result is correct without exposing inputs or model weights. Verifiable AI addresses this through zero-knowledge proofs (ZKPs), ensuring cryptographic assurance over model outputs. However, traditional ZKML proof generation is too slow for real-time use. Cysic solves this via GPU hardware acceleration, introducing three key technical innovations (a single-threaded sumcheck reference sketch follows this list):
Parallelized Sumcheck Protocol: Breaks large polynomial computations into tens of thousands of CUDA threads running in parallel, achieving near-linear speedup relative to GPU core count.
Custom Finite Field Arithmetic Kernels: Deeply optimized across register allocation, shared memory, and warp-level parallelism to overcome modular arithmetic memory bottlenecks — keeping GPUs consistently saturated and efficient.
End-to-End ZKPoG Acceleration Stack: Covers the full chain — from witness generation to proof creation and verification, compatible with Plonky2 and Halo2 backends. Benchmarking shows up to 52× speedup over CPUs and ~10× acceleration on CNN-4M models.
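To show what the parallelized sumcheck work actually accelerates, here is a minimal single-threaded reference of the first sumcheck round over a toy prime field. The field modulus and evaluation table are arbitrary choices for illustration; production systems split exactly these sums across thousands of GPU threads.

```python
# Minimal single-threaded sketch of one sumcheck round over a small prime field.
P = 2**31 - 1  # toy prime modulus (an assumption, not Cysic's actual field)

def sum_over_hypercube(evals):
    """Total sum of a multilinear polynomial given its evaluations on {0,1}^n."""
    return sum(evals) % P

def round_polynomial(evals):
    """First-round univariate g(X), returned as its values at X=0 and X=1
    (enough to define a degree-1 polynomial)."""
    half = len(evals) // 2
    g0 = sum(evals[:half]) % P     # first variable fixed to 0
    g1 = sum(evals[half:]) % P     # first variable fixed to 1
    return g0, g1

# Example: 3-variable polynomial given by its 8 evaluations on the hypercube.
evals = [3, 1, 4, 1, 5, 9, 2, 6]
total = sum_over_hypercube(evals)
g0, g1 = round_polynomial(evals)
assert (g0 + g1) % P == total      # the verifier's first-round consistency check
print(total, g0, g1)
```

The GPU version keeps this logic but partitions the partial sums across threads, which is why the speedup scales roughly with core count.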
Through this optimization suite, Cysic advances verifiable inference from being “theoretically possible but impractically slow” to “real-time deployable.” This dramatically reduces latency and cost, making Verifiable AI viable for the first time in real-world, latency-sensitive applications. The platform supports PyTorch and TensorFlow — developers can simply wrap their model in a VerifiableModule to receive both inference results and corresponding cryptographic proofs without changing existing code. On its roadmap, Cysic plans to extend support to CNN, Transformer, Llama, and DeepSeek models, release real-time demos for facial recognition and object detection, and open-source code, documentation, and case studies to foster community collaboration.
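The paragraph above describes wrapping an existing model in a VerifiableModule to get both the result and a proof. Since the actual API is not published in this report, the sketch below is a hypothetical usage pattern: the wrapper class, proof object, and signatures are assumptions, not Cysic’s real SDK.

```python
# Hypothetical usage sketch of the "wrap a model, get result + proof" pattern.
import torch
import torch.nn as nn

class VerifiableModule(nn.Module):            # stand-in for the described wrapper
    def __init__(self, model):
        super().__init__()
        self.model = model
    def forward(self, x):
        y = self.model(x)
        proof = b"zkp-placeholder"            # real system: ZK proof of the inference
        return y, proof

model = nn.Sequential(nn.Linear(16, 4), nn.ReLU())
wrapped = VerifiableModule(model)
output, proof = wrapped(torch.randn(1, 16))   # inference result plus an attached proof
print(output.shape, len(proof))
```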
Cysic AI’s three-layer roadmap forms a bottom-up evolution logic: Serverless Inference solves “can it be used”, Agent Marketplace answers “can it be applied”, and Verifiable AI ensures “can it be trusted.” The first two serve as transitional and experimental stages, while the true strategic differentiation lies in Verifiable AI — where Cysic integrates ZK hardware acceleration and decentralized compute networks to establish its long-term competitive edge within the ComputeFi ecosystem.
V. Financialization Perspective: NFT-Based Compute Access and ComputeFi Nodes
Cysic Network introduces the “Digital Compute Cube” Node NFT, which tokenizes high-performance compute assets such as GPUs and ASICs, creating a ComputeFi gateway accessible to mainstream users. Each NFT functions as a verifiable node license, simultaneously representing yield rights, governance rights, and participation rights. Users can delegate or proxy participation in ZK proving, AI inference, and mining tasks — without owning physical hardware — and earn $CYS rewards directly.
The total NFT supply is 29,000 units, with approximately 16.45 million CYS distributed (1.65% of total supply, within the community allocation cap of 9%). Vesting: 50% unlocked at TGE + 50% linearly over six months. Beyond fixed token allocations, holders enjoy Multiplier boosts (up to 1.2×), priority access to compute tasks, and governance weight. Public sales have ended, and the NFTs are now tradable on OKX NFT Marketplace.
Unlike traditional cloud-compute rentals, the Compute Cube model represents on-chain ownership of physical compute infrastructure, combining:
Fixed token yield: Each NFT secures a guaranteed allocation of $CYS.
Real-time compute rewards: Node-connected workloads (ZK proving, AI inference, crypto mining) distribute earnings directly to holders’ wallets.
Governance and priority rights: Holders gain voting power in compute scheduling and protocol upgrades, along with early access privileges.
Positive feedback loop: More workloads → more rewards → greater staking → stronger governance influence.
In essence, Node NFTs convert fragmented GPU/ASIC resources into liquid on-chain assets, opening a new investment market for compute power in the era of surging AI and ZK demand. This ComputeFi flywheel — more tasks → more rewards → stronger governance — serves as a key bridge for expanding Cysic’s compute network to retail participants.
VI. Consumer Use Case: Home ASIC Miners (Dogecoin & Cysic)
Dogecoin, launched in 2013, uses Scrypt PoW and has been merge-mined with Litecoin (AuxPoW) since 2014, sharing hashpower for stronger network security. Its tokenomics feature infinite supply with a fixed annual issuance of 5 billion DOGE, emphasizing community and payment utility. Among all ASIC-based PoW coins, Dogecoin remains the most popular after Bitcoin — its meme culture and loyal community sustain long-term ecosystem stickiness. On the hardware side, Scrypt ASICs have fully replaced GPU/CPU mining, with industrial miners like Bitmain Antminer L7/L9 dominating. However, unlike Bitcoin’s industrial-scale mining, Dogecoin still supports home mining, with devices such as Goldshell MiniDoge, Fluminer L1, and ElphaPex DG Home 1 catering to retail miners, combining cash flow and community engagement.
For Cysic, entering the Dogecoin ASIC sector holds three strategic advantages:
Lower technical threshold: Scrypt ASICs are simpler than ZK ASICs, allowing faster validation of mass production and delivery capabilities.
Mature cash flow: Mining generates immediate and stable revenue streams.
Supply chain & brand building: Dogecoin ASIC production strengthens Cysic’s manufacturing and market expertise, paving the way for future ZK/AI ASICs.
Thus, home ASIC miners represent a pragmatic revenue base and a strategic stepping stone for Cysic’s long-term ZK/AI hardware roadmap.
Cysic Portable Dogecoin Miner: A Home-Scale Innovation
During Token2049, Cysic unveiled the DogeBox 1, a portable Scrypt ASIC miner for home and community users — designed as a verifiable consumer-grade compute terminal:
Portable & energy-efficient: pocket-sized, 55 W power, suitable for households and small setups.
Plug-and-play: managed via mobile app, built for global retail users.
Dual functionality: mines DOGE and verifies DogeOS ZK proofs, achieving L1 + L2 security.
Circular incentive: integrates DOGE mining + CYS rewards, forming a DOGE → CYS → DogeOS economic loop.
This product synergizes with DogeOS (a ZK-based Layer-2 Rollup developed by the MyDoge team, backed by Polychain Capital) and MyDoge Wallet, enabling DogeBox users to mine DOGE and participate in ZK validation — combining DOGE rewards + CYS subsidies to reinforce engagement and integrate directly into the DogeOS ecosystem. The Cysic Dogecoin home miner thus serves as both a practical cashflow device and a strategic bridge to ZK/AI ASIC deployment. By merging mining + ZK verification, Cysic gains hands-on experience in market distribution and hardware scaling — while bringing a scalable, verifiable, community-driven L1 + L2 narrative to the Dogecoin ecosystem.
VII. Ecosystem Expansion and Core Progress
Collaboration with Succinct & Boundless Prover Networks: Cysic operates as a multi-node Prover within Succinct Network, leveraging its GPU clusters to handle SP1 zkVM real-time proofs and co-develop GPU optimization layers. It has also joined the Boundless Mainnet Beta, providing hardware acceleration for its Proof Marketplace.
Early Partnership with Scroll: In early stages, Cysic provided high-performance ZK computation for Scroll, executing large-scale proving tasks on GPU clusters with low latency and cost, generating over 10 million proofs. This validated Cysic’s engineering capability and laid the foundation for its future compute-network development.
Home Miner Debut at Token2049: Cysic’s DogeBox 1 portable ASIC miner officially entered the Dogecoin/Scrypt compute market. Specs: 55 W power, 125 MH/s hashrate, 100 × 100 × 35 mm, Wi-Fi + Bluetooth support, noise < 35 dB — ideal for home or community use. Beyond DOGE/LTC mining, it supports DogeOS ZK verification, achieving dual-layer (L1 + L2) security and forming a DOGE → CYS → DogeOS incentive loop.
Testnet Completion & Mainnet Readiness: On Sept 18, 2025, Cysic completed Phase III: Ignition, marking the end of its testnet and transition toward mainnet launch. The testnet onboarded Succinct, Aleo, Scroll, and Boundless, attracting 55,000+ wallets, 8 million transactions, and 100,000+ reserved high-end GPU devices. It recorded 1.36 million registered users, 13 million transactions, and ~223k Verifiers + 41.8k Provers (260k+ total nodes). A total of 1.46 million $CYS/$CGT was distributed (733k $CYS + 733k $CGT), plus 4.6 million FIRE, and 48,000+ users staked — validating both incentive sustainability and network scalability.
Ecosystem Integration Overview: According to Cysic’s official ecosystem map, the network is now interconnected with leading ZK and AI projects, underscoring its hardware compatibility and openness across the decentralized compute stack. These integrations strengthen Cysic’s position as a foundational compute and hardware acceleration provider, supporting future expansion across ZK, AI, and ComputeFi ecosystems.
Partner Categories:
zkEVM / L2: zkSync, Scroll, Manta, Nil, Kakarot
zkVM / Prover Networks: Succinct, Risc0, Nexus, Axiom
zk Coprocessors: Herodotus, Axiom
Infra / Cross-chain: zkCloud, ZKM, Polyhedra, Brevis
Identity & Privacy: zkPass, Human.tech
Oracles: Chainlink, Blocksense
AI Ecosystem: Talus, Modulus Labs, Gensyn, Aspecta, Inference Labs
VIII. Token Economics Design
Cysic Network adopts a dual-token system: the network token $CYS and the governance token $CGT.
$CYS (Network Token): A native, transferable asset used for paying transaction fees, node staking, block rewards, and network incentives—ensuring network activity and economic security. $CYS is also the primary incentive for compute providers and verifiers. Users can stake $CYS to obtain governance weight and participate in resource allocation and governance decisions of the Computing Pool.
$CGT (Governance Token): A non-transferable asset minted 1:1 by locking $CYS, with a longer unbonding period to participate in Computing Governance (CG). $CGT reflects compute contribution and long-term participation. Compute providers must maintain a reserve of $CGT as an admission bond to deter malicious behavior.
During network operation, compute providers connect their resources to Cysic Network to serve ZK, AI, and crypto-mining workloads. Revenue sources include block rewards, external project incentives, and compute governance distributions. Scheduling and reward allocation are dynamically adjusted by multiple factors, with external project incentives (e.g., ZK, AI, Mining rewards) as a key weight.
IX. Team Background & Fundraising
Co-founder & CEO: Xiong (Leo) Fan. Previously an Assistant Professor of Computer Science at Rutgers University (USA); former researcher at Algorand and Postdoctoral Researcher at the University of Maryland; Ph.D. from Cornell University. Leo’s research focuses on cryptography and its intersections with formal verification and hardware acceleration, with publications at top venues such as IEEE S&P, ACM CCS, POPL, Eurocrypt, and Asiacrypt, spanning homomorphic encryption, lattice cryptography, functional encryption, and protocol verification. He has contributed to multiple academic and industry projects, combining theoretical depth with systems implementation, and has served on program committees of international cryptography conferences.
According to public information on LinkedIn, the Cysic team blends backgrounds in hardware acceleration, cryptographic research, and blockchain applications. Core members have industry experience in chip design and systems optimization and academic training from leading institutions across the US, Europe, and Asia. The team’s strengths are complementary across hardware R&D, ZK optimization, and business operations.
Fundraising: In May 2024, Cysic announced a $12M Pre-A round co-led by HashKey Capital and OKX Ventures, with participation from Polychain, IDG, Matrix Partners, SNZ, ABCDE, Bit Digital, Coinswitch, Web3.com Ventures, as well as notable angels including George Lambeth (early investor in Celestia/Arbitrum/Avax) and Ken Li (Co-founder of Eternis).
X. Competitive Landscape in ZK Hardware Acceleration
1) Direct Competitors (Hardware-Accelerated)
In the hardware-accelerated prover and ComputeFi track, Cysic’s core peers include Ingonyama, Irreducible (formerly Ulvetanna), Fabric Cryptography, and Supranational—all focusing on “hardware + networks that accelerate ZK proving.”
Cysic: Full-stack (GPU + ASIC + network) with a ComputeFi narrative. Strengths lie in the tokenization/financialization of compute; challenges include market education and hardware mass-production.
Irreducible: Strong theory + engineering; exploring new algebraic structures (Binius) and zkASIC. High theoretical innovation; commercialization pace may be constrained by FPGA economics.
Ingonyama: Open-source friendly; ICICLE SDK is a de-facto GPU ZK acceleration standard with high ecosystem adoption, but no in-house hardware.
Fabric: “Hardware–software co-design” path; building a VPU (Verifiable Processing Unit) general crypto-compute chip—business model akin to “CUDA + NVIDIA,” targeting a broader cryptographic compute market.
2) Indirect Competitors (ZK Marketplace / Prover Network / zk Coprocessor)
In ZK Marketplaces, Prover Networks, and zk Coprocessors, Cysic currently acts more as an upstream compute supplier, while Succinct, Boundless, Risc0, and Axiom target the same end customers (L2s, zkRollups, zkML) via zkVMs, task routing, and open markets.
Short term: Cooperation dominates. Succinct routes tasks; Cysic supplies high-performance provers. zk Coprocessors may offload tasks to Cysic.
Long term: If Boundless and Succinct scale their marketplace models (auction vs. routing) while Cysic also builds a marketplace, direct competition at the customer access layer is likely. Similarly, a mature zk Coprocessor loop could disintermediate direct hardware access, risking Cysic’s marginalization as an “upstream contractor.”
XI. Conclusion: Business Logic, Engineering Execution, and Potential Risks
Business Logic
Cysic centers on the ComputeFi narrative—connecting compute from hardware production and network scheduling to financialized assets.
Short term: Leverage GPU clusters to meet current ZK prover demand and generate revenue.
Mid term: Enter a mature cash-flow market with Dogecoin home ASIC miners to validate mass production and tap community-driven retail hardware.
Long term: Develop dedicated ZK/AI ASICs, combined with Node NFTs / Compute Cubes to assetize and marketize compute—building an infrastructure-level moat.
Engineering Execution
Hardware: Completed GPU-accelerated prover/verifier optimizations (MSM/FFT parallelization); disclosed ASIC R&D (1.3M Keccak/s prototype).
Network: Built a Cosmos SDK-based validation chain for prover accounting and task distribution; tokenized compute via Compute Cube / Node NFTs.
AI: Released the Verifiable AI framework; accelerated Sumcheck and finite-field arithmetic via GPU parallelism for trusted inference—though differentiation from peers remains limited.
Potential Risks
Market education & demand uncertainty: ComputeFi is new; it’s unclear whether customers will invest in compute via NFTs/tokens.
Insufficient ZK demand: The prover market is early; current GPU capacity may satisfy most needs, limiting ASIC shipment scale and revenue.
ASIC engineering & mass-production risk: Proving systems aren’t fully standardized; ASIC R&D takes 12–18 months with high tape-out costs and uncertain yields—impacting commercialization timelines.
Home-miner capacity constraints: The household market is limited; electricity costs and community-driven behavior skew toward “enthusiast consumption,” hindering stable scale revenue.
Limited AI differentiation: Despite GPU parallel optimizations, cloud inference services are commoditized and the Agent Marketplace has low barriers—overall defensibility remains modest.
Competitive dynamics: Long-term clashes at the customer access layer with Succinct/Boundless (marketplaces) or mature zk Coprocessors could push Cysic into an upstream “contract manufacturer” role.
Disclaimer: This article was produced with assistance from ChatGPT-5 as an AI tool. The author has endeavored to proofread and ensure the accuracy of all information, yet errors may remain. Note that in crypto markets, a project’s fundamentals often diverge from secondary-market price performance. The content herein is for information aggregation and academic/research exchange only; it does not constitute investment advice nor a recommendation to buy or sell any token.
#ZK #GPU #asic #Cysic #DOGE
GAIB Research Report: The On-Chain Financialization of AI Infrastructure — RWAiFi
Written by 0xjacobzhao | https://linktr.ee/0xjacobzhao
As AI becomes the fastest-growing tech wave, computing power is seen as a new “currency,” with GPUs turning into strategic assets. Yet financing and liquidity remain limited, while crypto finance needs real cash flow–backed assets. RWA tokenization is emerging as the bridge. AI infrastructure, combining high-value hardware + predictable cash flows, is viewed as the best entry point for non-standard RWAs — GPUs offer near-term practicality, while robotics represents the longer frontier. GAIB’s RWAiFi (RWA + AI + DeFi) introduces a new path to on-chain financialization, powering the flywheel of AI Infra (GPU & Robotics) × RWA × DeFi.
I. Outlook for AI Asset RWAization
In discussions around RWA (Real-World Asset) tokenization, the market generally believes that standard assets such as U.S. Treasuries, U.S. equities, and gold will remain at the core in the long term. These assets are highly liquid, have transparent valuations, and follow well-defined compliance pathways — making them the natural carriers of the on-chain “risk-free rate.” By contrast, the RWAization of non-standard assets faces greater uncertainty. Segments such as carbon credits, private credit, supply chain finance, real estate, and infrastructure all represent massive markets. However, they often suffer from opaque valuation, high execution complexity, long cycles, and strong policy dependence. The real challenge lies not in tokenization itself, but in enforcing off-chain asset execution — especially post-default recovery and liquidation, which still depend on due diligence, post-loan management, and traditional legal processes.
Despite these challenges, RWAization remains significant for several reasons:
On-chain transparency — contracts and asset pool data are publicly visible, avoiding the “black box” problem of traditional funds.
Diversified yield structures — beyond interest income, investors can earn additional returns through mechanisms like Pendle PT/YT, token incentives, and secondary market liquidity.
Bankruptcy protection — investors usually hold securitized shares via SPC structures rather than direct claims, providing a degree of insolvency isolation.
Within AI assets, GPU hardware is widely regarded as the first entry point for RWAization due to its clear residual value, high degree of standardization, and strong demand. Beyond hardware, compute lease contracts offer an additional layer — their contractual and predictable cash flow models make them particularly suitable for securitization. Looking further, robotics hardware and service contracts also carry RWA potential. Humanoid and specialized robots, as high-value equipment, could be mapped on-chain via financing lease agreements. However, robotics is far more operationally intensive, making execution significantly harder than GPU-backed assets. In addition, data centers and energy contracts are worth attention. Data centers — including rack leasing, electricity, and bandwidth agreements — represent relatively stable infrastructure cash flows. Energy contracts, exemplified by green energy PPAs, provide not only long-term revenue but also ESG attributes, aligning well with institutional investor mandates.
Overall, AI asset RWAization can be understood across different horizons:
Short term: centered on GPU and related compute lease contracts.
Mid term: expansion to data center and energy agreements.
Long term: breakthrough opportunities in robotics hardware and service contracts.
The common logic across all layers is high-value hardware + predictable cash flow, though the execution pathways vary.
II. The Priority Value of GPU Asset RWAization
Among the many non-standard AI assets, GPUs may represent one of the most practical directions for exploration:
Standardization & Clear Residual Value: Mainstream GPU models have transparent market pricing and well-defined residual value.
Active Secondary Market: Strong resale liquidity ensures partial recovery in case of default.
Real Productivity Attributes: GPU demand is directly tied to AI industry growth, providing real cash flow generation capacity.
High Narrative Fit: Positioned at the intersection of AI and DeFi — two of the hottest narratives — GPUs naturally attract investor attention.
As AI compute data centers remain a highly nascent industry, traditional banks often struggle to understand their operating models and are therefore unable to provide loan support. Only large enterprises such as CoreWeave and Crusoe can secure financing from major private credit institutions like Apollo, while small and mid-sized operators are largely excluded — highlighting the urgent need for financing channels that serve the mid-to-small enterprise segment.
It should be noted, however, that GPU RWAization does not eliminate credit risk. Enterprises with strong credit profiles can typically obtain cheaper financing from banks, and may have little need for on-chain financing. Tokenized financing often appeals more to small and medium-sized enterprises, which inherently face higher default risk. This creates a structural paradox in RWA: high-quality borrowers do not need tokenization, while higher-risk borrowers are more inclined to adopt it. Nevertheless, compared to traditional equipment leasing, GPUs’ high demand, recoverability, and clear residual value make their risk-return profile more attractive. The significance of RWAization lies not in eliminating risk, but in making risk more transparent, priceable, and tradable. As the flagship of non-standard asset RWAs, GPUs embody both industrial value and experimental potential — though their success ultimately depends on off-chain due diligence and enforcement, rather than purely on-chain design.
III. Frontier Exploration of Robotics Asset RWAization
Beyond computing hardware, the robotics industry is also entering the RWAization landscape, with the market projected to exceed $185 billion by 2030, signaling immense potential. The rise of Industry 4.0 is ushering in an era of intelligent automation and human–machine collaboration. In the coming years, robots will become ubiquitous — across factories, logistics, retail, and even homes. Structured, on-chain financing can enable the adoption and deployment of intelligent robots while creating an investable product that allows users to participate in this global shift. Feasible pathways include:
Robotics Hardware Financing: Provides capital for production and deployment. Returns come from leasing, direct sales, or Robot-as-a-Service (RaaS) models. Cash flows can be mapped on-chain through SPC structures with insurance coverage, reducing default and disposal risks.
Data Stream Financialization: Embodied AI requires large-scale real-world data. Financing can support sensor deployment and distributed data collection networks. Data usage rights or licensing revenues can be tokenized, giving investors exposure to the future value of data.
Production & Supply Chain Financing: Robotics involves long value chains, including components, manufacturing capacity, and logistics. Trade finance can unlock working capital, with future shipments and cash flows mapped on-chain.
Compared with GPU assets, robotics assets are far more dependent on operations and real-world deployment. Cash flows are more vulnerable to fluctuations in utilization, maintenance costs, and regulatory constraints. Therefore, it is recommended to adopt a shorter-term structure with higher overcollateralization and reserve ratios to ensure stable returns and liquidity safety. IV. GAIB Protocol: An Economic Layer Linking Off-Chain AI Assets and On-Chain DeFi The RWAization of AI assets is moving from concept to implementation. GPUs have emerged as the most practical on-chain asset class, while robotics financing represents a longer-term growth frontier. To give these assets true financial attributes, it is essential to build an economic layer that can bridge off-chain financing, generate yield-bearing instruments, and connect seamlessly with DeFi liquidity. GAIB was born in this context. Rather than directly tokenizing AI hardware, it brings on-chain the financing contracts collateralized by enterprise-grade GPUs or robots, thereby building an economic bridge between off-chain cash flows and on-chain capital markets. Off-chain, enterprise-grade GPU clusters or robotic assets purchased and used by cloud service providers and data centers serve as collateral; On-chain, AID is used for price stability and liquidity management (non-yield-bearing, fully backed by T-Bills), while sAID provides yield exposure and automatic compounding (underpinned by a financing portfolio plus T-Bills).
GAIB’s Off-Chain Financing Model
GAIB partners with global cloud providers and data centers, using GPU clusters as collateral to design three types of financing agreements:
Debt Model: Fixed interest payments (annualized ~10–20%).
Equity Model: Revenue-sharing from GPU & Robotics income (annualized ~60–80%+).
Hybrid Model: Combination of fixed interest and revenue-sharing.
Risk management relies on over-collateralization of physical GPUs and bankruptcy-isolated legal structures, ensuring that in case of default, assets can be liquidated or reassigned to partnered data centers to continue generating cash flow. With enterprise-grade GPUs featuring short payback cycles, financing tenors are significantly shorter than traditional debt products, typically 3–36 months. To enhance security, GAIB works with third-party underwriters, auditors, and custodians to enforce strict due diligence and post-loan management. In addition, Treasury reserves serve as supplementary liquidity protection.
On-Chain Mechanisms
Minting & Redemption: Qualified users (Whitelist + KYC) can mint AID with stablecoins or redeem AID back into stablecoins via smart contracts. In addition, non-KYC users can also obtain it through secondary market trading.
Staking & Yield: Users can stake AID to obtain sAID, which automatically accrues yield and appreciates over time.
Liquidity Pools: GAIB will deploy AID liquidity pools on mainstream AMMs, enabling users to swap between AID and stablecoins.
DeFi Use Cases
Lending: AID can be integrated into lending protocols to improve capital efficiency.
Yield Trading: sAID can be split into PT/YT (Principal/Yield Tokens), supporting diverse risk-return strategies.
Derivatives: AID and sAID can serve as yield-bearing primitives for derivatives such as options and futures.
Custom Strategies: Vaults and yield optimizers can incorporate AID/sAID, allowing for personalized portfolio allocation.
In essence, GAIB’s core logic is to convert off-chain real cash flows — backed by GPUs, Robotic Assets, and Treasuries — into on-chain composable assets. Through the design of AID/sAID and integration with DeFi protocols, GAIB enables the creation of markets for yield, liquidity, and derivatives. This dual foundation of real-world collateral + on-chain financial innovation builds a scalable bridge between the AI economy and crypto finance.
V. Off-Chain: GPU Asset Tokenization Standards and Risk Management Mechanisms
GAIB uses an SPC (Segregated Portfolio Company) structure to convert off-chain GPU financing into on-chain yield certificates. Investors deposit stablecoins to mint AI Synthetic Dollars (AID), which can be staked for sAID to earn returns from GAIB’s GPU and robotics financing. As repayments flow into the protocol, sAID appreciates in value, and holders can burn it to redeem principal and yield — creating a one-to-one link between on-chain assets and real cash flows.
Tokenization Standards and Operational Workflow
GAIB requires assets to be backed by robust collateral and guarantee mechanisms. Financing agreements must include monthly monitoring, delinquency thresholds, over-collateralization compliance, and require underwriters to have at least 2+ years of lending experience with full data disclosure.
Process flow: Investor deposits stablecoins → Smart contract mints AID (non-yield-bearing, backed by T-Bills) → Holder stakes and receives sAID (yield-bearing) → the staked funds are used for GPU/robotics financing agreements → SPC repayments flow back into GAIB → the value of sAID appreciates over time → investors burn sAID to redeem their principal and yield.
Risk Management Mechanisms
Over-Collateralization — Financing pools maintain ~30% over-collateralization.
Cash Reserves — ~5–7% of funds are allocated to independent reserve accounts for interest payments and default buffering.
Credit Insurance — Cooperation with regulated insurers to partially transfer GPU provider default risk.
Default Handling — In case of default, GAIB and underwriters may liquidate GPUs, transfer them to alternative operators, or place them under custodial management to continue generating cash flows. SPC’s bankruptcy isolation ensures each asset pool remains independent and unaffected by others.
In addition, the GAIB Credit Committee is responsible for setting tokenization standards, credit evaluation frameworks, and underwriter admission criteria. Using a structured risk analysis framework — covering borrower fundamentals, external environment, transaction structure, and recovery rates — it enforces due diligence and post-loan monitoring to ensure security, transparency, and sustainability of transactions.
Structured Risk Evaluation Framework (Illustrative reference only)
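To make the headline risk parameters concrete, consider a purely illustrative calculation for a hypothetical $10M financing agreement (the deal size is assumed; only the ~30% over-collateralization and 5–7% reserve ratios come from the mechanisms described above):

$$
\underbrace{\$10\text{M} \times 1.3}_{\text{GPU collateral at }\sim 30\%\ \text{over-collateralization}} = \$13\text{M},
\qquad
\underbrace{\$10\text{M} \times 5\text{–}7\%}_{\text{cash reserve}} = \$0.5\text{–}0.7\text{M}.
$$

In principle, and ignoring liquidation costs and GPU price slippage, a default would need to erode both the excess collateral and the reserve buffer before staker principal is impaired, which is why the speed of liquidating or redeploying the GPUs matters at least as much as the nominal ratios.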
VI. On-Chain: AID Synthetic Dollar, sAID Yield Mechanism, and the Early Deposit Program
GAIB Dual-Token Model: AID Synthetic Stablecoin and sAID Yield-Bearing Certificate
GAIB introduces AID (AI Synthetic Dollar) — a synthetic asset backed by U.S. Treasury reserves. Its supply is dynamically linked to protocol capital:
AID is minted when funds flow into the protocol.
AID is burned when profits are distributed or redeemed.
This ensures that AID’s scale always reflects the underlying asset value. AID itself only serves as a stable unit of account and medium of exchange, without directly generating yield. To capture yield, users stake AID to receive sAID. As a yield-bearing, transferable certificate, sAID appreciates over time in line with protocol revenues (GPU/robotics financing repayments, U.S. Treasury interest, etc.). Returns are reflected through the exchange ratio between sAID and AID. Holders automatically accumulate yield without any additional actions. At redemption, users can withdraw their initial principal and accrued rewards after a short cooldown period.
AID provides stability and composability, making it suitable for trading, lending, and liquidity provision.
sAID carries the yield property, both appreciating in value directly and supporting further composability in DeFi (e.g., splitting into PT/YT for risk-return customization).
In summary, AID + sAID form GAIB’s dual-token economic layer: AID ensures stable circulation and sAID captures real yield tied to AI infrastructure. This design preserves the usability of a synthetic asset while giving users a yield gateway linked to the AI infrastructure economy.
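A minimal sketch of the AID → sAID mechanics described above, using share-price (exchange-rate) accounting. Class names, method names, and the 15% figure are illustrative assumptions, not GAIB’s actual contract interfaces:

```python
# Illustrative AID/sAID accounting sketch (assumed model, not GAIB's actual contracts).
# sAID is modeled as shares of a vault whose exchange rate rises as financing
# repayments and T-bill interest flow back into the protocol.

class SAIDVault:
    def __init__(self):
        self.total_assets = 0.0   # AID value backing the vault
        self.total_shares = 0.0   # outstanding sAID

    def exchange_rate(self) -> float:
        # AID per sAID; starts at 1.0 and only grows as yield accrues
        return 1.0 if self.total_shares == 0 else self.total_assets / self.total_shares

    def stake(self, aid_amount: float) -> float:
        """Stake AID, receive sAID at the current exchange rate."""
        shares = aid_amount / self.exchange_rate()
        self.total_assets += aid_amount
        self.total_shares += shares
        return shares

    def accrue_yield(self, repayment: float) -> None:
        """Repayments / T-bill interest increase assets but not shares,
        so every sAID holder's claim appreciates proportionally."""
        self.total_assets += repayment

    def redeem(self, shares: float) -> float:
        """Burn sAID and withdraw principal plus accrued yield in AID."""
        aid_out = shares * self.exchange_rate()
        self.total_assets -= aid_out
        self.total_shares -= shares
        return aid_out

vault = SAIDVault()
shares = vault.stake(1_000.0)             # stake 1,000 AID
vault.accrue_yield(1_000.0 * 0.15 / 12)   # one month at ~15% APY (assumed figure)
print(round(vault.redeem(shares), 2))     # ≈ 1012.5 AID back after one month
```

This is the same share-based accounting pattern used by ERC-4626-style vaults, which is consistent with the AIDα receipt tokens in the AID Alpha program below tracking yield automatically rather than rebasing balances.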
GAIB AID / sAID vs. Ethena USDe / sUSDe vs. Lido stETH
The relationship between AID and sAID is comparable to Ethena’s USDe / sUSDe and Lido’s ETH / stETH:
The base asset (USDe, AID, ETH) itself is non-yield-bearing.
Only after conversion to the yield-bearing version (sUSDe, sAID, stETH) does it automatically accrue yield.
The key difference lies in the yield source: sAID derives yield from GPU financing agreements plus US Treasuries; sUSDe yield comes from derivatives hedging/arbitrage; and stETH yield comes from ETH staking.
AID Alpha: GAIB’s Liquidity Bootstrapping and Incentive Program (Pre-Mainnet) Launched on May 12, 2025, AID Alpha serves as GAIB’s early deposit program ahead of the AID mainnet, designed to bootstrap liquidity while rewarding early participants through extra incentives and gamified mechanics. Initial deposits are allocated to U.S. Treasuries for safety, then gradually shifted into GPU financing transactions, creating a transition from low-risk → high-yield. On the technical side, AID Alpha contracts follow the ERC-4626 standard, issuing AIDα receipt tokens (e.g., AIDaUSDC, AIDaUSDT) to represent deposits and ensure cross-chain composability. During the Final Spice stage, GAIB expanded deposit options to multiple stablecoins (USDC, USDT, USR, CUSDO, USD1). Each deposit generates a corresponding AIDα token, which serves as proof of deposit, automatically tracks yield and counts toward the Spice points system, which enhances rewards and governance allocation. Current AIDα Pools (TVL capped at $80M):
All AIDα deposits have a lock-up period of up to two months. After the campaign ends, users can choose to either convert their AIDα into mainnet AID and stake it as sAID to earn ongoing yields, or redeem their original assets while retaining the accumulated Spice points.
Spice is GAIB’s incentive point system launched during the AID Alpha phase, designed to measure early participation and allocate future governance rights. The rule is “1 USD = 1 Spice per day”, with additional multipliers from various channels (e.g., 10× for deposits, 20× for Pendle YT, 30× for Resolv USR), up to a maximum of 30×, creating a dual incentive model of “yield + points.” In addition, a referral mechanism further amplifies rewards (Level 1: 20%, Level 2: 10%). After the Final Spice event concludes, all points will be locked and used for governance and reward distribution upon mainnet launch.
Fremen Essence NFT: GAIB also issued 3,000 limited Fremen Essence NFTs as early supporter badges:
Top 200 depositors automatically qualify.
Remaining NFTs are distributed via whitelist with a minimum $1,500 deposit requirement. Minting is free (gas only).
NFT holders gain exclusive mainnet rewards, priority product testing rights, and core community status.
Currently, the NFTs are trading at around 0.1 ETH on secondary markets, with a total trading volume of 98 ETH.
VII. GAIB Transparency: On-Chain Funds and Off-Chain Assets
GAIB maintains a high standard of transparency across both assets and protocols. On-chain, users can track asset categories (USDC, USDT, USR, CUSDO, USD1), cross-chain distribution (Ethereum, Sei, Arbitrum, Base, etc.), TVL trends, and detailed breakdowns in real time via the official website, DefiLlama, and Dune dashboards. Off-chain, the official site discloses portfolio allocation ratios, active deal amounts, expected returns, and selected pipeline projects.
GAIB Official Transparency Portal: https://aid.gaib.ai/transparency
DefiLlama: https://defillama.com/protocol/tvl/gaib
Dune: https://dune.com/gaibofficial
Asset Allocation Snapshot
As of October 7, 2025, GAIB manages a total of $175.29 million in assets. This “dual-layer allocation” balances stability with excess returns from AI infrastructure financing.
Reserves account for 71% ($124.9M), mainly U.S. Treasuries, at around 4% APY.
Deployed assets account for 29% ($50.4M), allocated to off-chain GPU and robotics financing with an average 15% APY.
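As a sanity check on these figures, the blended portfolio yield implied by the dual-layer allocation is roughly:

$$
0.71 \times 4\% + 0.29 \times 15\% \approx 2.8\% + 4.4\% \approx 7.2\% \ \text{blended APY (illustrative, before fees)}.
$$

As reserves are rotated into deployed financing deals, the blended figure drifts toward the ~15% deployed-asset yield, which is the “low-risk → high-yield” transition described in the AID Alpha program.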
On-chain fund distribution: According to the latest Dune Analytics data, Ethereum holds 83.2% of TVL, Sei 13.0%, while Base and Arbitrum together make up less than 4%. By asset type, deposits are dominated by USDC (52.4%) and USDT (47.4%), with smaller allocations to USD1 (~2%), USR (0.1%), and CUSDO (0.09%).
Off-chain asset deployment: GAIB’s active deals are aligned with its capital allocation, including:
Siam.AI (Thailand): $30M, 15% APY
Two Robotics Financing deals: $15M combined, 15% APY
US Neocloud Provider: $5.4M, 30% APY
In addition, GAIB has established approximately $725M in project pipeline reserves, with a broader total pipeline outlook of over $2.5B within 1–2 years:
GMI Cloud and Nvidia Cloud Partners across Asia ($200M and $300M), Europe ($60M), and the UAE ($80M).
North America Neocloud Providers ($15M and $30M).
Robotics asset providers ($20M).
This pipeline lays a solid foundation for future expansion and scaling.
VIII. Ecosystem: Compute, Robotics, and DeFi
GAIB’s ecosystem consists of three pillars — GPU computing resources, robotics innovation enterprises, and DeFi protocol integrations — designed to form a closed-loop cycle of: Real Compute Assets → Financialization → DeFi Optimization.
GPU Compute Ecosystem: On-Chain Tokenization of Compute Assets
Within the on-chain financing ecosystem for AI infrastructure, GAIB partners with a diverse set of compute providers, spanning both sovereign/enterprise-level clouds (GMI, Siam.AI) and decentralized networks (Aethir, PaleBlueDot.AI). This ensures both operational stability and an expanded RWA narrative.
GMI Cloud: One of NVIDIA’s six Global Reference Platform Partners, operating seven data centers across five countries, with ~$95M already financed. Known for low-latency, AI-native environments. With GAIB’s financing model, GMI’s GPU expansion gains enhanced cross-regional flexibility.
Siam.AI: Thailand’s first sovereign-level NVIDIA Cloud Partner. Achieves up to 35x performance improvement and 80% cost reduction in AI/ML and rendering workloads. Completed a $30M GPU tokenization deal with GAIB, marking GAIB’s first GPU RWA case and securing first-mover advantage in Southeast Asia.
Aethir: A leading decentralized GPUaaS network with 40,000+ GPUs (incl. 3,000+ H100s). In early 2025, GAIB and Aethir jointly conducted the first GPU tokenization pilot on BNB Chain — raising $100K in 10 minutes. Future integrations aim to connect AID/sAID with Aethir staking, creating dual-yield opportunities.
PaleBlueDot.AI: An emerging decentralized GPU cloud provider, adding further strength to GAIB’s DePIN narrative.
Robotics Ecosystem: On-Chain Financing of Embodied Intelligence
GAIB has formally entered the Embodied AI (robotics) sector, extending the GPU tokenization model into robotics. The aim is to create a dual-engine ecosystem of Compute + Robotics, using SPV collateral structures and cash flow distribution. By packaging robotics and GPU returns into AID/sAID, GAIB enables the financialization of both hardware and operations. To date, GAIB has allocated $15M to robotics financing deals aimed at generating ~15% APY, together with partners including OpenMind, PrismaX, CAMP, Kite, and SiamAI Robotics, spanning hardware, data streams, and supply chain innovations.
PrismaX: Branded as “Robots as Miners”, PrismaX connects operators, robots, and data buyers through a teleoperation platform. It produces high-value motion and vision data priced at $30–50/hour, and has validated early commercialization with a $99-per-session paid model. GAIB provides financing to scale robot fleets, while data sales revenues are funneled back to investors via AID/sAID — creating a data-centric financialization pathway.
OpenMind: With its FABRIC network and OM1 operating system, OpenMind offers identity verification, trusted data sharing, and multimodal integration — effectively acting as the “TCP/IP” of robotics. GAIB tokenizes task and data contracts to provide capital support. Together, the two achieve a complementary model of technical trustworthiness + financial assetization, enabling robotics assets to move from lab experiments to scalable, financeable, and verifiable growth.
Overall, through PrismaX’s data networks, OpenMind’s control systems, and CAMP’s infrastructure deployment, GAIB is building a full-stack ecosystem covering robotics hardware, operations, and data value chains — accelerating both the industrialization and financialization of embodied intelligence.
DeFi Ecosystem: Protocol Integrations and Yield Optimization
During the AID Alpha stage, GAIB deeply integrated AID/AIDα assets into a broad range of DeFi protocols.
By leveraging yield splitting, liquidity mining, collateralized lending, and yield boosting, GAIB created a cross-chain, multi-layered yield optimization system, unified under the Spice points incentive framework.
Pendle: Users split AIDaUSDC/USDT into PT (Principal Tokens) and YT (Yield Tokens). PTs deliver ~15% fixed yield; YTs capture future yield and carry a 30x Spice bonus. LP providers also earn 20x Spice.
Equilibria & Penpie: Pendle yield enhancers. Equilibria adds ~5% extra yield, while Penpie boosts up to 88% APR. Both carry 20x Spice multipliers.
Morpho: Enables PT-AIDa to be used as collateral for borrowing USDC, giving users liquidity while retaining positions, and extending GAIB into Ethereum’s major lending markets.
Curve: AIDaUSDC/USDC liquidity pool provides trading fee income plus a 20x Spice boost, ideal for conservative strategies.
CIAN & Takara (Sei chain): Users collateralize enzoBTC with Takara to borrow stablecoins, which CIAN auto-deploys into GAIB strategies. This combines BTCfi with AI yield, with a 5x Spice multiplier.
Wand (Story Protocol): On Story chain, Wand provides a Pendle-like PT/YT split for AIDa assets, with YTs earning 20x Spice, further enhancing cross-chain composability of AI yield.
In summary, GAIB’s DeFi integration strategy spans Ethereum, Arbitrum, Base, Sei, Story Protocol, BNB Chain, and Plume Network. Through Pendle and its ecosystem enhancers (Equilibria, Penpie), lending markets (Morpho), stablecoin DEXs (Curve), BTCfi vaults (CIAN + Takara), and native AI-narrative protocols (Wand), GAIB delivers comprehensive yield opportunities — from fixed income to leveraged yield, and from cross-chain liquidity to AI-native strategies.
IX. Team Background and Project Financing
The GAIB team unites experts from AI, cloud computing, and DeFi, with backgrounds spanning L2IV, Huobi, Goldman Sachs, Ava Labs, and Binance Labs. Core members hail from top institutions such as Cornell, UPenn, NTU, and UCLA, bringing deep experience in finance, engineering, and blockchain infrastructure. Together, they form a strong foundation for bridging real-world AI assets with on-chain financial innovation.
Kony Kwong — Co-Founder & CEO Kony brings cross-disciplinary expertise in traditional finance and crypto venture capital. He previously worked as an investor at L2 Iterative Ventures and managed funds and M&A at Huobi. Earlier in his career, he held roles at CMB International, Goldman Sachs, and CITIC Securities. He holds a First-Class Honors degree in International Business & Finance from the University of Hong Kong and a Master’s in Computer Science from the University of Pennsylvania. Observing the lack of financialization (“-fi”) in AI infrastructure, Kony co-founded GAIB to transform real compute assets such as GPUs and robotics into investable on-chain products. Jun Liu — Co-Founder & CTO Jun has a dual background in academic research and industry practice, focusing on blockchain security, crypto-economics, and DeFi infrastructure. He previously served as VP at Sora Ventures, Technical Manager at Ava Labs (supporting BD and smart contract auditing), and led technical due diligence for Blizzard Fund. He holds dual bachelor’s degrees in Computer Science and Electrical Engineering from National Taiwan University and pursued a PhD in Computer Science at Cornell University, contributing to IC3 blockchain research. His expertise lies in building secure and scalable decentralized financial architectures. Alex Yeh — Co-Founder & Advisor Alex is also the founder and CEO of GMI Cloud, one of the world’s leading AI-native cloud service providers and one of NVIDIA’s six Reference Platform Partners. Alex has a background in semiconductors and AI cloud, manages the Realtek family office, and previously held positions at CDIB and IVC. At GAIB, Alex spearheads industry partnerships, bringing GMI’s GPU infrastructure and client networks into the protocol to drive the financialization of AI infra assets. Financing
In December 2024, GAIB closed a $5M Pre-Seed round led by Hack VC, Faction, and Hashed, with participation from The Spartan Group, L2IV, CMCC Global, Animoca Brands, IVC, MH Ventures, Presto Labs, J17, IDG Blockchain, 280 Capital, Aethir, NEAR Foundation, and other notable institutions, along with several industry and crypto angel investors.
In July 2025, GAIB raised an additional $10M in strategic investment, led by Amber Group with participation from multiple Asian investors. The funds will be used to accelerate GPU asset tokenization, expand infrastructure and financial products, and deepen strategic collaborations across the AI and crypto ecosystems, strengthening institutional participation in on-chain AI infrastructure.
X. Conclusion: Business Logic and Potential Risks
Business Logic
GAIB’s core positioning is RWAiFi — transforming AI infrastructure assets (GPUs, robotics, etc.) into composable financial products through tokenization. The business logic is built on three layers:
Asset Layer: GPUs and robotics have the combined characteristics of high-value hardware + predictable cash flows, aligning with RWA requirements. GPUs, with standardization, clear residual value, and strong demand, are the most practical entry point. Robotics represent a longer-term direction, with monetization via teleoperation, data collection, and RaaS models.
Capital Layer: Through a dual-token structure of AID (for stable settlement, non-yield-bearing, backed by T-Bills) and sAID (a yield-bearing fund token underpinned by a financing portfolio plus T-Bills), GAIB separates stable circulation from yield capture. It further unlocks yield and liquidity through DeFi integrations such as PT/YT (Principal/Yield Tokens), lending, and LP liquidity.
Ecosystem Layer: Partnerships with GMI and Siam.AI (sovereign-level GPU clouds), Aethir (decentralized GPU networks), and PrismaX and OpenMind (robotics innovators) build a cross-industry network spanning hardware, data, and services, advancing the Compute + Robotics dual-engine model.
Core Mechanisms
Financing Models: Debt (10–20% APY), revenue share (60–80%+), or hybrid, with short tenors (3–36 months) and rapid payback cycles.
Credit & Risk Management: Over-collateralization (~30%), cash reserves (5–7%), credit insurance, and default handling (GPU liquidation/custodial operations), alongside third-party underwriting and due diligence, supported by internal credit rating systems.
On-Chain Mechanisms: AID minting/redemption and sAID yield accrual, integrated with Pendle, Morpho, Curve, CIAN, Wand, and other protocols for cross-chain, multi-dimensional yield optimization.
Transparency: Real-time asset and cash flow tracking provided via the official site, DefiLlama, and Dune ensures clear correspondence between off-chain financing and on-chain assets.
Potential Risks
Despite GAIB’s transparent design (AID, sAID, AID Alpha, GPU Tokenization, etc.), underlying risks remain, and investors must carefully assess their own risk tolerance:
Market & Liquidity Risks: GPU financing returns and digital asset prices are subject to volatility, with no guaranteed returns. Lockups may create liquidity challenges or discounted exits under adverse market conditions.
Credit & Execution Risks: Financing often involves SMEs, which face higher default risk. Recovery depends heavily on off-chain enforcement — weak execution may directly affect investor repayments.
Technical & Security Risks: Smart contract vulnerabilities, hacking, oracle manipulation, or key loss could cause asset losses. Deep integration with external DeFi protocols (e.g., Pendle, Curve) boosts TVL growth but also introduces external security and liquidity risks.
Asset-Specific & Operational Risks: GPUs benefit from standardization and residual markets, but robotics assets are non-standard, highly operationally dependent, and vulnerable to regulatory differences across jurisdictions.
Compliance & Regulatory Risks: The computing power assets invested in by GAIB belong to a new market and asset class that does not fall under the scope of traditional financial licensing. This could lead to regional regulatory challenges, including potential restrictions on business operations, asset issuance, and usage.
Disclaimer
This report was produced with the assistance of ChatGPT-5 AI tools. The author has carefully proofread and ensured accuracy, but errors or omissions may remain. Importantly, crypto assets often exhibit divergence between project fundamentals and secondary market token performance. This content is provided for informational and academic/research purposes only, and does not constitute investment advice or a recommendation to buy or sell any token.
#GPU #Robotics #Defi #AI #GAIB
GAIB Research Report: The Path to On-Chain Financialization of AI Infrastructure - RWAiFi
Author: 0xjacobzhao | https://linktr.ee/0xjacobzhao With AI regarded as the fastest-growing technology wave in the world, compute is increasingly seen as a new "currency," and high-performance hardware such as GPUs is becoming a strategic asset. Yet for a long time, financing and liquidity for this class of assets have been constrained. At the same time, crypto finance urgently needs access to high-quality assets with real cash flows, and RWA tokenization is increasingly becoming the key bridge between traditional finance and the crypto market. AI infrastructure assets, given their characteristics of "high-value hardware + predictable cash flows," are widely viewed as the best breakthrough point for non-standard RWA assets, with GPUs offering the most realistic path to implementation while robotics represents a longer-term direction of exploration. In this context, GAIB's proposed RWAiFi path (RWA + AI + DeFi) offers a novel solution for the "on-chain financialization of AI infrastructure," driving a flywheel effect of "AI infrastructure (compute and robotics) x RWA x DeFi."
From Federated Learning to Decentralized Agent Networks: An Analysis of ChainOpera
Written by 0xjacobzhao | https://linktr.ee/0xjacobzhao In our June report “The Holy Grail of Crypto AI: Frontier Exploration of Decentralized Training”, we discussed Federated Learning—a “controlled decentralization” paradigm positioned between distributed training and fully decentralized training. Its core principle is keeping data local while aggregating parameters centrally, a design particularly suited for privacy-sensitive and compliance-heavy industries such as healthcare and finance. At the same time, our past research has consistently highlighted the rise of Agent Networks. Their value lies in enabling complex tasks to be completed through autonomous cooperation and division of labor across multiple agents, accelerating the shift from “large monolithic models” toward “multi-agent ecosystems.” Federated Learning, with its foundations of local data retention, contribution-based incentives, distributed design, transparent rewards, privacy protection, and regulatory compliance, has laid important groundwork for multi-party collaboration. These same principles can be directly adapted to the development of Agent Networks. The FedML team has been following this trajectory: evolving from open-source roots to TensorOpera (an AI infrastructure layer for the industry), and further advancing to ChainOpera (a decentralized Agent Network). That said, Agent Networks are not simply an inevitable extension of Federated Learning. Their essence lies in autonomous collaboration and task specialization among agents, and they can also be built directly on top of Multi-Agent Systems (MAS), Reinforcement Learning (RL), or blockchain-based incentive mechanisms. I. Federated Learning and the AI Agent Technology Stack Federated Learning (FL) is a framework for collaborative training without centralizing data. Its core principle is that each participant trains a model locally and uploads only parameters or gradients to a coordinating server for aggregation, thereby ensuring “data stays within its domain” and meeting privacy and compliance requirements. Having been tested in sectors such as healthcare, finance, and mobile applications, FL has entered a relatively mature stage of commercialization. However, it still faces challenges such as high communication overhead, incomplete privacy guarantees, and efficiency bottlenecks caused by heterogeneous devices. Compared with other training paradigms: Distributed training emphasizes centralized compute clusters to maximize efficiency and scale.Decentralized training achieves fully distributed collaboration via open compute networks.Federated learning lies in between, functioning as a form of “controlled decentralization”: it satisfies industrial requirements for privacy and compliance while enabling cross-institution collaboration, making it more suitable as a transitional deployment architecture.
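The aggregation step at the heart of this paradigm can be shown with a minimal FedAvg-style sketch: each participant fits its model locally and only parameters leave the device, which the coordinator averages weighted by local sample counts. This is a generic textbook illustration in Python, not FedML’s actual API:

```python
# Minimal FedAvg-style aggregation sketch (generic illustration, not the FedML API).
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Client-side step: a few epochs of linear-regression SGD on local data.
    Only the resulting parameters are returned; raw data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side step: sample-size-weighted average of client parameters."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
client_X = [rng.normal(size=(50, 3)), rng.normal(size=(80, 3))]   # two clients, local data
clients = [(X, X @ true_w + 0.1 * rng.normal(size=len(X))) for X in client_X]

global_w = np.zeros(3)
for _ in range(10):                       # federated rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

print(np.round(global_w, 2))              # ≈ [ 1.  -2.   0.5]
```

The communication and heterogeneity bottlenecks noted above show up directly in this loop: every round ships full parameter vectors, and slow or unreliable clients stall the synchronous average.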
AI Agent Protocol Stack
In our previous research, we categorized the AI Agent protocol stack into three major layers:
1. Infrastructure Layer (Agent Infrastructure Layer)
The foundational runtime support for agents, serving as the technical base of all Agent systems.
Core Modules:
Agent Framework – development and runtime environment for agents.
Agent OS – deeper-level multitask scheduling and modular runtime, providing lifecycle management for agents.
Supporting Modules:
Agent DID (decentralized identity)
Agent Wallet & Abstraction (account abstraction & transaction execution)
Agent Payment/Settlement (payment and settlement capabilities)
2. Coordination & Execution Layer
Focuses on agent collaboration, task scheduling, and incentive systems—key to building collective intelligence among agents.
Agent Orchestration: Centralized orchestration and lifecycle management, task allocation, and workflow execution—suited for controlled environments.
Agent Swarm: Distributed collaboration structure emphasizing autonomy, division of labor, and resilient coordination—suited for complex, dynamic environments.
Agent Incentive Layer: Economic layer of the agent network that incentivizes developers, executors, and validators, ensuring sustainable ecosystem growth.
3. Application & Distribution Layer
Covers distribution channels, end-user applications, and consumer-facing products.
Distribution Sub-layer: Agent Launchpads, Agent Marketplaces, Agent Plugin Networks
Application Sub-layer: AgentFi, Agent-native DApps, Agent-as-a-Service
Consumer Sub-layer: Social/consumer agents, focused on lightweight end-user scenarios
Meme Sub-layer: Hype-driven “Agent” projects with little actual technology or application—primarily marketing-driven.
II. Federated Learning Benchmark: FedML and the TensorOpera Full-Stack Platform
FedML is one of the earliest open-source frameworks for Federated Learning (FL) and distributed training. Originating from an academic team at USC, it gradually evolved into the core product of TensorOpera AI through commercialization. For researchers and developers, FedML provides cross-institution and cross-device tools for collaborative data training. In academia, FedML has become a widely adopted experimental platform for FL research, frequently appearing at top conferences such as NeurIPS, ICML, and AAAI. In industry, it has earned a strong reputation in privacy-sensitive fields such as healthcare, finance, edge AI, and Web3 AI—positioning itself as the benchmark toolchain for federated learning.
TensorOpera represents the commercialized evolution of FedML, upgraded into a full-stack AI infrastructure platform for enterprises and developers. While retaining its federated learning capabilities, it extends into GPU marketplaces, model services, and MLOps, thereby expanding into the broader market of the LLM and Agent era. Its overall architecture is structured into three layers: Compute Layer (foundation), Scheduler Layer (coordination), and MLOps Layer (application).
Compute Layer (Foundation)
The Compute layer forms the technical backbone of TensorOpera, continuing the open-source DNA of FedML.
Core Functions: Parameter Server, Distributed Training, Inference Endpoint, and Aggregation Server.
Value Proposition: Provides distributed training, privacy-preserving federated learning, and a scalable inference engine. Together, these support the three core capabilities of Train / Deploy / Federate, covering the full pipeline from model training to deployment and cross-institution collaboration.
Scheduler Layer (Coordination)
The Scheduler layer acts as the compute marketplace and scheduling hub, composed of GPU Marketplace, Provision, Master Agent, and Schedule & Orchestrate modules.
Capabilities: Enables resource allocation across public clouds, GPU providers, and independent contributors.
Significance: This marks the pivotal step from FedML to TensorOpera—supporting large-scale AI training and inference through intelligent scheduling and orchestration, covering LLM and generative AI workloads.
Tokenization Potential: The “Share & Earn” model leaves an incentive mechanism interface open, showing compatibility with DePIN or broader Web3 models.
MLOps Layer (Application)
The MLOps layer provides direct-facing services for developers and enterprises, including Model Serving, AI Agents, and Studio modules.
Applications: LLM chatbots, multimodal generative AI, and developer copilot tools.
Value Proposition: Abstracts low-level compute and training capabilities into high-level APIs and products, lowering the barrier to use.
It offers ready-to-use agents, low-code environments, and scalable deployment solutions.
Positioning: Comparable to new-generation AI infrastructure platforms such as Anyscale, Together, and Modal—serving as the bridge from infrastructure to applications.
In March 2025, TensorOpera upgraded into a full-stack platform oriented toward AI Agents, with its core products covering AgentOpera AI App, Framework, and Platform: Application Layer: Provides ChatGPT-like multi-agent entry points.Framework Layer: Evolves into an “Agentic OS” through graph-structured multi-agent systems and Orchestrator/Router modules.Platform Layer: Deeply integrates with the TensorOpera model platform and FedML, enabling distributed model services, RAG optimization, and hybrid edge–cloud deployment. The overarching vision is to build “one operating system, one agent network”, allowing developers, enterprises, and users to co-create the next-generation Agentic AI ecosystem in an open and privacy-preserving environment. III. The ChainOpera AI Ecosystem: From Co-Creators and Co-Owners to the Technical Foundation If FedML represents the technical core, providing the open-source foundations of federated learning and distributed training; and TensorOpera abstracts FedML’s research outcomes into a commercialized, full-stack AI infrastructure—then ChainOpera takes this platform capability on-chain. By combining AI Terminals + Agent Social Networks + DePIN-based compute/data layers + AI-Native blockchains, ChainOpera seeks to build a decentralized Agent Network ecosystem. The fundamental shift is this: while TensorOpera remains primarily enterprise- and developer-oriented, ChainOpera leverages Web3-style governance and incentive mechanisms to include users, developers, GPU providers, and data contributors as co-creators and co-owners. In this way, AI Agents are not only “used” but also “co-created and co-owned.”
Co-Creator Ecosystem Through its Model & GPU Platform and Agent Platform, ChainOpera provides toolchains, infrastructure, and coordination layers for collaborative creation. This enables model training, agent development, deployment, and cooperative scaling. The ecosystem’s co-creators include: AI Agent Developers – design and operate agents.Tool & Service Providers – templates, MCPs, databases, APIs.Model Developers – train and publish model cards.GPU Providers – contribute compute power via DePIN or Web2 cloud partnerships.Data Contributors & Annotators – upload and label multimodal datasets.
Together, these three pillars—development, compute, and data—drive the continuous growth of the agent network. Co-Owner Ecosystem ChainOpera also introduces a co-ownership mechanism through shared participation in building the network. AI Agent Creators (individuals or teams) design and deploy new agents via the Agent Platform, launching and maintaining them while pushing functional and application-level innovation.AI Agent Participants (from the community) join agent lifecycles by acquiring and holding Access Units, thereby supporting agent growth and activity through usage and promotion.
These two roles represent the supply side and demand side, together forming a value-sharing and co-development model within the ecosystem. Ecosystem Partners: Platforms and Frameworks ChainOpera collaborates widely to enhance usability, security, and Web3 integration: AI Terminal App combines wallets, algorithms, and aggregation platforms to deliver intelligent service recommendations.Agent Platform integrates multi-framework and low-code tools to lower the development barrier.TensorOpera AI powers model training and inference.FedML serves as an exclusive partner, enabling cross-institution, cross-device, privacy-preserving training. The result is an open ecosystem balancing enterprise-grade applications with Web3-native user experiences. Hardware Entry Points: AI Hardware & Partners Through DeAI Phones, wearables, and robotic AI partners, ChainOpera integrates blockchain and AI into smart terminals. These devices enable dApp interaction, edge-side training, and privacy protection, gradually forming a decentralized AI hardware ecosystem. Central Platforms and Technical Foundation TensorOpera GenAI Platform – provides full-stack services across MLOps, Scheduler, and Compute; supports large-scale model training and deployment.TensorOpera FedML Platform – enterprise-grade federated/distributed learning platform, enabling cross-organization/device privacy-preserving training and serving as a bridge between academia and industry.FedML Open Source – the globally leading federated/distributed ML library, serving as the technical base of the ecosystem with a trusted, scalable open-source framework. ChainOpera AI Ecosystem Structure
IV. ChainOpera Core Products and Full-Stack AI Agent Infrastructure
In June 2025, ChainOpera officially launched its AI Terminal App and decentralized tech stack, positioning itself as a “Decentralized OpenAI.” Its core products span four modules:
Application Layer – AI Terminal & Agent Network
Developer Layer – Agent Creator Center
Model & GPU Layer – Model & Compute Network
CoAI Protocol & Dedicated Chain
Together, these modules cover the full loop from user entry points to underlying compute and on-chain incentives.
AI Terminal App Already integrated with BNB Chain, the AI Terminal supports on-chain transactions and DeFi-native agents. The Agent Creator Center is open to developers, providing MCP/HUB, knowledge base, and RAG capabilities, with continuous onboarding of community-built agents. Meanwhile, ChainOpera launched the CO-AI Alliance, partnering with io.net, Render, TensorOpera, FedML, and MindNetwork.
According to BNB DApp Bay on-chain data (past 30 days), the app recorded 158.87K unique users and 2.6M transactions, ranking #2 in the entire “AI Agent” category on BSC, demonstrating strong and growing on-chain activity.
Super AI Agent App – AI Terminal 👉 chat.chainopera.ai
Positioned as a decentralized ChatGPT + AI Social Hub, the AI Terminal provides: multimodal collaboration, data contribution incentives, DeFi tool integration, cross-platform assistance, and privacy-preserving agent collaboration (Your Data, Your Agent). Users can directly call the open-source DeepSeek-R1 model and community-built agents from mobile. During interactions, both language tokens and crypto tokens circulate transparently on-chain.
Core Value: transforms users from “content consumers” into “intelligent co-creators.” Applicable across DeFi, RWA, PayFi, e-commerce, and other domains via personalized agent networks.
AI Agent Social Network 👉 chat.chainopera.ai/agent-social-network
Envisioned as LinkedIn + Messenger for AI Agents.
Provides virtual workspaces and Agent-to-Agent collaboration mechanisms (MetaGPT, ChatDEV, AutoGEN, Camel).
Evolves single agents into multi-agent cooperative networks spanning finance, gaming, e-commerce, and research.
Gradually enhances memory and autonomy.
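A deliberately minimal sketch of the agent-to-agent collaboration pattern such frameworks implement: a shared workspace where each agent reads the latest message and posts a role-conditioned reply. Names and logic are illustrative assumptions, not ChainOpera’s Agent Social Network or any specific framework’s interface:

```python
# Toy agent-to-agent collaboration loop (generic pattern sketch, not any real framework's API).
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    content: str

@dataclass
class Agent:
    name: str
    role: str                                   # e.g. "researcher", "reviewer"
    inbox: list = field(default_factory=list)

    def act(self):
        """Consume the latest message and produce a role-conditioned reply.
        A real agent would call an LLM here; canned text keeps the sketch self-contained."""
        if not self.inbox:
            return None
        last = self.inbox.pop()
        return Message(self.name, f"{self.role} take on '{last.content}'")

def run_workspace(agents, task, rounds=2):
    """Shared workspace: each agent reads the latest post and replies, so single
    agents compose into a cooperative multi-agent pipeline."""
    transcript = [Message("user", task)]
    for _ in range(rounds):
        for agent in agents:
            agent.inbox.append(transcript[-1])
            out = agent.act()
            if out:
                transcript.append(out)
    return transcript

team = [Agent("A1", "researcher"), Agent("A2", "reviewer")]
for msg in run_workspace(team, "summarize on-chain GPU financing"):
    print(f"{msg.sender}: {msg.content}")
```

Replacing the canned reply with an LLM call and persisting the transcript is what turns this toy loop into the memory-bearing, increasingly autonomous networks described above.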
AI Agent Developer Platform 👉 agent.chainopera.ai
Designed as a “LEGO-style” creation experience for developers.
Supports no-code development and modular extensions.
Blockchain smart contracts ensure ownership rights.
DePIN + cloud infrastructure lowers entry barriers.
The Marketplace enables discovery and distribution.
Core Value: empowers developers to rapidly reach users, with contributions transparently recorded and rewarded.
AI Model & GPU Platform 👉 platform.chainopera.ai
Serving as the infrastructure layer, it combines DePIN and federated learning to address Web3 AI’s reliance on centralized compute. Capabilities include: distributed GPU network, privacy-preserving data training, model and data marketplace, and end-to-end MLOps.
Vision: shift from “big tech monopoly” to “community-driven infrastructure”—enabling multi-agent collaboration and personalized AI.
ChainOpera Full-Stack Architecture Overview
V. ChainOpera AI Roadmap Beyond the already launched full-stack AI Agent platform, ChainOpera AI holds a firm belief that Artificial General Intelligence (AGI) will emerge from multimodal, multi-agent collaborative networks. Its long-term roadmap is structured into four phases:
Phase I (Compute → Capital):
Build decentralized infrastructure: GPU DePIN networks, federated learning, distributed training/inference platforms.
Introduce a Model Router to coordinate multi-end inference.
Incentivize compute, model, and data providers with usage-based revenue sharing.
Phase II (Agentic Apps → Collaborative AI Economy):
Launch AI Terminal, Agent Marketplace, and Agent Social Network, forming a multi-agent application ecosystem.
Deploy the CoAI Protocol to connect users, developers, and resource providers.
Introduce user–developer matching and a credit system, enabling high-frequency interactions and sustainable economic activity.
Phase III (Collaborative AI → Crypto-Native AI):
Expand into DeFi, RWA, payments, and e-commerce scenarios.
Extend to KOL-driven and personal data exchange use cases.
Develop finance/crypto-specialized LLMs and launch Agent-to-Agent payments and wallet systems, unlocking “Crypto AGI” applications.
Phase IV (Ecosystems → Autonomous AI Economies):
Evolve into autonomous subnet economies, each subnet specializing in applications, infrastructure, compute, models, or data.
Enable subnet governance and tokenized operations, while cross-subnet protocols support interoperability and cooperation.
Extend from Agentic AI into Physical AI (robotics, autonomous driving, aerospace).
Disclaimer: This roadmap is for reference only. Timelines and functionalities may adjust dynamically with market conditions and do not constitute a delivery guarantee.
VI. Token Incentives and Protocol Governance
ChainOpera has not yet released a full token incentive plan, but its CoAI Protocol centers on “co-creation and co-ownership.” Contributions are transparently recorded and verifiable via blockchain and a Proof-of-Intelligence (PoI) mechanism. Developers, compute providers, data contributors, and service providers are compensated based on standardized contribution metrics.
Users consume services.
Resource providers sustain operations.
Developers build applications.
All participants share in ecosystem growth dividends. The platform sustains itself via a 1% service fee, allocation rewards, and liquidity support—building an open, fair, and collaborative decentralized AI ecosystem.
Proof-of-Intelligence (PoI) Framework
PoI is ChainOpera’s core consensus mechanism under the CoAI Protocol, designed to establish a transparent, fair, and verifiable incentive and governance system for decentralized AI. It extends Proof-of-Contribution into a blockchain-enabled collaborative machine learning framework, addressing federated learning’s persistent issues: insufficient incentives, privacy risks, and lack of verifiability.
Core Design: Anchored in smart contracts, integrated with decentralized storage (IPFS), aggregation nodes, and zero-knowledge proofs (zkSNARKs). Achieves five key objectives:
Fair rewards based on contribution, ensuring trainers are incentivized for real model improvements.
Data remains local, guaranteeing privacy protection.
Robustness mechanisms against malicious participants (poisoning, aggregation attacks).
ZKP verification for critical processes: model aggregation, anomaly detection, contribution evaluation.
Efficiency and generality across heterogeneous data and diverse learning tasks.
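The “fair rewards based on contribution” objective can be made concrete with a simple leave-one-out scoring sketch: score each trainer by how much the aggregated model degrades without its update, then split the round’s reward pool pro rata. The scheme and numbers below are assumptions for illustration; ChainOpera has not published PoI scoring at this level of detail:

```python
# Illustrative proof-of-contribution reward split (assumed scheme; ChainOpera's
# actual Proof-of-Intelligence metrics are not publicly specified at this granularity).

def contribution_scores(baseline_loss, losses_without):
    """Score each trainer by how much validation loss worsens when its update
    is left out (a leave-one-out proxy for marginal contribution)."""
    raw = {k: max(loss - baseline_loss, 0.0) for k, loss in losses_without.items()}
    total = sum(raw.values()) or 1.0
    return {k: v / total for k, v in raw.items()}

def distribute_rewards(pool, scores):
    """Split the round's reward pool pro rata to contribution scores."""
    return {k: round(pool * s, 2) for k, s in scores.items()}

# Validation loss of the aggregated model, and with each client's update removed (assumed numbers).
baseline = 0.20
leave_one_out = {"client_a": 0.32, "client_b": 0.24, "client_c": 0.21}

scores = contribution_scores(baseline, leave_one_out)
print(distribute_rewards(100.0, scores))
# {'client_a': 70.59, 'client_b': 23.53, 'client_c': 5.88}
```

In the PoI design described above, the aggregation, scoring, and payout steps would additionally be committed via smart contracts and verified with zkSNARKs rather than trusted to a single aggregation server.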
Token Value Flows in Full-Stack AI ChainOpera’s token design is anchored in utility and contribution recognition, not speculation. It revolves around five core value streams: LaunchPad – for agent/application initiation.Agent API – service access and integration.Model Serving – inference and deployment fees.Contribution – data annotation, compute sharing, or service input.Model Training – distributed training tasks. Stakeholders: AI Users – spend tokens to access services or subscribe to apps; contribute by providing/labeling/staking data.Agent & App Developers – use compute/data for development; rewarded for contributing agents, apps, or datasets.Resource Providers – contribute compute, data, or models; rewarded transparently.Governance Participants (Community & DAO) – use tokens to vote, shape mechanisms, and coordinate the ecosystem.Protocol Layer (CoAI) – sustains development through service fees and automated balancing of supply/demand.Nodes & Validators – secure the network by providing validation, compute, and security services. Protocol Governance ChainOpera adopts DAO-based governance, where token staking enables participation in proposals and voting, ensuring transparency and fairness. Governance mechanisms include: Reputation System – validates and quantifies contributions.Community Collaboration – proposals and voting drive ecosystem evolution.Parameter Adjustments – covering data usage, security, and validator accountability. The overarching goal: prevent concentration of power, ensure system stability, and sustain community co-creation. VIII. Team Background and Project Financing The ChainOpera project was co-founded by Professor Salman Avestimehr, a leading scholar in federated learning, and Dr. Aiden Chaoyang He. The core team spans academic and industry backgrounds from institutions such as UC Berkeley, Stanford, USC, MIT, Tsinghua University, and tech leaders including Google, Amazon, Tencent, Meta, and Apple. The team combines deep research expertise with extensive industry execution capabilities and has grown to over 40 members to date. Co-Founder: Professor Salman Avestimehr Title & Roles: Dean’s Professor of Electrical & Computer Engineering at University of Southern California (USC), Founding Director of the USC-Amazon Center on Trusted AI, and head of the vITAL (Information Theory & Machine Learning) Lab at USC.Entrepreneurship: Co-Founder & CEO of FedML, and in 2022 co-founded TensorOpera/ChainOpera AI.Education & Honors: Ph.D. in EECS from UC Berkeley (Best Dissertation Award). IEEE Fellow with 300+ publications in information theory, distributed computing, and federated learning, cited over 30,000 times. Recipient of PECASE, NSF CAREER Award, and the IEEE Massey Award, among others.Contributions: Creator of the FedML open-source framework, widely adopted in healthcare, finance, and privacy-preserving AI, which became a core foundation for TensorOpera/ChainOpera AI.
Co-Founder: Dr. Aiden Chaoyang He Title & Roles: Co-Founder & President of TensorOpera/ChainOpera AI; Ph.D. in Computer Science from USC; original creator of FedML.Research Focus: Distributed & federated learning, large-scale model training, blockchain, and privacy-preserving computation.Industry Experience: Previously held R&D roles at Meta, Amazon, Google, Tencent; served in core engineering and management positions at Tencent, Baidu, and Huawei, leading the deployment of multiple internet-scale products and AI platforms.Academic Impact: Published 30+ papers with 13,000+ citations on Google Scholar. Recipient of the Amazon Ph.D. Fellowship, Qualcomm Innovation Fellowship, and Best Paper Awards at NeurIPS and AAAI.Technical Contributions: Led the development of FedML, one of the most widely used open-source frameworks in federated learning, supporting 27 billion daily requests. Core contributor to FedNLP and hybrid model parallel training methods, applied in decentralized AI projects such as Sahara AI.
In December 2024, ChainOpera AI announced the completion of a $3.5M seed round, bringing its total funding (combined with TensorOpera) to $17M. Funds will be directed toward building a blockchain Layer 1 and AI operating system for decentralized AI Agents.
Lead Investors: Finality Capital, Road Capital, IDG Capital
Other Participants: Camford VC, ABCDE Capital, Amber Group, Modular Capital
Strategic Backers: Sparkle Ventures, Plug and Play, USC
Notable Individual Investors: Sreeram Kannan, Founder of EigenLayer, and David Tse, Co-Founder of BabylonChain
The team stated that this round will accelerate its vision of creating a decentralized AI ecosystem where resource providers, developers, and users co-own and co-create.
Agent Network Layer: ChainOpera vs. Olas At the agent-network level, the most representative projects are ChainOpera and Olas Network. ChainOpera: rooted in federated learning, builds a full-stack loop across models, compute, and agents. Its Agent Social Network acts as a testbed for multi-agent interaction and social collaboration.Olas Network (Autonolas / Pearl): originated from DAO collaboration and the DeFi ecosystem, positioned as a decentralized autonomous service network. Through Pearl, it delivers direct-to-market DeFi agent applications—showing a very different trajectory from ChainOpera.
X. Investment Thesis and Risk Analysis Investment Thesis Technical Moat: ChainOpera’s strength lies in its unique evolutionary path: from FedML (the benchmark open-source framework for federated learning) → TensorOpera (enterprise-grade full-stack AI infrastructure) → ChainOpera (Web3-enabled agent networks + DePIN + tokenomics). This trajectory integrates academic foundations, industrial deployment, and crypto-native narratives, creating a differentiated moat.Applications & User Scale: The AI Terminal has already reached hundreds of thousands of daily active users and a thriving ecosystem of 1,000+ agent applications. It ranks #1 in the AI category on BNBChain DApp Bay, showing clear on-chain user growth and verifiable transaction activity. Its multimodal scenarios, initially rooted in crypto-native use cases, have the potential to expand gradually into the broader Web2 user base.Ecosystem Partnerships: ChainOpera launched the CO-AI Alliance, partnering with io.net, Render, TensorOpera, FedML, and MindNetwork to build multi-sided network effects across GPUs, models, data, and privacy computing. In parallel, its collaboration with Samsung Electronics to validate mobile multimodal GenAI demonstrates expansion potential into hardware and edge AI.Token & Economic Model: ChainOpera’s tokenomics are based on the Proof-of-Intelligence consensus, with incentives distributed across five value streams: LaunchPad, Agent API, Model Serving, Contribution, and Model Training. A 1% platform service fee, reward allocation, and liquidity support form a positive feedback loop, avoiding reliance on pure “token speculation” and enhancing sustainability. Potential Risks Technical execution risks: ChainOpera’s proposed five-layer decentralized architecture spans a wide scope. Cross-layer coordination—especially in distributed inference for large models and privacy-preserving training—still faces performance and stability challenges and has not yet been validated at scale.User and ecosystem stickiness: While early user growth is notable, it remains to be seen whether the Agent Marketplace and developer toolchain can sustain long-term activity and high-quality contributions. The current Agent Social Network is mainly LLM-driven text dialogue; user experience and retention still need refinement. Without carefully designed incentives, the ecosystem risks short-term hype without long-term value.Sustainability of the business model: At present, revenue primarily depends on platform service fees and token circulation; stable cash flows are not yet established. Compared with AgentFi or Payment-focused applications that carry stronger financial or productivity attributes, ChainOpera’s current model still requires further validation of its commercial value. In addition, the mobile and hardware ecosystem remains exploratory, leaving its market prospects uncertain. Disclaimer: This report was prepared with assistance from AI tools (ChatGPT-5). The author has made every effort to proofread and ensure accuracy, but some errors or omissions may remain. Readers should note that crypto asset markets often exhibit divergence between project fundamentals and secondary-market token performance. This report is intended solely for information consolidation and academic/research discussion. It does not constitute investment advice, nor should it be interpreted as a recommendation to buy or sell any token.
By 0xjacobzhao | https://linktr.ee/0xjacobzhao Pendle is undoubtedly one of the most successful DeFi protocols of the current crypto cycle. While many protocols have stalled amid liquidity shortages and fading narratives, Pendle has stood out through its unique yield-splitting and trading mechanism, becoming the "price discovery venue" for yield-bearing assets. Through deep integration with stablecoins, LSTs/LRTs, and other yield-generating assets, it has secured its position as foundational "DeFi yield-rate infrastructure."
From zkVM to Open Proof Market: An Analysis of RISC Zero and Boundless
In blockchain, cryptography is the fundamental basis of security and trust. Zero-Knowledge Proofs (ZK) can compress any complex off-chain computation into a succinct proof that can be efficiently verified on-chain—without relying on third-party trust—while also enabling selective input hiding to preserve privacy. With its combination of efficient verification, universality, and privacy, ZK has become a key solution across scaling, privacy, and interoperability use cases. Although challenges remain, such as the high cost of proof generation and the complexity of circuit development, ZK's engineering feasibility and degree of adoption have already surpassed other approaches, making it the most widely adopted framework for trusted computation.
I. The Evolution of the ZK Track
The development of Zero-Knowledge Proofs has been neither instantaneous nor accidental, but rather the result of decades of theoretical accumulation and engineering breakthroughs. Broadly, it can be divided into the following stages:
Theoretical Foundations & Technical Breakthroughs (1980s–2010s): The ZK concept was first proposed by MIT scholars Shafi Goldwasser, Silvio Micali, and Charles Rackoff, initially limited to interactive proof theory. In the 2010s, the emergence of Non-Interactive Zero-Knowledge proofs (NIZKs) and zk-SNARKs significantly improved proof efficiency, though they still relied on trusted setup.
Blockchain Applications (Late 2010s): Zcash introduced zk-SNARKs to enable private payments, marking the first large-scale blockchain deployment of ZK. However, due to the high cost of proof generation, real-world applications remained relatively limited.
Explosive Growth & Expansion (2020s–present): During this period, ZK technology entered the industry mainstream:
ZK Rollups: Off-chain batch computation with on-chain proofs enabled high throughput and security inheritance, becoming the core Layer 2 scaling path.
zk-STARKs: StarkWare introduced zk-STARKs, eliminating trusted setup while enhancing transparency and scalability.
zkEVMs: Teams like Scroll, Taiko, and Polygon advanced EVM bytecode-level proofs, enabling seamless migration of existing Solidity applications.
General-purpose zkVMs: Projects such as RISC Zero, Succinct SP1, and Delphinus zkWasm supported verifiable execution of arbitrary programs, extending ZK from a scaling tool to a "trustworthy CPU."
zkCoprocessors: Wrapping zkVMs as coprocessors to outsource complex logic (e.g., RISC Zero Steel, Succinct Coprocessor).
zkMarketplaces: Marketizing proof computation into decentralized prover networks (e.g., Boundless), pushing ZK toward becoming a universal compute layer.
Today, ZK technology has evolved from an esoteric cryptographic concept into a core component of blockchain infrastructure. Beyond supporting scalability and privacy, it is also demonstrating strategic value in interoperability, financial compliance, and frontier fields such as ZKML (zero-knowledge machine learning). With the continuous improvement of toolchains, hardware acceleration, and proof networks, the ZK ecosystem is rapidly moving toward large-scale and universal adoption.
II. The Application Landscape of ZK Technology: Scalability, Privacy, and Interoperability
Scalability, Privacy, and Interoperability & Data Integrity form the three fundamental application scenarios of ZK-based "trusted computation." They directly address blockchain's native pain points: insufficient performance, lack of privacy, and trust across multiple chains.
1. Scalability: Scalability is both the earliest and most widely deployed use case for ZK. The core idea is to move transaction execution off-chain and only verify succinct proofs on-chain, thereby significantly increasing TPS and lowering costs without compromising security. Representative paths include:
zkRollups (zkSync, Scroll, Polygon zkEVM): compressing batches of transactions for scaling.
zkEVMs: building circuits at the EVM instruction level for native Ethereum compatibility.
General-purpose zkVMs (RISC Zero, Succinct): enabling verifiable outsourcing of arbitrary logic.
2. Privacy: Privacy aims to prove the validity of a transaction or action without revealing sensitive data. Typical applications include:
Private payments (Zcash, Aztec): ensuring transfer validity without disclosing amounts or counterparties.
Private voting & DAO governance: enabling governance while keeping individual votes confidential.
Private identity / KYC (zkID, zkKYC): proving "eligibility" without disclosing unnecessary information.
3. Interoperability & Data Integrity: Interoperability is the critical ZK path for solving trust issues in a multi-chain world. By generating proofs of another chain's state, cross-chain interactions can eliminate reliance on centralized relays. Representative approaches include:
zkBridges: cross-chain state proofs.
Light client verification: efficient verification of source-chain headers on the target chain. Key projects: Polyhedra, Herodotus.
Meanwhile, ZK is also widely used in data and state proofs, such as:
Axiom, Space and Time's zkQuery/zkSQL for historical data and SQL queries.
IoT and storage integrity verification, ensuring off-chain data is verifiably trusted on-chain.
Future Extensions: On top of these three foundational scenarios, ZK technology has the potential to extend into broader industries:
AI (zkML): generating verifiable proofs for model inference or training, enabling "trustworthy AI."
Financial compliance: proof-of-reserves (PoR), clearing, and auditing, reducing reliance on trust.
Gaming & scientific computing: ensuring fairness in GameFi or the integrity of experiments in DeSci.
At its core, all of these represent the expansion of "verifiable computation + data proofs" into diverse industries.
III. Beyond zkEVM: The Rise of General-Purpose zkVMs and Proof Markets In 2022, Ethereum co-founder Vitalik Buterin introduced the four types of zkEVMs (Type 1–4), highlighting the trade-offs between compatibility and performance:
Type 1 (Fully Equivalent): Bytecode identical to Ethereum L1; lowest migration cost but slowest proving. Example: Taiko.
Type 2 (Fully Compatible): Maintains high EVM equivalence with minimal low-level optimizations; strongest compatibility. Examples: Scroll, Linea.
Type 2.5 (Quasi-Compatible): Slight modifications to the EVM (e.g., gas costs, precompile support), sacrificing limited compatibility for better performance. Examples: Polygon zkEVM, Kakarot (EVM on Starknet).
Type 3 (Partially Compatible): Deeper architectural modifications allow most applications to run but cannot fully reuse Ethereum infrastructure. Example: zkSync Era.
Type 4 (Language-Level Compatible): Abandons bytecode compatibility, compiling directly from high-level languages into a zkVM. Delivers the best performance but requires rebuilding the ecosystem. Example: Starknet (Cairo).
This stage has often been described as the "zkRollup wars," aimed at alleviating Ethereum's execution bottlenecks. However, two key limitations soon became apparent: (1) the difficulty of circuitizing the EVM, which constrained proving efficiency, and (2) the realization that ZK's potential extends far beyond scaling—into cross-chain verification, data proofs, and even AI computation. Against this backdrop, general-purpose zkVMs have risen, replacing the zkEVM's "Ethereum-compatibility mindset" with a shift toward chain-agnostic trusted computation. Built on universal instruction sets (e.g., RISC-V, LLVM IR, Wasm), zkVMs support mainstream languages such as Rust and C/C++, allowing developers to build arbitrary application logic with mature libraries and then generate proofs for on-chain verification. Representative projects include RISC Zero (RISC-V) and Delphinus zkWasm (Wasm). In essence, zkVMs are not merely Ethereum scaling tools, but rather the "trusted CPUs" of the ZK world.
RISC-V Approach: Represented by RISC Zero, this path directly adopts the open RISC-V instruction set as the zkVM core. It benefits from an open ecosystem, a simple and circuit-friendly instruction set, and broad compatibility with Rust, C, and C++. It is well-suited for building a "general-purpose zkCPU," though it lacks native compatibility with Ethereum bytecode and therefore requires coprocessor integration.
LLVM IR Approach: Represented by Succinct SP1, this design uses LLVM IR as the front-end for multi-language support, while the back-end remains a RISC-V zkVM. In essence, it is "LLVM front-end + RISC-V back-end." This makes it more versatile than pure RISC-V, but LLVM IR's complexity increases proving overhead.
Wasm Approach: Represented by Delphinus zkWasm, this route leverages the mature WebAssembly ecosystem, which is widely known to developers and inherently cross-platform. However, Wasm instructions are more complex and less circuit-friendly, which limits proving efficiency compared to RISC-V and LLVM IR.
As ZK technology evolves, it is trending toward modularization and marketization:
zkVMs provide the universal, trusted execution environment—the CPU/compiler layer of zero-knowledge computation—supplying the foundational verifiable compute for applications.
zk-Coprocessors encapsulate zkVMs as accelerators, enabling EVM and other chains to outsource complex computations off-chain, then verify them on-chain through proofs. Examples include RISC Zero Steel and Lagrange, which play roles analogous to "GPUs/coprocessors."
zkMarketplaces push this further by decentralizing the distribution of proving tasks.
Global prover nodes compete to complete workloads, creating a compute marketplace for zero-knowledge proofs. Boundless is a prime example. Thus, the zero-knowledge technology stack is gradually forming a progression: zkVM → zk-Coprocessor → zkMarketplace. This evolution marks the transformation of ZK proofs from a narrow Ethereum scaling tool into a general-purpose trusted computing infrastructure. Within this trajectory, RISC Zero’s adoption of RISC-V as its zkVM kernel strikes the optimal balance between openness, circuitization efficiency, and ecosystem compatibility. It not only delivers a low-barrier developer experience but also extends through layers like Steel, Bonsai, and Boundless to evolve zkVMs into zk-Coprocessors and decentralized proof markets—unlocking far broader application horizons.
IV. RISC Zero's Technical Path and Ecosystem Landscape
RISC-V is an open, royalty-free instruction set architecture not controlled by any single vendor, inherently aligned with decentralization. Building on this open architecture, RISC Zero has developed a zkVM compatible with general-purpose languages like Rust, breaking through the limitations of Solidity within the Ethereum ecosystem. This allows developers to directly compile standard Rust programs into applications capable of generating zero-knowledge proofs. As a result, ZK technology extends beyond blockchain smart contracts into the broader domain of general-purpose computation.
RISC0 zkVM: A General-Purpose Trusted Computing Environment
Unlike zkEVM projects that must remain compatible with the complex EVM instruction set, the RISC0 zkVM is built on the simpler, more flexible RISC-V architecture. Applications are structured as Guest Code, compiled into ELF binaries. The Host runs these programs through the Executor, recording the execution process as a Session. A Prover then generates a verifiable Receipt, which contains both the public output (Journal) and the cryptographic proof (Seal). Any third party can verify the Receipt to confirm the correctness of the computation—without needing to re-execute it.
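To make the Guest/Host/Receipt flow concrete, here is a minimal sketch in Rust against the public risc0-zkvm crate. It assumes the standard cargo-risczero project layout; exact builder and prover signatures vary across SDK releases, and SQUARE_ELF / SQUARE_ID are placeholders for the identifiers the guest build step generates, not literal constants defined here.

```rust
// Guest (runs inside the zkVM, compiled to a RISC-V ELF).
// Reads a private input, squares it, and commits the result to the Journal.
use risc0_zkvm::guest::env;

fn main() {
    let x: u64 = env::read();   // private input supplied by the Host
    env::commit(&(x * x));      // public output recorded in the Receipt's Journal
}
```

```rust
// Host (drives the Executor/Prover and checks the Receipt).
use risc0_zkvm::{default_prover, ExecutorEnv};

fn main() {
    // SQUARE_ELF / SQUARE_ID are emitted by the guest build step (placeholder names).
    let env = ExecutorEnv::builder()
        .write(&7u64).unwrap()      // pass the guest's input
        .build().unwrap();

    let receipt = default_prover()
        .prove(env, SQUARE_ELF).unwrap()   // run the Session and produce a Receipt
        .receipt;

    let y: u64 = receipt.journal.decode().unwrap();  // read the public Journal
    receipt.verify(SQUARE_ID).unwrap();              // check the Seal against the guest image ID
    assert_eq!(y, 49);
}
```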
The Release of R0VM 2.0 (April 2025): Entering the Real-Time zkVM Era
In April 2025, the launch of R0VM 2.0 marked the beginning of real-time zkVMs: Ethereum block proving time was reduced from 35 minutes to 44 seconds, costs dropped by up to 5x, and user memory was expanded to 3 GB, enabling more complex application scenarios. Two critical precompiles—BN254 and BLS12-381—were also added, fully covering Ethereum's mainstream needs. More importantly, R0VM 2.0 introduced formal verification for security, with most RISC-V circuits already deterministically verified. The target is to achieve the first block-level real-time zkVM (<12-second proofs) by July 2025.
zkCoprocessor Steel: A Bridge for Off-Chain Computation
The core idea of a zkCoprocessor is to offload complex computational tasks from on-chain execution to off-chain environments, returning only a zero-knowledge proof of the result. Smart contracts need only verify the proof rather than recompute the entire task, thereby significantly reducing gas costs and breaking performance bottlenecks. RISC0's Steel provides Solidity with an external proof interface, enabling outsourcing of large-scale historical state queries or cross-block batch computations—allowing even tens of Ethereum blocks to be verified with a single proof.
Bonsai: SaaS-Based Proving Service
RISC Zero's Bonsai is an officially hosted Prover-as-a-Service platform that distributes proving tasks across its GPU clusters, delivering high-performance proofs without developer-managed hardware. With the Bento SDK, Solidity contracts can interact seamlessly with the zkVM. By contrast, Boundless decentralizes the proving process through an open marketplace, making the two approaches complementary.
RISC Zero's Full Product Matrix
RISC Zero's ecosystem extends upward from the zkVM, gradually forming a complete matrix that spans the execution, network, marketplace, and application layers.
V. The ZK Marketplace: Decentralized Commoditization of Trusted Computation
The ZK marketplace decouples the costly and complex process of proof generation, transforming it into a decentralized, tradable commodity of computation. Through globally distributed prover networks, tasks are outsourced via competitive bidding, dynamically balancing cost and efficiency. Economic incentives continuously attract GPU and ASIC participants, creating a self-reinforcing cycle. Boundless and Succinct are leading representatives of this emerging sector.
5.1 Boundless: A General-Purpose Zero-Knowledge Compute Marketplace
Concept & Positioning
Boundless is a general-purpose ZK protocol developed by RISC Zero, designed to provide scalable verifiable compute capabilities for all blockchains. Its core innovation lies in decoupling proof generation from blockchain consensus, distributing computational tasks through a decentralized marketplace mechanism. Developers submit proof requests, and prover nodes compete to execute them via decentralized incentive mechanisms. Rewards are issued based on Proof of Verifiable Work: unlike traditional PoW's wasteful energy expenditure, computational power is directly converted into useful ZK results for real applications. In this way, Boundless transforms raw compute resources into assets of intrinsic value.
Architecture & Mechanism
The Boundless workflow consists of the following steps (a toy sketch of this order lifecycle follows the list):
Request submission – Developers submit zkVM programs and inputs to the marketplace.
Node bidding – Prover nodes evaluate the task and place bids; once locked, the winning node gains execution rights.
Proof generation & aggregation – Complex computations are broken into subtasks, each generating zk-STARK proofs, which are then recursively aggregated into a single succinct proof, dramatically reducing on-chain verification costs.
Cross-chain verification – Boundless provides unified verification interfaces across multiple chains, enabling "build once, reuse everywhere."
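The toy Rust model below only restates the submit → lock → fulfill/expire flow described above with illustrative types and numbers; it is not the Boundless SDK or its on-chain contract interface.

```rust
// Toy model of a Boundless-style proving order; illustrative only,
// not the Boundless SDK or its on-chain contract interface.
#[derive(Debug)]
enum Order {
    Submitted { max_fee: u128 },                             // request posted with a fee ceiling
    Locked { prover: String, bid_fee: u128, deadline: u64 }, // winning bidder holds execution rights
    Fulfilled { prover: String, fee_paid: u128 },            // aggregated proof verified on-chain
    Expired,                                                 // deadline missed; stake is slashable
}

// A prover bid locks the order if it does not exceed the requester's fee ceiling.
fn lock(order: Order, prover: &str, bid_fee: u128, deadline: u64) -> Order {
    match order {
        Order::Submitted { max_fee } if bid_fee <= max_fee => Order::Locked {
            prover: prover.to_string(),
            bid_fee,
            deadline,
        },
        other => other, // over-ceiling bids or wrong state: no change
    }
}

// Delivering a verifying (aggregated) proof before the deadline settles the order;
// otherwise it expires and the prover's collateral can be slashed.
fn fulfill(order: Order, now: u64, proof_verifies: bool) -> Order {
    match order {
        Order::Locked { prover, bid_fee, deadline } if now <= deadline && proof_verifies => {
            Order::Fulfilled { prover, fee_paid: bid_fee }
        }
        Order::Locked { .. } => Order::Expired,
        other => other,
    }
}

fn main() {
    let order = Order::Submitted { max_fee: 1_000 };
    let order = lock(order, "prover-a", 800, 500);
    let order = fulfill(order, 450, true);
    println!("{order:?}"); // Fulfilled { prover: "prover-a", fee_paid: 800 }
}
```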
This marketplace architecture allows smart contracts to confirm computations by verifying a short proof—without re-executing heavy tasks—thereby breaking through gas and block capacity limits.
Ecosystem & Applications
As a marketplace-layer protocol, Boundless complements other RISC Zero products:
Steel – An EVM zkCoprocessor for outsourcing complex Solidity execution to off-chain environments with proof-backed verification.
OP Kailua – A ZK upgrade path for OP Stack chains, improving both security and finality.
Boundless targets sub-12s real-time proofs on Ethereum, enabled by FRI optimizations, polynomial parallelization, and VPU hardware acceleration. As prover nodes and demand scale, Boundless aims to form a self-reinforcing compute network—reducing gas costs while unlocking new application frontiers such as verifiable on-chain AI, cross-chain liquidity, and unbounded computation.
5.2 Boundless for Apps: Breaking the Gas Ceiling
Boundless for Apps provides Ethereum and L2 applications with "infinite compute capacity" by offloading complex logic to the decentralized proving network, then verifying results on-chain. Its advantages include unlimited execution, constant gas costs, Solidity/Vyper compatibility, and native cross-chain support. At the core is Steel, the zkCoprocessor for the EVM, enabling developers to build contracts with large-scale state queries, cross-block computations, and event-driven logic. Combined with the R0-Helios light client, Steel also supports cross-chain data verification between Ethereum and OP Stack. Projects including EigenLayer are already exploring integrations, highlighting its potential in DeFi and multi-chain interoperability.
Steel: EVM's Scalable Compute Layer
Steel's primary goal is to overcome Ethereum's limits on gas, single-block execution, and historical state access. By migrating heavy logic off-chain and returning only proofs, Steel delivers near-unlimited compute with fixed verification costs. In Steel 2.0, developers gain three major capabilities to expand contract design:
Event-driven logic – Using event logs directly as inputs, removing reliance on centralized indexers.
Historical state queries – Accessing any storage slot or account balance since Ethereum's Dencun upgrade.
Cross-block computation – Performing calculations spanning multiple blocks (e.g., moving averages, cumulative metrics) and committing them on-chain with a single proof.
This design significantly lowers costs and makes previously infeasible applications—such as high-frequency computation, state backtracking, and cross-block logic—possible. Steel is emerging as a key bridge between off-chain computation and on-chain verification.
5.3 Boundless for Rollups: ZK-Accelerated Rollup Settlement
Boundless for Rollups leverages the decentralized proving network to provide OP Stack-based L2s with faster and more secure settlement. Its core advantages include:
Faster finality – Reducing settlement from 7 days to ~3 hours (Hybrid mode) or under 1 hour (Validity mode).
Stronger security – Gradual upgrades from ZK Fraud Proofs to full Validity Proofs, achieving cryptographic guarantees.
Decentralized progression – Powered by a distributed prover network with low collateral requirements, enabling rapid Stage 2 decentralization.
Native scalability – Maintaining stable performance and predictable costs even on high-throughput chains.
OP Kailua: The ZK Upgrade Path for OP Chains
Launched by RISC Zero, OP Kailua is the flagship Boundless-for-Rollups solution. It allows OP Stack chains to surpass the performance and security limitations of traditional optimistic rollups. Kailua supports two modes for progressive upgrading:
Hybrid Mode (ZK Fraud Proofs) – Replaces multi-round interactive fault proofs with ZK Fraud Proofs, simplifying dispute resolution and cutting costs. The malicious actor bears the proving fees, reducing finality to ~3 hours.
Validity Mode (ZK Validity Proofs) – Transitions to a full ZK Rollup, eliminating disputes entirely with validity proofs, achieving sub-1-hour finality and the highest security guarantees.
Kailua enables OP chains to evolve smoothly from Optimistic → Hybrid → ZK Rollup, meeting Stage 2 decentralization requirements while lowering costs in high-throughput scenarios. Applications and tooling remain intact, ensuring ecosystem continuity while unlocking fast finality, reduced staking costs, and cryptographic security. Already, Eclipse has integrated Kailua for ZK Fraud Proofs, while BOB has migrated fully to ZK Rollup architecture.
5.4 The Signal: ZK Consensus Layer for Cross-Chain Interoperability
Positioning & Mechanism
The Signal is Boundless' flagship application—an open-source ZK consensus client. It compresses Ethereum Beacon Chain finality events into a single ZK proof, verifiable by any chain or contract. This enables trust-minimized cross-chain interoperability without multisigs or oracles. Its core value lies in giving Ethereum's final state "global readability," establishing a foundation for cross-chain liquidity and logic while reducing redundant computation and gas costs.
Operating Mechanism
Boost The Signal – Users can submit proof requests to "boost" the signal; all ETH is directed toward funding new proofs, extending signal longevity and benefiting all chains and apps.
Prove The Signal – Anyone can run a Boundless prover node to generate and broadcast Ethereum block proofs, replacing multisig verification with a "mathematics over trust" consensus layer.
Expansion Path
Generate continuous proofs of Ethereum's finalized blocks, forming the "Ethereum Signal."
Extend to other blockchains, creating a unified multi-chain signal.
Interconnect chains on a shared cryptographic signal layer, enabling cross-chain interoperability without wrapped assets or centralized bridges.
Already, 30+ teams are contributing to The Signal. More than 1,500 prover nodes are active on the Boundless marketplace, competing for 0.5% token rewards. Any GPU owner can join permissionlessly. The Signal is live on Boundless Mainnet Beta, with production-grade proof requests already supported on Base.
VI. Boundless Roadmap, Mainnet Progress, and Ecosystem
Boundless has followed a clear, phased development path:
Phase I – Developer Access: Early access for developers with free proving resources to accelerate application experimentation.
Phase II – Public Testnet 1: Launch of the first public testnet, introducing a two-sided marketplace where developers and prover nodes interact in real environments.
Phase III – Public Testnet 2: Activation of incentive structures and the full economic model to test a self-sustaining decentralized proving network.
Phase IV – Mainnet: Full mainnet launch, providing universal ZK compute capacity for all blockchains.
On July 15, 2025, the Boundless Mainnet Beta officially went live, with production deployment first integrated on Base. Users can now submit proof requests with real funds, while prover nodes join permissionlessly, with each node supporting up to 100 GPUs for bidding. As a showcase application, the team released The Signal, an open-source ZK consensus client that compresses Ethereum Beacon Chain finality events into a single proof verifiable by any chain or contract. This effectively gives Ethereum's finalized state "global readability," laying the foundation for cross-chain interoperability and secure settlement.
Boundless Explorer data highlights the network's rapid growth and resilience:
As of August 18, 2025, the network had processed 542.7 trillion compute cycles, completed 399,000 orders, and supported 106 independent programs.
The largest single proof exceeded 106 billion compute cycles (August 18).
Peak compute throughput reached 25.93 MHz (August 14), setting industry records.
Daily order volume surpassed 15,000 orders in mid-August, with daily peak compute exceeding 40 trillion cycles, showing exponential growth momentum.
The order fulfillment success rate consistently stayed between 98% and 100%, demonstrating a mature and reliable marketplace mechanism.
As prover competition intensified, unit compute costs dropped to nearly 0 Wei per cycle, signaling the arrival of a high-efficiency, low-cost era of large-scale verifiable computation.
Boundless has also attracted strong participation from leading mining players. Major firms such as Bitmain have begun developing dedicated ASIC miners, while 6block, Bitfufu, Powerpool, Intchain, and Nano Labs have integrated existing mining pool resources into ZK proving nodes. This influx of miners marks Boundless' progression toward an industrial-scale ZK marketplace, bridging the gap between cryptographic research and mainstream compute infrastructure.
VII. ZK Coin Tokenomics Design
ZK Coin (ZKC) is the native token of the Boundless protocol, serving as the economic and security anchor of the entire network. Its design goal is to build a trusted, low-friction, and sustainably scalable marketplace for zero-knowledge computation. The total supply of ZKC is 1 billion, with a declining annual inflation model: initial annual inflation of ~7%, gradually decreasing to 3% by year eight and remaining stable thereafter. All newly issued tokens are distributed through Proof of Verifiable Work (PoVW), ensuring that issuance is directly tied to real computational tasks.
Proof of Verifiable Work (PoVW) is Boundless' core innovation. It turns verifiable computation from a technical capability into a measurable, tradable commodity. Traditional blockchains rely on redundant execution by all nodes and are constrained by single-node compute bottlenecks. PoVW, by contrast, enables single execution with network-wide verification via zero-knowledge proofs. It also introduces a trustless metering system that converts computational work into a priced resource. This allows computation to scale on demand, discover fair pricing through markets, formalize service contracts, and incentivize prover participation—creating a demand-driven positive feedback loop. For the first time, blockchains can transcend compute scarcity, enabling applications in cross-chain interoperability, off-chain execution, complex computation, and privacy-preserving use cases. PoVW thus establishes both the economic and technical foundation for Boundless as a universal ZK compute infrastructure.
Token Roles & Value Capture
ZKC functions as the native token and economic backbone of Boundless:
Staking & Collateral – Provers must stake ZKC (≥10× the maximum request fee) before accepting jobs. If they fail to deliver proofs on time, penalties apply: 50% of the stake is slashed (burned) and 50% is redistributed to other provers.
Proof of Verifiable Work (PoVW) – Provers earn ZKC rewards for generating proofs, analogous to mining. Reward distribution: 75% to provers, 25% to protocol stakers.
Universal Payment Layer – Applications pay proof fees in native tokens (ETH, USDC, SOL, etc.), but provers are required to stake ZKC. Thus, all proofs are ultimately collateralized by ZKC.
Governance – ZKC holders participate in protocol governance, including marketplace rules, zkVM integrations, and ecosystem funding.
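The collateral, slashing, and reward-split rules above reduce to simple arithmetic. The sketch below restates them in Rust; the percentages and the 10× collateral multiple come from this section, while the concrete fee and emission figures (and the linear shape of the inflation decline) are illustrative assumptions.

```rust
// Restating the ZKC economic parameters above as plain arithmetic.
// Percentages and the 10x multiple follow the text; fee/emission values are made up.

/// Minimum ZKC a prover must stake before accepting a job (>= 10x the max request fee).
fn min_collateral(max_request_fee: u128) -> u128 {
    10 * max_request_fee
}

/// On a missed proof, the staked collateral is split: 50% burned, 50% to other provers.
fn slash(collateral: u128) -> (u128, u128) {
    let burned = collateral / 2;
    (burned, collateral - burned)
}

/// PoVW epoch emissions: 75% to provers, 25% to protocol stakers.
fn split_epoch_emission(emission: u128) -> (u128, u128) {
    let to_provers = emission * 75 / 100;
    (to_provers, emission - to_provers)
}

/// Annual inflation: ~7% in year 1, declining to 3% by year 8, flat thereafter.
/// The decline is assumed linear here purely for illustration.
fn inflation_rate(year: u32) -> f64 {
    match year {
        0 | 1 => 0.07,
        y if y >= 8 => 0.03,
        y => 0.07 - (y as f64 - 1.0) * (0.07 - 0.03) / 7.0,
    }
}

fn main() {
    assert_eq!(min_collateral(100), 1_000);
    assert_eq!(slash(1_000), (500, 500));
    assert_eq!(split_epoch_emission(1_000_000), (750_000, 250_000));
    println!("year-4 inflation ~ {:.2}%", inflation_rate(4) * 100.0);
}
```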
Token Distribution (Initial Supply: 1B ZKC)
Ecosystem Growth (49%)
31% Ecosystem Fund: supports app development, developer tools, education, and infrastructure maintenance; linear unlock over 3 years.
18% Strategic Growth Fund: for enterprise integrations, BD partnerships, and institutional prover clusters; gradual unlock within 12 months, milestone-based.
Core Team & Early Contributors (23.5%)
20% Core team and early contributors: 25% cliff after 1 year, remainder vesting linearly over 24 months.
3.5% allocated to RISC Zero for zkVM R&D and research grants.
Investors (21.5%)
Strategic capital and technical backers; 25% cliff after 1 year, remainder vesting linearly over 24 months.
Community (6%)
Public sale & airdrop: strengthens community participation.
Public sale: 50% unlocked at TGE, 50% after 6 months.
Airdrops: 100% unlocked at TGE.
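As a worked example of the team/investor schedule above (25% cliff after one year, remainder vesting linearly over 24 months), the sketch below computes the unlocked fraction over time; the 200M figure simply applies the 20% core-team share to the 1B initial supply, and actual contract logic may differ.

```rust
// Unlocked fraction of a team/investor tranche under the schedule above:
// 0% before month 12, a 25% cliff at month 12, then the remaining 75%
// vesting linearly over the following 24 months. Illustrative only.
fn unlocked_fraction(months_since_tge: u32) -> f64 {
    if months_since_tge < 12 {
        0.0
    } else {
        let linear = (months_since_tge - 12).min(24) as f64 / 24.0;
        0.25 + 0.75 * linear
    }
}

fn main() {
    // 20% core-team share of the 1B initial supply = 200M ZKC.
    let team_allocation = 200_000_000.0_f64;
    for m in [0u32, 12, 24, 36] {
        println!(
            "month {:>2}: {:>12.0} ZKC unlocked",
            m,
            team_allocation * unlocked_fraction(m)
        );
    }
}
```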
ZKC is the core economic and security anchor of the Boundless protocol. It secures proof delivery through staking, ties issuance to real computational output via PoVW, underpins the universal ZK demand layer through collateralization, and empowers holders to guide protocol evolution. As proof demand grows and slashing/burning reduces circulating supply, more ZKC will be locked and removed from circulation—creating long-term value support through the dual forces of rising demand and contracting supply.
VIII. Team Background and Project Fundraising
RISC Zero was founded in 2021. The team consists of engineers and entrepreneurs from leading technology and crypto organizations such as Amazon, Google, Intel, Meta, Microsoft, Coinbase, Mina Foundation, and O(1) Labs. They built the world's first zkVM capable of running arbitrary code and are now building a universal zero-knowledge computing ecosystem on top of it.
Core Team
Jeremy Bruestle – Co-founder & CEO, RISC Zero
Jeremy is a veteran technologist and serial entrepreneur with over 20 years of experience in systems architecture and distributed computing. He previously served as Principal Engineer at Intel, co-founder and Chief Scientist at Vertex.AI, and co-founder and board member at Spiral Genetics. He co-founded RISC Zero in 2021 and serves as CEO, leading zkVM research and strategy to drive the adoption of zero-knowledge proofs in general-purpose computation.
Frank Laub – Co-founder & CTO, RISC Zero
Frank has deep expertise in deep learning compilers and virtual machine technologies. He worked on deep learning software at Intel Labs and Movidius and gained extensive engineering experience at Vertex.AI and Peach Tech. Since co-founding RISC Zero in 2021, he has served as CTO, leading the development of the zkVM core, the Bonsai network, and the developer tooling ecosystem.
Shiv Shankar – CEO, Boundless
Shiv has more than 15 years of experience in technology and engineering management, spanning fintech, cloud storage, compliance, and distributed systems. In 2025, he became CEO of Boundless, where he leads product and engineering teams to drive the marketization of zero-knowledge proofs and the development of cross-chain compute infrastructure.
Joe Restivo – COO, RISC Zero
Joe is an entrepreneur and operations expert with three successful exits; two of his companies were acquired by Accenture and GitLab. He also teaches risk management at Seattle University's business school. Joe joined RISC Zero in 2023 as COO, overseeing company operations and scaling.
Brett Carter – VP of Product, RISC Zero
Brett brings extensive product management and ecosystem experience. He previously worked as a senior product manager at O(1) Labs. Since joining RISC Zero in 2023, he has served as VP of Product, responsible for product strategy, ecosystem adoption, and integration with Boundless' marketplace initiatives.
Fundraising
In July 2023, RISC Zero completed a $40 million Series A round, led by Blockchain Capital. Seed round lead investor Bain Capital Crypto also participated, alongside Galaxy Digital, IOSG, RockawayX, Maven 11, Fenbushi Capital, Delphi Digital, Algaé Ventures, IOBC, Zero Dao (Tribute Labs), Figment Capital, a100x, and Alchemy.
IX. Competitive Analysis: zkVMs and ZK Marketplaces
A key competitor that combines both a zkVM and a zkMarketplace is Succinct, which consists of the SP1 zkVM and the Succinct Prover Network (SPN).
SP1 zkVM is a general-purpose zero-knowledge virtual machine built on RISC-V with an LLVM IR front-end, designed to support multiple languages, lower development barriers, and improve performance.
Succinct Prover Network (SPN) is a decentralized proving marketplace deployed on Ethereum, where tasks are allocated through staking and bidding, and the $PROVE token is used for payments, prover incentives, and network security.
In contrast, RISC Zero follows a "dual-engine" strategy: Bonsai provides an officially hosted Prover-as-a-Service with high performance and enterprise-grade stability, while Boundless builds an open decentralized proving marketplace that allows any GPU/CPU node to participate, maximizing decentralization and coverage, though with less consistent performance.
Comparison of RISC-V and Wasm
RISC-V and Wasm represent two major approaches to general-purpose zkVMs:
RISC-V is an open hardware-level instruction set with simple rules and a mature ecosystem, making it well-suited for circuit performance optimization and future verifiable hardware acceleration. However, it has limited integration with traditional Web application ecosystems.
Wasm, by contrast, is a cross-platform bytecode format with native multi-language support and strong compatibility for Web application migration. Its runtime ecosystem is mature, though its stack-based architecture imposes lower performance ceilings compared to RISC-V.
Overall, RISC-V zkVMs are better suited for high-performance and general-purpose compute expansion, while zkWasm holds stronger advantages in cross-language and Web-oriented use cases.
X. Conclusion: Business Logic, Engineering Implementation, and Potential Risks
Zero-knowledge (ZK) technology is evolving from a single-purpose scaling tool into a general foundation for trusted computation in blockchain. By leveraging the open RISC-V architecture, RISC Zero breaks free from EVM dependency, extending zero-knowledge proofs to general-purpose off-chain computation. This, in turn, has given rise to zk-Coprocessors and decentralized proof marketplaces such as Bonsai and Boundless. Together, these components form a scalable, tradable, and governable layer of computational trust, unlocking higher performance, stronger interoperability, and broader application scenarios for blockchain systems. That said, the ZK sector still faces significant near-term challenges. Following the peak of ZK hype in the primary market in 2023, the 2024 launch of mainstream zkEVM projects absorbed much of the secondary market's attention. Meanwhile, leading L2 teams largely rely on in-house prover designs, while applications such as cross-chain verification, zkML, and privacy-preserving computation remain nascent, with limited matching demand. This suggests that open proving marketplaces may struggle to sustain high order volumes in the short term, with their value lying more in aggregating prover supply in advance to capture demand when it eventually surges. Similarly, while zkVMs offer lower technical barriers, they face difficulty in directly penetrating the Ethereum ecosystem. Their unique value may lie instead in off-chain complex computation, cross-chain verification, and integrations with non-EVM chains. Overall, the evolutionary path of ZK technology is becoming clear: from zkEVMs' compatibility experiments, to the rise of general-purpose zkVMs, and now to decentralized proving marketplaces represented by Boundless. Zero-knowledge proofs are rapidly advancing toward commoditization and infrastructuralization. For both investors and developers, today may still be a phase of validation—but within it lies the foundation of the next industry cycle.
Disclaimer: This article includes content assisted by AI. While I have made every effort to ensure the accuracy and reliability of the information provided, there may still be errors or omissions. This article is for research and reference purposes only and does not constitute investment advice, solicitation, or any form of financial service. Please note that tokens and related digital assets carry significant risks and high volatility. Readers should exercise independent judgment and assume full responsibility before making any investment decisions.
Almanak Research Report: The Inclusive Path of On-Chain Quantitative Finance
In our earlier research report, "The Intelligent Evolution of DeFi: From Automation to AgentFi," we systematically mapped and compared the three phases of DeFi's intelligent evolution: automation, intent-centric co-pilots, and AgentFi. We noted that a significant share of current DeFAI projects still center their core capabilities on "intent-driven + single atomic interaction" swap transactions. Because these interactions involve no ongoing yield strategies, require no state management, and need no complex execution framework, they are better suited to intent-based co-pilots and cannot strictly be classified as AgentFi.
The Intelligent Evolution of DeFi: From Automation to AgentFi
This article benefited from insightful suggestions by Lex Sokolin (Generative Ventures), Stepan Gershuni (cyber.fund), and Advait Jayant (Aivos Labs), as well as valuable input from the teams behind Giza, Theoriq, Olas, Almanak, Brahma.fi, and HeyElsa. While every effort has been made to ensure objectivity and accuracy, certain perspectives may reflect personal interpretation; readers are encouraged to engage critically with the content. Among the various sectors of the current crypto landscape, stablecoin payments and DeFi applications stand out as two verticals with proven real-world demand and long-term value. At the same time, the flourishing development of AI agents, as the user-facing interface of the AI industry, positions them as a key intermediary between AI and users.