
Noya.ai: Agents in Prediction Markets

Author: 0xjacobzhao | https://linktr.ee/0xjacobzhao
In our previous Crypto AI series research reports, we have consistently emphasized the view that the most practical application scenarios in the current crypto field are concentrated in stablecoin payments and DeFi, while Agents are the key user-facing interface of the AI industry. Therefore, in the trend of Crypto and AI integration, the two most valuable paths are: in the short term, AgentFi built on existing mature DeFi protocols (basic strategies such as lending and liquidity mining, as well as advanced strategies such as Swap, Pendle PT, and funding rate arbitrage); and in the medium to long term, Agent Payment, centered on stablecoin settlement and relying on protocols such as ACP/AP2/x402/ERC-8004.
Prediction markets have become an undeniable new industry trend in 2025, with total annual trading volume surging from approximately $9 billion in 2024 to over $40 billion in 2025, a year-over-year increase of roughly 400%. This growth is driven by multiple factors: demand to trade on the uncertainty of macro-political events (such as the 2024 US election), the maturation of infrastructure and trading models, and a thawing regulatory environment (Kalshi's courtroom victory and Polymarket's return to the US). Prediction Market Agents are taking embryonic shape in early 2026 and are poised to become a steadily growing product category in the agent field over the coming year.
I. Prediction Markets: From Betting to a Global Truth Layer
A prediction market is a financial mechanism for trading on the outcomes of future events. Contract prices essentially reflect the market's collective judgment on the probability of an event occurring. Its effectiveness stems from the combination of crowd wisdom and economic incentives: in an environment of anonymous, real-money betting, scattered information is quickly integrated into price signals weighted by financial willingness, thereby significantly reducing noise and false judgments.
By the end of 2025, prediction markets had essentially consolidated into a duopoly dominated by Polymarket and Kalshi. According to Forbes, total trading volume in 2025 reached approximately $44 billion, with Polymarket contributing about $21.5 billion and Kalshi about $17.1 billion. Relying on its legal victory in the earlier election contract case, its first-mover compliance advantage in the US sports prediction market, and relatively clear regulatory expectations, Kalshi has expanded rapidly. The development paths of the two have now clearly diverged:
Polymarket adopts a hybrid CLOB architecture with "off-chain matching, on-chain settlement" and a decentralized settlement mechanism, building a globalized, non-custodial, high-liquidity market. After its compliant return to the US, it has formed an "onshore + offshore" dual-track operating structure.
Kalshi integrates into the traditional financial system, connecting to mainstream retail brokerages via API and attracting Wall Street market makers to participate deeply in macro and data-driven contract trading. Its product listings are constrained by traditional regulatory processes, so coverage of long-tail demand and breaking events lags relatively behind.
Apart from Polymarket and Kalshi, other competitive players in the prediction market field are developing mainly along two paths:
First is the compliance distribution path: embedding event contracts into the existing account systems of brokerages or large platforms, relying on channel coverage, clearing capability, and institutional trust to build an advantage (e.g., ForecastTrader by Interactive Brokers and ForecastEx, and FanDuel Predicts by FanDuel and CME).
Second is the on-chain performance and capital efficiency path. Taking Drift, the perpetual contract DEX in the Solana ecosystem, as an example, it added a prediction market module, B.E.T, on top of its original product line.
The two paths—traditional financial compliance entry and crypto-native performance advantages—together constitute the diversified competitive landscape of the prediction market ecosystem.

Prediction markets appear similar to gambling on the surface and are essentially zero-sum games. However, the core difference lies not in the form, but in whether they possess positive externalities: aggregating scattered information through real-money trading to publicly price real-world events, forming a valuable signal layer. Despite limitations such as entertainment-focused participation, the trend is shifting from gaming to a "Global Truth Layer"—with the access of institutions like CME and Bloomberg, event probabilities have become decision-making metadata that can be directly called by financial and enterprise systems, providing a more timely and quantifiable market-based truth.
II. Prediction Agents: Architecture & Strategy
Currently, Prediction Market Agents are entering an early practice stage. Their value lies not in "AI predicting more accurately," but in amplifying information processing and execution efficiency in prediction markets. The essence of a prediction market is an information aggregation mechanism, where price reflects the collective judgment of event probability; market inefficiencies in reality stem from information asymmetry, liquidity, and attention constraints. The reasonable positioning of a Prediction Market Agent is Executable Probabilistic Portfolio Management: converting news, rule texts, and on-chain data into verifiable pricing deviations, executing strategies in a faster, more disciplined, and lower-cost manner, and capturing structural opportunities through cross-platform arbitrage and portfolio risk control.
An ideal Prediction Market Agent can be abstracted into a four-layer architecture:
Information Layer: Aggregates news, social media, on-chain, and official data.
Analysis Layer: Uses LLMs and ML to identify mispricing and calculate Edge.
Strategy Layer: Converts Edge into positions through the Kelly criterion, staggered entry, and risk control.
Execution Layer: Completes multi-market order placement, slippage and Gas optimization, and arbitrage execution, forming an efficient automated closed loop (a minimal sketch of this loop follows below).
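To make the division of labor concrete, here is a minimal Python sketch of one pass through such a four-layer loop. The class and function names, the min_edge threshold, and the callback interfaces are illustrative assumptions for this article, not the API of NOYA or any specific platform.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MarketSnapshot:
    market_id: str
    implied_prob: float   # YES price in [0, 1], read as the market-implied probability
    liquidity_usd: float

@dataclass
class Signal:
    market_id: str
    est_prob: float       # agent's own probability estimate
    edge: float           # est_prob - implied_prob

def run_agent_cycle(
    fetch_markets: Callable[[], list[MarketSnapshot]],   # Information Layer
    estimate_prob: Callable[[MarketSnapshot], float],    # Analysis Layer (LLM / ML model)
    size_position: Callable[[Signal], float],            # Strategy Layer (Kelly, risk caps)
    place_order: Callable[[str, float], None],           # Execution Layer (order routing)
    min_edge: float = 0.05,
) -> None:
    """One pass through the four layers: data -> edge -> sizing -> execution."""
    for snap in fetch_markets():
        p = estimate_prob(snap)
        signal = Signal(snap.market_id, p, p - snap.implied_prob)
        if signal.edge < min_edge:
            continue                      # no tradable mispricing identified
        stake = size_position(signal)
        if stake > 0:
            place_order(signal.market_id, stake)
```

In practice, each callback would wrap a real data feed, a model call, a sizing rule such as the fractional Kelly formula discussed later in this section, and a venue-specific order router.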

The ideal business model design for Prediction Market Agents has different exploration spaces at different levels:
Bottom Infrastructure Layer: Provides multi-source real-time data aggregation, Smart Money address libraries, a unified prediction market execution engine, and backtesting tools, charging B2B/B2D fees to earn stable revenue that is independent of prediction accuracy.
Middle Strategy Layer: Accumulates modular strategy components and community-contributed strategies in an open-source or Token-Gated manner, forming a composable strategy ecosystem and capturing value.
Top Agent Layer: Runs live trading directly through trusted managed Vaults, monetizing capability through transparent on-chain records and a 20–30% performance fee (plus a small management fee).
The ideal Prediction Market Agent is closer to an "AI-driven probabilistic asset management product," earning returns through long-term disciplined execution and the capture of cross-market mispricing, rather than relying on the accuracy of any single prediction. The core logic of the diversified revenue structure of "Infrastructure Monetization + Ecosystem Expansion + Performance Participation" is that even if Alpha converges as the market matures, bottom-layer capabilities such as execution, risk control, and settlement retain long-term value, reducing dependence on the single assumption that "AI consistently beats the market."
Prediction Market Agent Strategy Analysis:
Theoretically, Agents have advantages in high-speed, 24/7, and emotion-free execution. In prediction markets, however, this is often difficult to convert into sustainable Alpha. Their effective application is mainly limited to specific structures, such as automated market making, cross-platform mispricing capture, and information integration for long-tail events; these opportunities are scarce and constrained by liquidity and capital.
Market Selection: Not all prediction markets are worth trading. Participation value depends on five dimensions: settlement clarity, liquidity quality, information advantage, time structure, and manipulation risk. It is advisable to prioritize the early stages of new markets, long-tail events with few professional players, and fleeting pricing windows caused by time zone differences, and to avoid high-profile political events, subjectively settled markets, and extremely illiquid markets.
Order Strategy: Adopt strict, systematic position management. The prerequisite for entry is that one's own probability estimate is significantly higher than the market-implied probability. Positions are sized by the fractional Kelly criterion (usually 1/10–1/4 Kelly), with single-event risk exposure capped at 15%, aiming for robust long-run growth with controllable risk, bearable drawdowns, and a compoundable edge (see the sizing sketch after this list).
Arbitrage Strategy: Arbitrage in prediction markets takes four main forms: cross-platform spreads (beware of settlement differences), Dutch Book arbitrage (high certainty but strict liquidity requirements), settlement arbitrage (relies on execution speed), and correlated-asset hedging (limited by structural mismatch). The key in practice is not spotting spreads but strictly aligning contract definitions and settlement standards to avoid pseudo-arbitrage caused by subtle rule differences.
Smart Money Copy-Trading: On-chain "Smart Money" signals are not suitable as a primary strategy because of lag, the risk of deliberately misleading positions, and small-sample issues. A more reasonable use is as a confidence adjustment factor that supports core judgments based on information and pricing deviations.
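As a concrete illustration of the sizing and Dutch Book rules above, the sketch below sizes a binary YES position with fractional Kelly (the 1/4 multiplier and 15% cap mirror the parameters in the text) and checks a set of mutually exclusive outcome prices for a Dutch Book. The function names and fee parameter are illustrative, not tied to any platform's API.

```python
def kelly_stake(est_prob: float, market_price: float, bankroll: float,
                kelly_multiplier: float = 0.25, max_exposure: float = 0.15) -> float:
    """Fractional Kelly stake for buying a binary YES contract.

    Buying YES at price m costs m and pays 1 if the event resolves YES, so the
    full Kelly fraction is f* = (p - m) / (1 - m) when the estimated probability
    p exceeds the market-implied probability m.
    """
    p, m = est_prob, market_price
    if not (0.0 < m < 1.0) or p <= m:
        return 0.0                                  # no edge or degenerate price: skip
    full_kelly = (p - m) / (1.0 - m)
    fraction = min(full_kelly * kelly_multiplier, max_exposure)
    return fraction * bankroll

def dutch_book_margin(outcome_prices: list[float], fee_rate: float = 0.0) -> float:
    """Guaranteed profit per $1 of payout from buying every outcome of a mutually
    exclusive, exhaustive market, if positive; otherwise there is no Dutch Book.

    Exactly one outcome pays 1, so buying all outcomes costs sum(prices) plus fees
    and always returns 1 at settlement.
    """
    return 1.0 - sum(outcome_prices) * (1.0 + fee_rate)

# Example: an estimated probability of 0.55 vs. a market price of 0.45,
# sized on a $10,000 bankroll with 1/4 Kelly and a 15% exposure cap.
stake = kelly_stake(0.55, 0.45, 10_000)      # 0.25 * (0.10 / 0.55) * 10,000 ≈ $455
margin = dutch_book_margin([0.46, 0.51])     # 0.03 > 0: a pre-slippage Dutch Book exists
```

Fractional Kelly trades some expected growth for much lower variance, which is why practitioners typically run 1/10 to 1/4 Kelly rather than full Kelly.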
III. Noya.ai: Intelligence to Action
As an early exploration of Prediction Market Agents, NOYA's core philosophy is "Intelligence That Acts." In on-chain markets, pure analysis and insight are not enough to create value: although dashboards, data analytics, and research tools help users understand "what might happen," a large amount of manual operation, cross-chain friction, and execution risk still sits between insight and execution. NOYA is built around this pain point: compressing the full chain of the professional investment process, "Research → Form Judgment → Execution → Continuous Monitoring," into a unified system so that intelligence translates directly into on-chain action.
NOYA achieves this goal by integrating three core levels:
Intelligence Layer: Aggregates market data, token analysis, and prediction market signals.
Abstraction Layer: Hides complex cross-chain routing; users only need to express Intent.
Execution Layer: AI Agents execute operations across chains and protocols based on user authorization.
In terms of product form, NOYA supports different participation methods for passive income users, active traders, and prediction market participants. Through designs like Omnichain Execution, AI Agents & Intents, and Vault Abstraction, it modularizes and automates multi-chain liquidity management, complex strategy execution, and risk control.
The overall system forms a continuous closed loop: Intelligence → Intent → Execution → Monitoring, achieving efficient, verifiable, and low-friction conversion from insight to execution while ensuring users always maintain control over their assets.

IV. Noya.ai's Product System Evolution 
Core Cornerstone: Noya Omnichain Vaults
Omnivaults is NOYA's capital deployment layer, providing cross-chain, risk-controlled automated yield strategies. Through simple deposit and withdrawal operations, users hand assets over to the system, which runs them continuously across multiple chains and protocols with no need for manual rebalancing or monitoring. The core goal is stable risk-adjusted returns rather than short-term speculation.
Omnivaults cover strategies like standard yield and Loop, clearly divided by asset and risk level, and support optional bonding incentive mechanisms. At the execution level, the system automatically completes cross-chain routing and optimization, and can introduce ZKML to provide verifiable proof for strategy decisions, enhancing the transparency and credibility of automated asset management. The overall design focuses on modularity and composability, supporting future access to more asset types and strategy forms.

NOYA Vault Technical Architecture: Each vault is uniformly registered and managed through the Registry; the AccountingManager is responsible for user shares (ERC-20) and NAV pricing; the bottom layer connects to protocols like Aave and Uniswap through modular Connectors and calculates cross-protocol TVL, relying on Value Oracle (Chainlink + Uniswap v3 TWAP) for price routing and valuation; trading and cross-chain operations are executed by Swap Handler (LiFi); finally, strategy execution is triggered by Keeper Multi-sig, forming a composable and auditable execution closed loop.
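To illustrate the AccountingManager's share and NAV bookkeeping described above, here is a conceptual Python sketch of standard vault share accounting. It is a generic model of how ERC-20 share vaults typically price deposits and withdrawals, not a transcription of NOYA's Solidity contracts, and it ignores fees, deposit queues, and delayed settlement.

```python
class VaultAccounting:
    """Minimal share/NAV bookkeeping in the spirit of an ERC-20 share vault."""

    def __init__(self) -> None:
        self.total_shares = 0.0
        self.total_assets = 0.0   # cross-protocol TVL, as reported by connectors/oracle

    def nav_per_share(self) -> float:
        if self.total_shares == 0:
            return 1.0            # bootstrap price for the first depositor
        return self.total_assets / self.total_shares

    def deposit(self, amount: float) -> float:
        """Mint shares proportional to the deposit at the current NAV per share."""
        shares = amount / self.nav_per_share()
        self.total_shares += shares
        self.total_assets += amount
        return shares

    def withdraw(self, shares: float) -> float:
        """Burn shares and pay out their pro-rata claim on vault assets."""
        amount = shares * self.nav_per_share()
        self.total_shares -= shares
        self.total_assets -= amount
        return amount

    def mark_to_market(self, new_total_assets: float) -> None:
        """Keeper/oracle updates TVL; NAV per share moves while shares stay fixed."""
        self.total_assets = new_total_assets
```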
Future Alpha: Prediction Market Agent
NOYA's most imaginative module: the Intelligence layer continuously tracks on-chain fund flows and off-chain narrative shifts, identifying news shocks, sentiment swings, and odds mismatches. When probability deviations are found in prediction markets such as Polymarket, the Execution-layer AI Agent can mobilize vault funds for arbitrage and rebalancing under user authorization. At the same time, Token Intelligence and the Prediction Market Copilot provide users with structured token and prediction market analysis, converting external information directly into actionable trading decisions.
Prediction Market Intelligence Copilot
NOYA is committed to upgrading prediction markets from single-event betting to systematically manageable probabilistic assets. Its core module integrates diverse data such as market implied probability, liquidity structure, historical settlements, and on-chain smart money behavior. It uses Expected Value (EV) and scenario analysis to identify pricing deviations and focuses on tracking position signals of high-win-rate wallets to distinguish informed trading from market noise. Based on this, Copilot supports cross-market and cross-event correlation analysis and transmits real-time signals to AI Agents to drive automated execution such as opening and rebalancing positions, achieving portfolio management and dynamic optimization of prediction markets.
Core Strategy Mechanisms include:
Multi-source Edge Sourcing: Fuses Polymarket real-time odds, polling data, and private and external information flows to cross-verify event implied probabilities, systematically mining information advantages that have not been fully priced in (a minimal screening sketch follows this list).
Prediction Market Arbitrage: Builds probabilistic and structural arbitrage strategies based on pricing differences across markets, contract structures, or similar events, capturing odds-convergence returns while controlling directional risk.
Auto-adjust Positions (Odds-Driven): When odds shift significantly due to changes in information, capital, or sentiment, the AI Agent automatically adjusts position size and direction, achieving continuous optimization in the prediction market rather than a one-time bet.
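As a minimal illustration of the EV-based screening behind these mechanisms, the sketch below fuses several probability sources with assumed weights and flags markets whose expected value per YES share clears a threshold. The weights, threshold, and function names are illustrative assumptions, not Copilot's actual parameters.

```python
def fused_probability(estimates: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted fusion of probability estimates from several sources
    (e.g. market odds, polls, a model); the weights here are illustrative."""
    total_weight = sum(weights[src] for src in estimates)
    return sum(weights[src] * p for src, p in estimates.items()) / total_weight

def expected_value_per_share(est_prob: float, yes_price: float) -> float:
    """EV of buying one YES share at yes_price: it pays 1 with probability est_prob."""
    return est_prob - yes_price

def screen_market(estimates: dict[str, float], weights: dict[str, float],
                  yes_price: float, min_ev: float = 0.03):
    """Return the fused probability, the EV per share, and whether it clears the bar."""
    p = fused_probability(estimates, weights)
    ev = expected_value_per_share(p, yes_price)
    return p, ev, ev >= min_ev

# Example: three sources disagree with a market priced at 0.40.
p, ev, tradable = screen_market(
    estimates={"market": 0.40, "polls": 0.52, "model": 0.48},
    weights={"market": 0.5, "polls": 0.3, "model": 0.2},
    yes_price=0.40,
)
# p ≈ 0.452, ev ≈ 0.052 per share, tradable == True
```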
NOYA Intelligence Token Reports
NOYA's institutional-grade research and decision hub aims to automate the professional crypto investment research process and directly output decision-grade signals usable for real asset allocation. The module presents clear investment stances, composite scores, core theses, key catalysts, and risk warnings in a standardized report structure, continuously updated with real-time market and on-chain data. Unlike traditional research tools, NOYA's intelligence does not stop at static analysis: it can be queried, compared, and probed with follow-up questions through AI Agents in natural language, and is fed directly to the execution layer to drive subsequent cross-chain trading, fund allocation, and portfolio management. This forms an integrated "Research → Decision → Execution" closed loop, making Intelligence an active signal source in the automated capital operation system.
NOYA AI Agent (Voice & Natural Language Driven)
The NOYA AI Agent is the platform's execution layer; its core role is to translate user intent and market intelligence directly into authorized on-chain actions. Users can express goals via text or voice, and the Agent plans and executes cross-chain, cross-protocol operations, compressing research and execution into one continuous process. It is the key product form through which NOYA lowers the barrier to DeFi and prediction market operations.
Users do not need to understand the underlying links, protocols, or transaction paths. They only need to express their goals through natural language or voice to trigger the AI Agent to automatically plan and execute multi-step on-chain operations, achieving "Intent as Execution." Under the premise of full-process user signing and non-custody, the Agent operates in a closed loop of "Intent Understanding → Action Planning → User Confirmation → On-chain Execution → Result Monitoring." It does not replace decision-making but is only responsible for efficient implementation and execution, significantly reducing the friction and threshold of complex financial operations.
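The closed loop described above can be sketched as a simple control flow. The step names follow the text, while the planner, wallet, executor, and monitor interfaces are hypothetical stand-ins, not NOYA's actual SDK.

```python
def run_intent_loop(intent_text: str, planner, wallet, executor, monitor) -> dict:
    """Non-custodial intent loop: the agent plans and executes, the user signs.

    Steps mirror the text: Intent Understanding -> Action Planning ->
    User Confirmation -> On-chain Execution -> Result Monitoring.
    """
    # 1. Intent Understanding: parse natural language / voice into a structured goal.
    goal = planner.parse_intent(intent_text)

    # 2. Action Planning: expand the goal into concrete cross-chain steps.
    plan = planner.build_plan(goal)          # e.g. [bridge, swap, deposit, place_bet]

    # 3. User Confirmation: nothing moves without an explicit user signature.
    if not wallet.confirm_and_sign(plan):
        return {"status": "rejected", "plan": plan}

    # 4. On-chain Execution: submit each signed step, stopping on the first failure.
    receipts = []
    for step in plan:
        receipt = executor.submit(step)      # assumed to return a dict with "success"
        receipts.append(receipt)
        if not receipt.get("success", False):
            break

    # 5. Result Monitoring: track settlement and report the outcome back to the user.
    return monitor.summarize(receipts)
```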
Trust Moat: ZKML Verifiable Execution
Verifiable Execution aims to build a verifiable closed loop for the entire process of strategy, decision-making, and execution. NOYA introduces ZKML as a key mechanism to reduce trust assumptions: strategies are calculated off-chain and verifiable proofs are generated; corresponding fund operations can only be triggered after on-chain verification passes. This mechanism can provide credibility for strategy output without revealing model details and supports derivative capabilities such as verifiable backtesting. Currently, relevant modules are still marked as "under development" in public documents, and engineering details remain to be disclosed and verified.
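Since the module is still marked as under development, the following is only a schematic Python sketch of the verify-then-execute pattern described here: the strategy runs off-chain and produces a proof, and vault operations are triggered only after on-chain verification passes. The prover, verifier, and vault interfaces are hypothetical.

```python
def zkml_gated_execution(strategy_model, market_state, prover, onchain_verifier, vault):
    """Verify-then-execute: vault actions are gated on a valid proof that the
    off-chain strategy computation was performed correctly."""
    # Off-chain: run the strategy model and generate a proof of correct inference.
    decision = strategy_model.decide(market_state)        # e.g. target allocations
    proof = prover.prove(strategy_model, market_state, decision)

    # On-chain: a verifier contract checks the proof against the committed model.
    if not onchain_verifier.verify(proof, public_inputs=decision):
        raise PermissionError("Proof rejected: strategy output will not be executed")

    # Only a verified decision may trigger fund operations in the vault.
    return vault.execute(decision)
```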
Future 6-Month Product Roadmap
Prediction Market Advanced Order Capabilities: Improve strategy expression and execution precision to support Agent-based trading.
Expansion to Multiple Prediction Markets: Integrate more platforms beyond Polymarket to expand event coverage and liquidity.
Multi-source Edge Information Collection: Cross-verify against bookmaker and market odds to systematically capture underpriced probability deviations.
Clearer Token Signals & Advanced Reports: Output trading signals and in-depth on-chain analysis that can directly drive execution.
Advanced On-chain DeFi Strategy Combinations: Launch complex strategy structures to improve capital efficiency, returns, and scalability.
V. Noya.ai's Ecosystem Growth
Currently, Omnichain Vaults are in the early stage of ecosystem development, and their cross-chain execution and multi-strategy framework have been verified.
Strategy & Coverage: The platform has integrated mainstream DeFi protocols such as Aave and Morpho, supports cross-chain allocation of stablecoins, ETH, and their derivative assets, and has a preliminary layered-risk strategy lineup (e.g., Basic Yield vs. Loop Strategy).
Development Stage: Current TVL is limited; the core goals are functional validation (MVP) and refinement of the risk control framework. The architecture is highly composable, reserving interfaces for the later introduction of complex assets and advanced Agent scheduling.
Incentive System: Kaito Linkage & Space Race Dual Drive
NOYA has built a growth flywheel anchored on "Real Contribution," tightly binding content narrative to liquidity.
Ecosystem Partnership (Kaito Yaps): NOYA landed on the Kaito Leaderboards with a composite "AI × DeFi × Agent" narrative, configuring an unlocked incentive pool of 5% of total supply and reserving an additional 1% for the Kaito ecosystem. The mechanism tightly binds content creation (Yaps) with Vault deposits and Bond locking: weekly user contributions convert into Stars that determine rank and multipliers, reinforcing narrative consensus and long-term capital stickiness at the incentive level.
Growth Engine (Space Race): Space Race is NOYA's core growth flywheel, replacing the traditional "capital scale first" airdrop model by using Stars as long-term entitlement credentials. It folds Bond-locking bonuses, two-way 10% referral incentives, and content distribution into a weekly Points system, filtering for highly engaged, high-conviction long-term users and continuously improving community structure and token distribution.
Community Building (Ambassador): NOYA runs an invitation-only ambassador program, offering qualified participants access to the community round and contribution-based performance rebates (up to 10%).
Currently, Noya.ai has accumulated over 3,000 on-chain users, and its X platform followers have exceeded 41,000, ranking in the top five of the Kaito Mindshare list. This indicates that NOYA has occupied a favorable attention niche in the prediction market and Agent track.
In addition, Noya.ai's core contracts have passed dual audits by Code4rena and Hacken and are onboarded to Hacken Extractor for on-chain monitoring.
VI. Tokenomics Design and Governance
NOYA adopts a Single-token ecosystem model, with $NOYA as the sole value carrier and governance vehicle.
NOYA employs a Buyback & Burn value capture mechanism. The value generated by the protocol layer in products such as AI Agents, Omnivaults, and prediction markets is captured through mechanisms like staking, governance, access permissions, and buyback & burn, forming a value closed loop of Use → Fee → Buyback, converting platform usage into long-term token value.
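As a schematic of the Use → Fee → Buyback loop, here is a tiny Python sketch. The fee split and the market and token interfaces are illustrative assumptions; NOYA's actual parameters are not disclosed in this article.

```python
def route_protocol_fees(fee_revenue_usd: float, buyback_share: float, market, token) -> float:
    """Use -> Fee -> Buyback: product usage generates fees, a share of which buys
    the token back on the market; burning it reduces circulating supply."""
    buyback_budget = fee_revenue_usd * buyback_share   # split ratio is an assumption
    tokens_bought = market.buy(token, usd_amount=buyback_budget)
    token.burn(tokens_bought)                          # permanently removes supply
    return tokens_bought
```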
The project takes Fair Launch as its core principle. It introduced no angel or VC investment, completing distribution instead through a low-valuation ($10M FDV) public community round (Launch-Raise), Space Race, and airdrops. It deliberately reserves asymmetric upside for the community, tilting the token distribution toward active users and long-term participants; team incentives come mainly from long-term locked token allocations.
Token Distribution:
Total Supply: 1 Billion (1,000,000,000) NOYA
Initial Float (Low Float): ~12%
Valuation & Financing (The Raise): Amount Raised: $1 Million; Valuation (FDV): $10 Million

VII. Prediction Agent Competitive Analysis
Currently, the Prediction Market Agent track is still in its early stages with a limited number of projects. Representative ones include Olas (Pearl Prediction Agents), Warden (BetFlix), and Noya.ai.
From the perspective of product form and user participation, they represent three distinct paths in the current prediction market agent track:
Olas (Pearl Prediction Agents): Agent productization and runnable delivery. Users participate by running an automated prediction Agent: prediction market trading is packaged into a runnable Agent, users inject capital and run it, and the system automatically handles information acquisition, probability estimation, betting, and settlement. The need to install and operate additional software makes it relatively unfriendly to ordinary users.
Warden (BetFlix): Interactive distribution and consumer-grade betting platform. It attracts users through a low-threshold, highly entertaining interactive experience, taking an interaction- and distribution-oriented path that lowers participation costs with gamified, content-driven frontends and emphasizes the consumption and entertainment attributes of prediction markets. Its competitive advantage comes mainly from user growth and distribution efficiency rather than depth at the strategy or execution layer.
NOYA.ai: Centered on "fund custody + delegated strategy execution," abstracting prediction market and DeFi execution into asset management products through Vaults and offering participation with low operational and mental overhead. If the Prediction Market Intelligence and Agent execution modules are layered on later, it is expected to form an integrated "Research → Execution → Monitoring" workflow.

Compared with AgentFi projects that have delivered clear products, such as Giza and Almanak, NOYA's DeFi Agent is still at a relatively early stage. NOYA's differentiation, however, lies in its positioning and entry point: it enters the same execution and asset management narrative at a fair-launch valuation of about $10M FDV, giving it a notable valuation discount and room to grow at the current stage.
NOYA: An AgentFi project built around the Omnichain Vault as an asset management wrapper. Current delivery focuses on infrastructure layers such as cross-chain execution and risk control; upper-layer Agent execution, prediction market capabilities, and ZKML-related mechanisms are still in development and verification.
Giza: Can directly run asset management strategies (ARMA, Pulse); currently the most complete AgentFi product.
Almanak: Positioned as AI Quant for DeFi, outputting strategy and risk signals through models and quantitative frameworks. Mainly targets professional fund and strategy management needs, emphasizing methodological rigor and reproducibility of results.
Theoriq: Centered on a multi-agent collaboration (Agent Swarms) strategy and execution framework, emphasizing scalable agent collaboration systems and a medium-to-long-term infrastructure narrative, leaning toward bottom-layer capability building.
Infinit: An Agentic DeFi terminal leaning toward the execution layer. By orchestrating "Intent → multi-step on-chain operation" workflows, it significantly lowers the execution threshold of complex DeFi operations, and users perceive product value relatively directly.
VIII. Summary: Business, Engineering and Risks
Business Logic:
NOYA is a rare target in the current market that stacks the AI Agent × Prediction Market × ZKML narratives and further combines them with the Intent-Driven Execution product direction. At the asset pricing level, it launches at an FDV of approximately $10M, significantly below the $75M–$100M valuation range common for comparable AI / DeFAI / prediction-related projects, creating a structural valuation gap.
Design-wise, NOYA attempts to unify Strategy Execution (Vault / Agent) and Information Advantage (Prediction Market Intelligence) within the same execution framework, and establishes a value capture loop through protocol revenue return (fees → buyback & burn). Although the project is still early, the combination of stacked narratives and a low starting valuation makes its risk-return profile closer to a high-odds, asymmetric bet.
Engineering Implementation:
At the verifiable delivery level, NOYA's core function currently online is Omnichain Vaults, providing cross-chain asset scheduling, yield strategy execution, and delayed settlement mechanisms. The engineering implementation is relatively foundational. The Prediction Market Intelligence (Copilot), NOYA AI Agent, and ZKML-driven verifiable execution emphasized in its vision are still in the development stage and have not yet formed a complete closed loop on the mainnet. It is not a mature DeFAI platform at this stage.
Potential Risks & Key Focus Points:
Delivery Uncertainty: The technological leap from "basic Vault" to "all-round Agent" is large; watch for roadmap delays or ZKML implementation falling short of expectations.
Potential System Risks: Contract security, cross-chain bridge failures, and oracle disputes specific to prediction markets (such as ambiguous rules making outcomes impossible to adjudicate). Any single point of failure could cause loss of funds.

Disclaimer: This article was created with the assistance of AI tools such as ChatGPT-5.2, Gemini 3, and Claude Opus 4.5. The author has done their best to proofread and ensure the information is accurate, but omissions may remain. Note in particular that in the crypto asset market, project fundamentals often diverge from secondary market price performance. The content of this article is for information aggregation and academic/research exchange only; it does not constitute investment advice and should not be taken as a recommendation to buy or sell any token.