Decentralized AI computing network Gonka has successfully completed its v0.2.9 mainnet upgrade. According to Odaily, the upgrade was approved through an on-chain governance vote and executed at block height 2451000. The network has now fully transitioned to PoC v2 as its weight allocation mechanism, phasing out the previous PoC logic. The upgrade marks a step up in the maturity of Gonka's compute verification mechanism and its network governance.

With the upgrade in effect, Confirmation PoC has become the authoritative source of network results, improving the verifiability and certainty of computing contributions. The network has also entered a single-model operation phase, using one unified model and verification standard to reduce the noise introduced by heterogeneous hardware and thereby provide a more stable foundation for decentralized AI inference and training. Currently, only ML Nodes running Qwen/Qwen3-235B-A22B-Instruct-2507-FP8 on PoC v2-compatible images can participate in weight calculation. The transition from Epoch 158 to 159 is the first full operating cycle since PoC v2 was activated.
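For illustration only, the eligibility rule above can be expressed as a short check. The following is a minimal sketch in Python; the `MLNode` type, its fields, and the function name are hypothetical and not taken from Gonka's actual codebase.

```python
# Hypothetical sketch of the PoC v2 weight-calculation eligibility rule.
# The MLNode type and its fields are illustrative, not Gonka's real API.
from dataclasses import dataclass

REQUIRED_MODEL = "Qwen/Qwen3-235B-A22B-Instruct-2507-FP8"

@dataclass
class MLNode:
    model_id: str      # model served by the ML Node
    poc_version: int   # PoC version supported by the node's image

def eligible_for_weight_calculation(node: MLNode) -> bool:
    """A node counts toward weight calculation only if it serves the
    required model on a PoC v2-compatible image."""
    return node.model_id == REQUIRED_MODEL and node.poc_version >= 2

# Example: a node still on a PoC v1 image is excluded.
print(eligible_for_weight_calculation(MLNode(REQUIRED_MODEL, 1)))  # False
print(eligible_for_weight_calculation(MLNode(REQUIRED_MODEL, 2)))  # True
```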

Real-time data from GonkaScan shows that, as of February 2, 2026, Gonka's total network computing power is nearing the equivalent of 14,000 H100 units, putting it on the scale of a national-level AI computing cluster. Compared with early December 2025, when Bitfury announced a $50 million investment and the network stood at roughly 6,000 H100-equivalent units, this represents a compound monthly growth rate of about 52%, placing Gonka among the fastest-growing of comparable decentralized computing networks.
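As a quick check of the reported growth figure, the back-of-the-envelope calculation below assumes compound growth over the roughly two months between early December 2025 and February 2, 2026, using the rounded totals cited above.

```python
# Back-of-the-envelope check of the implied monthly growth rate.
# The two-month window and the rounded totals are assumptions based on
# the figures reported above.
start_units = 6_000    # approx. H100-equivalent units, early December 2025
end_units = 14_000     # approx. H100-equivalent units, February 2, 2026
months = 2

monthly_growth = (end_units / start_units) ** (1 / months) - 1
print(f"Implied compound monthly growth: {monthly_growth:.0%}")
# Prints ~53%, consistent with the reported ~52%.
```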

In terms of hardware composition, high-end GPUs such as the NVIDIA H100, H200, and A100 account for over 80% of the network's total computing power, underscoring Gonka's advantage in aggregating and scheduling high-performance compute. Network nodes currently span roughly 20 countries and regions across Europe, Asia, the Middle East, and North America, laying the groundwork for a global AI computing infrastructure with no single point of failure.